Introduction of home exercise program for current circumstance worldwide
Due to the current pandemic of coronavirus disease 2019 (COVID-19), elderly people tend to show deterioration of physical and cognitive function and exacerbation of nutritional status, leading to the progression of sarcopenia and frailty. A recent development is the National Center for Geriatrics and Gerontology-Home Exercise Program for Older People (NCGG-HEPOP) 2020. It comprises six packs for health maintenance: balance improvement, strengthening, inactivity prevention, feeding/swallowing improvement, nutrition improvement, and cognition. The pandemic may bring decreased social interaction, decreased physical activity, and the added physical stress of viral infection. NCGG-HEPOP will hopefully prevent such exacerbation in the light of geriatrics and rehabilitation medicine.
How much were exercise and activity actually reduced by COVID-19? There are data for the 30 days after the WHO pandemic declaration. The average step count of 455 million subjects in 187 countries decreased by 1,432 steps (27.3%). 8 An internet survey of 2,381 patients aged >18 years reported reduced physical activity in 43% of subjects. 9 Another study used wearable devices to track individual physical activity levels: the average step counts of 30 million users in mid-March 2020 were 7%-38% lower than in the same period of the previous year in almost all countries. An internet survey of 1,600 elderly people aged >65 years showed that physical activity time per week decreased by about 30% (about 60 minutes) after the pandemic. 10 Furthermore, during the 40-day emergency period from April, step counts decreased by up to 30%. 11 During this period, 41% of outpatients refrained from visiting the hospital for dementia rehabilitation at NCGG facilities.
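As a quick consistency check, the two reported figures for the worldwide step count imply a pre-pandemic baseline; this is a back-of-the-envelope derivation, not a number stated in the source:

\[
\text{baseline} \approx \frac{1432~\text{steps}}{0.273} \approx 5245~\text{steps per day}
\]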
Research on the influence of COVID-19 on various subjects has progressed in many fields. It has shown that subjects who had underlying diseases were more likely to become seriously ill. 12 Further, older people were much more likely to develop severe disease compared with younger people, and the fatality rate increased dramatically with advancing age. Numerous clusters and frequent deaths of the elderly have occurred in hospitals, long-term care facilities, community gatherings, and so on. 13,14 In addition, significant news has spread around the world, including the infection and sudden death of older people who had been active in their daily lives, the lockdown of many cities, the collapse of medical care, and the withdrawal of ventilators from the elderly. The last of these reflects situations in which clinical triage procedures were applied to the elderly. 11 This situation involves major problems. For many years, sincere research and practice on ageism have been carried out; within this work, optimal medical and nursing care has been provided to extend healthy longevity, maintain daily activities, and improve QOL. However, it is an astonishing fact that such triage has had to be used owing to COVID-19 infection. 15 In other words, it showed the impact of overturning the medical ethics that underlie geriatrics and rehabilitation medicine.
The COVID-19 pandemic causes indirect stress, such as inactivity, decreased social interaction, and decreased physical activity, as well as the direct physical stress of viral infection. There is no doubt that, if this situation continues, it will exacerbate frailty. Adequate methods to manage both types of stress are therefore important, 16 and NCGG-HEPOP has been tried as a concrete measure.
The appropriate NCGG-HEPOP pack can be selected by applying a flowchart to the situation of each elderly person. 6,7 There are three main packs related to exercise: i) the balance improvement pack, ii) the strengthening pack, and iii) the inactivity prevention pack. In addition, there are two packs related to swallowing/oral frailty, namely iv) the feeding and swallowing improvement pack and v) the nutrition improvement pack, and vi) the cogni-pack, related to cognitive frailty. In this way, these six packs can cover the problems of the elderly (Figure 1). The benefits of HEPOP include: i) easily clarifying mental and physical problems with a flowchart, ii) covering six different aspects and concepts, and iii) allowing an appropriate exercise to be selected and conducted quickly. In the light of sports medicine, these exercise choices are effective for frail and sarcopenic elderly people. 17,18 When aiming for muscle strengthening by resistance exercise, it is certainly ideal to carry out a high load of 70%-80% of the one-repetition maximum (1RM). However, recent studies on elderly people have reported that muscle strengthening can be obtained even with a low output of about 20% of 1RM, 19 and that even about 16% can promote muscle protein synthesis. 20 Prevention of dysphagia is also important for the prevention of frailty and sarcopenia. 21,22 In this way, HEPOP would be expected to provide comprehensive coverage.
Finally, the development of vaccines and therapeutic agents for COVID-19 is in progress. However, short-term improvement is not easy. The crucial point would be an adequate balance between measures for stability and for activity. 23 In the with- and/or post-COVID-19 era from now on, the elderly will have to manage reduced mental and physical reserves and reduced resilience to stress. 24 Home exercise programs will hopefully become a useful tool for building resilience to various changes.
"year": 2021,
"sha1": "3cf6374105d8b81ad8ea08a36ac1150a89b7a1bb",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/IJCAM/IJCAM-14-00560.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a9bcc292c28f344a54bf30c9b25a85ce9079303a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Postpartum home care and its effects on mothers’ health: A clinical trial
Background: Postpartum home care plays an important role in prevention of postpartum complications. Regular visits of mothers during this period are imperative. This study aimed to provide postpartum home care for mothers to assess its effects on mothers’ health in Iran. Materials and Methods: This study was carried out in two phases. First, a comprehensive postpartum home care program was compiled by performing a comparative study, using the available guidelines in this regard in different countries and based on the opinions of the experts. Next, a clinical trial was carried out on 276 women who gave birth in the university hospitals affiliated to Shahid Beheshti University of Medical Sciences. There were 92 mothers in the intervention and 184 in the control group. The intervention group mothers were provided with postpartum home care service while the control group did not receive such a service. Results: Outcome assessment at 60 days’ postpartum revealed a significant difference between the two groups in terms of the use of supplements, birth control methods, postpartum depression, breastfeeding problems, constipation, and fatigue (P < 0.05). No significant differences were noted between the two groups with regard to hospitalization, hemorrhoids, backache and lumbar pain (P > 0.05). Conclusion: The postpartum home care program had a positive effect on some aspects of the mothers’ health status and their satisfaction in our society.
INTRODUCTION
The postpartum period starts 1 h after delivery and lasts until day 42, and is a critical period for mothers' health. [1] Women experience various physical, mental, and emotional changes during this period, which may interfere with their daily routine. [2] A wide range of complications have been reported during this period, including physical, mental, and emotional problems such as fatigue, concerns with regard to sexual intercourse, hemorrhoids, constipation, breastfeeding problems, anxiety, stress, depression, sleep disorders, bleeding, urinary incontinence, and posttraumatic stress disorder. Women's health after delivery is the most important factor affecting the health of their children. [3] Home-based services allow mothers to receive care in the convenience of their home. [8] Postpartum home care refers to the measures taken to prevent complications, promote the health of mothers during this period, and improve the quality of their relationship with their newborns. This service helps mothers to better cope with their new, stressful life and empowers them to better manage taking care of themselves and their infants. [9] Moreover, postpartum home care may have unique advantages for the prevention of mental and psychological complications in mothers. [10] Women who received midwifery care services at home reported better quality of services compared to those receiving these services at the hospital. Furthermore, women who received postpartum home visits had a higher level of satisfaction with the services. [11] Considering the limited number of studies on postpartum home care services in Iran, this study aimed to design and provide a postpartum home care service for mothers to assess its efficacy. The results of this study can help promote mothers' health and their satisfaction, since it is believed that postpartum home services can significantly decrease the common complications in this period and increase mothers' satisfaction.
MATERIALS AND METHODS
This study was conducted in two phases. In the first phase, a comparative study was conducted to review the postpartum care guidelines of different countries. After brainstorming with experts in the field, a consensus with regard to the necessary guidelines was achieved. A search was carried out in Google Scholar, Google, the National Guideline Clearinghouse, the World Health Organization (WHO), PubMed, NICE, and the Cochrane databases to find postpartum home care guides in different countries. The more comprehensive guidelines were selected. Next, based on the opinions of gynecologists, obstetricians, and community medicine specialists, some modifications were made to the guidelines (according to the conditions and needs of mothers in Iran); these changes were applied to the national care protocol for mothers' health, which is routinely provided for mothers in the health centers as a postpartum care service.
In the second phase, a clinical trial was carried out to assess the effect of home visits on mothers' health; it was registered in the IRCT (code 2013060313565N1). The compiled instruction was used for the home visits of mothers. The study population included women who gave birth in Taleghani, Shohada, Mahdiyeh, and Imam Hossein Hospitals. A total of 276 mothers who met the following inclusion criteria and signed written informed consent forms were consecutively selected and entered into the study [Figure 1]. The inclusion criteria were: Iranian nationality, no underlying disease, no pregnancy hypertension, no preeclampsia, no gestational diabetes, single birth with normal birth weight, no congenital anomaly, Edinburgh Postnatal Depression Scale score of <10, no history of depression, and not taking antidepressants. Sixty-nine participants were chosen from each hospital. The selected mothers filled out the Edinburgh Postnatal Depression questionnaire. Those scoring over 10 on this questionnaire and those who had suicidal thoughts were excluded and referred to a psychiatrist. The exclusion criteria were unwillingness to participate or wishing to quit for any reason. A total of 92 mothers were evaluated in the intervention group and 184 in the control group (two controls per mother in the intervention group). The intervention group mothers received postpartum home care service, while the control group did not receive such a service. The data were collected by a trained midwife through observation, interview, history taking, and clinical examination of the mothers. The collected data were recorded in the respective questionnaire and included demographics and pregnancy information. The questionnaire was filled out for all mothers. For those receiving postpartum home care, the postpartum care checklist was filled out on days 3-5 and 13-15 after childbirth. The content of care comprised the items mentioned in the Ministry of Health instruction [12] and included examinations, observations, questions, and necessary instructions and training with regard to personal hygiene; mental, psychological, and sexual health; oral and dental health; risk factors; common complaints in the postpartum period; nutrition in this period and use of supplements; breastfeeding, its related problems, and its duration; care for the infant; and contraception. Injection of anti-D globulin, if indicated, was emphasized in the first phase of care; dental examination, the Papanicolaou test, and birth control were emphasized in the second phase of care. Instructions on how to manage postpartum complications and exercise activities in this period were also added to the respective national protocol. Outcome assessment was done on day 60, and the health parameters of the mothers in the intervention and control groups were assessed and compared. These parameters included the use of supplements by mothers, postpartum depression, hospitalization due to postpartum complications, common physical complaints and their significance, and the satisfaction of mothers with the emotional, communication, and educational services received in the two groups. To diagnose postpartum depression, the Edinburgh Postnatal Depression questionnaire was used. This questionnaire, designed by Cox et al. in 1987, [13] has 10 questions and is widely used as a Postnatal Depression Scale. Its sensitivity, specificity, and predictive value have been previously confirmed for use in Iranian populations. [14,15]
In this questionnaire, scores of <10 indicate no depression, scores of 10-12 indicate mild depression, and scores of over 13 together with the presence of suicidal thoughts indicate severe depression. [16] To determine the level of satisfaction of mothers with the services received, the questionnaire designed by Mirmolaei et al. [1] was used. To assess the validity of the questionnaire, content validity was evaluated. For this purpose, 10 experts, who were scientific faculty members of the School of Nursing and Midwifery of Tehran University of Medical Sciences, confirmed the content of the questionnaire. Reliability was assessed by test-retest, and the correlation coefficient was calculated to be 0.8. [1] The data were analyzed using the Chi-square test, Fisher's exact test, and the t-test. Descriptive statistics were also applied. All statistical analyses were carried out using SPSS version 18 (IBM Corp.). A P value of 0.05 or less was considered statistically significant.
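For illustration, the scoring bands above can be expressed as a small decision rule. The following Python sketch is a hypothetical helper, not an instrument used in the study; note that the source leaves a score of exactly 13 between the stated bands, so its handling here is an assumption.

```python
# Hypothetical helper mirroring the EPDS cutoffs described above; not part
# of the study's instruments. A score of exactly 13 is not assigned a band
# in the source text, so its handling here is an assumption. Suicidal
# ideation is treated as severe regardless of the numeric score.
def classify_epds(score: int, suicidal_thoughts: bool = False) -> str:
    if suicidal_thoughts or score > 13:
        return "severe depression"
    if 10 <= score <= 12:
        return "mild depression"
    if score < 10:
        return "no depression"
    return "unclassified in the source (score of exactly 13)"

print(classify_epds(8))                          # no depression
print(classify_epds(11))                         # mild depression
print(classify_epds(9, suicidal_thoughts=True))  # severe depression
```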
RESULTS
The guidelines of the WHO, Australia, the United Kingdom, the United States, Canada, and Latin America were selected since they were more comprehensive than the others. Parts of these guidelines were added to the national protocol for mothers' health based on the opinions of the experts and stakeholders; the added sections discussed the management of common postpartum complications and the conduction of sport activities and physical exercise. For the diagnosis of postpartum depression during the visits on days 13-15, the Edinburgh Postnatal Depression questionnaire was used.
In the second phase of the study, the following results were obtained in the two groups. No significant differences existed between the intervention and control groups for the demographic factors and some factors related to depression, and the two groups were similar in this respect [Table 1]. Assessment of the health parameters of the mothers and newborns at 60 days after delivery revealed the following results. With regard to the use of supplements by the mothers in the intervention group, 88% and 12% reported regular and irregular use of supplements (hematinic and multivitamin tablets) during the first 60 days postpartum, respectively; all mothers in this group used supplements. In the control group, 73.4% and 6.19% reported regular and irregular use of supplements during the first 60 days, respectively, and 13% used no supplement. The difference in this regard between the intervention and control groups was significant (P < 0.05). In terms of maternal hospitalization due to postpartum complications, although the hospitalization rate was lower in the visited group, the difference was not statistically significant (P > 0.05).
With regard to birth control to keep a minimum of 2-year interval between pregnancies, at 60 days postpartum, birth control was reported by 89.1% in the intervention group and 60.8% in the control group; the difference in this regard between the two groups was statistically significant (P < 0.05).
In terms of postpartum depression, 92.4% gained a score of <10 (no depression), 6.5% gained a score of over 13 (severe depression), and 1.1% gained a score of 10-13 (mild depression) in Edinburgh Postnatal Depression questionnaire. In the control group, 81.4% gained a score of <10 (no depression), 11.4% gained a score of 10-13 (mild depression), and 7.6% gained a score of over 13 (severe depression). The difference in this regard between the two groups was statistically significant (P < 0.05).
The two groups of intervention and control were not significantly different in terms of frequency of hemorrhoids (P > 0.05).
The frequency of backache and lumbar pain was lower in the intervention group, but the difference in this regard between the two groups was not statistically significant (P > 0.05).
Of the intervention group mothers, 10.9% complained of fatigue; this rate was 28.8% in the control group. The difference in this regard between the two groups was statistically significant (P < 0.05).
With regard to breastfeeding problems following lactation, 12% in the intervention group and 27.7% in the control group complained of congestion, mastitis, and cracked nipples; the difference in this regard was significant between the two groups (P < 0.05).
With regard to postpartum constipation, 13% in the intervention and 26.1% in the control group complained of constipation; the difference in this regard was significant between the two groups (P < 0.05).
Control mothers were evaluated in terms of seeking postpartum care. Based on the results, of 184 mothers who did not receive postpartum home care, 130 (70.7%) presented to health centers at least once during the first 60 days after delivery and 54 (29.3%) did not present to any health center during this period.
The mean scores (and standard deviations) of significance, receipt, and satisfaction of mothers with the emotional, communication, and educational postpartum care services received at home were higher compared to controls, and these differences were statistically significant [Table 2].
DISCUSSION
The results of the first phase of the study revealed that no solution had been provided in the national protocol (currently implemented in health centers) with regard to the prevention or treatment of some common postpartum complications such as fatigue, backache, lumbar pain, headache, and constipation; the type of sport activities and exercise and the time of starting them; or screening for postpartum depression. In Australia and the United States, screening for postpartum depression is performed using the Edinburgh questionnaire; however, the NICE guideline of the UK did not recommend the use of this questionnaire; instead, it recommended assessing depression by two simple questions, i.e., "Have you ever felt hopeless or depressed in the past month?" and "Were you interested in doing some chores in the past month?" These questions can help assess the mothers' mood. [16,17] In the guidelines of the South American countries, the UK, and the WHO, some solutions have been offered for common postpartum complications such as fatigue, headache, backache, lumbar pain, and constipation. [18-22] In the Australian guideline, some recommendations existed on how to manage fatigue and constipation, but no recommendation was given for the management of headache, backache, or lumbar pain. [20] With regard to exercise activities in the days after delivery, detailed explanations had been provided in the Canadian, Australian, and American guidelines. [22-24] However, no instruction was given in the NICE health guide of the UK with regard to postpartum physical activities. [16,17] Based on the results of the second phase of the current study, most pregnancies occurred within the safe age range for pregnancy. In general, of all mothers, 64% had vaginal delivery and 36% had cesarean section, which is in agreement with the results of other studies in Iran. The results of the DHS study in 2010 showed that the prevalence of cesarean section in Iran was much higher than that in the European countries and also higher than the acceptable range recommended by the WHO (5%-15%). [25] Our results showed that the use of supplements and modern birth control methods was higher in the intervention group than in controls, and this difference was statistically significant. Significance, receipt, and satisfaction of mothers with the emotional, communication, and educational services provided were higher in the intervention group at 60 days postpartum. These findings are in accordance with those of a study by Mirmolaei et al. in 2011 in Tehran. [1] Similarly, higher satisfaction of mothers with the services provided at home was also reported in a cluster-randomized trial by Christie and Bunting in 2011. [26] However, Ian, in a study in 2011, showed that home visits had no significant effect on satisfaction with services. [27] In the current study, the prevalence of postpartum depression was lower in the intervention group compared to the control group, and this difference was statistically significant. With regard to postpartum depression, our findings were in agreement with those of Christie and Bunting; they indicated that the rate of postpartum depression assessed by the Edinburgh questionnaire at 8 weeks postpartum decreased in the group that received home care. In contrast, Ian showed that home visits had no significant effect on postpartum depression. [27]
One strength of the current study was that it enabled visits of mothers at home in the first week after delivery, which had a significant effect on mothers' coping with the new situation, because most problems related to the mother and newborn occur in the first 10 days postpartum.
The current study also had a limitation. Mothers were only visited twice, at 3-5 and 13-15 days postpartum, and no other visits were made until day 60. However, the mothers were allowed to contact the researcher in case of any problem, and midwives and physicians were ready to answer the mothers' questions.
CONCLUSION
Based on the results of this study, postpartum home visits met the needs of the mothers to a great extent and decreased the prevalence of some common physical postpartum complications. [28] The intervention group had a higher frequency of supplement use and higher satisfaction with the service provided. Thus, it is recommended that health authorities consider providing mothers with home care services to promote their health. This intervention can also be included in the national protocol of mothers' health.
"year": 2017,
"sha1": "86590bef22f56c268eb77d499481e1d105cc5752",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jrms.jrms_319_17",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a28d6a0fd5cb33f55fed311caddfa9ea36ae2467",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ETHANOLIC EXTRACT OF LEAVES OF NEWBOULDIA LAEVIS ATTENUATES GLYCOSYLATION OF HEMOGLOBIN AND LIPID PEROXIDATION IN DIABETIC RATS
Diabetes is one of the major health problems around the world and the incidence of this metabolic disorder is on the increase. Current therapeutic interventions have not done much in preventing complications of diabetes. Therefore this study investigated the effect of ethanolic extract of Newbouldia laevis leaves (NLet) on lipid peroxidation and glycosylation of hemoglobin, which are pathological indicators of diabetes mellitus. Diabetes was induced in Wistar rats by intravenous injection of streptozotocin (60 mg kg−1). Diabetic rats were then treated orally with NLet for 28 days. After the treatment, the concentration of malondialdehyde (MDA) in the liver, kidney and pancreas of the rats was estimated. Fasting blood glucose was determined and an oral glucose tolerance test was also carried out. Other groups of STZ-diabetic rats were treated for 8 weeks and percentage glycosylated hemoglobin (HbA1c) was measured. Fasting blood glucose of treated diabetic rats significantly (p<0.05) decreased in a dose-dependent manner when compared with untreated diabetic control rats. After oral glucose load, blood glucose level reached a peak at 60 min. In both non-diabetic and diabetic rats, treatment with the extract significantly (p<0.01) reduced the blood glucose level at 120 and 180 min. The percentage of total hemoglobin glycated in diabetic rats was significantly reduced (p<0.05) after the 8-week treatment with NLet. MDA concentration in the liver, kidney and pancreas of diabetic rats was also dose-dependently reduced by the extract. At 300 and 500 mg kg−1, the reduction was significant (p<0.05) compared with the diabetic control. The effects of NLet were comparable to those observed with glibenclamide. The results of this study suggest that NLet can prevent the complications of diabetes that result from glycation of hemoglobin and lipid peroxidation.
INTRODUCTION
Diabetes mellitus is a metabolic disorder with heterogeneous etiologies. It is characterized by chronic hyperglycemia and disturbances of carbohydrate, fat and protein metabolism resulting from defects in insulin secretion, insulin action, or both (Alberti and Zimmet, 1998). It develops when the pancreas does not produce enough insulin or when the body cannot effectively utilize the insulin it produces. In diabetes, glycosylation of hemoglobin and lipid peroxidation are important processes in the development and progression of complications of diabetes (Latha and Pari, 2004). Glycosylation is a spontaneous non-enzymatic reaction in which glucose binds covalently with hemoglobin to produce glycosylated (or glycated) hemoglobin (HbA1c). Non-enzymatic glycosylation of hemoglobin continues unabated in the presence of chronic hyperglycemia. This
ultimately leads to the accumulation of Advanced Glycation Endproducts (AGEs). The AGEs react with Receptors for Advanced Glycation Endproducts (RAGE) to promote the process that leads to endothelial dysfunction and then to the cardiovascular complications of diabetes (Yan et al., 2007). Glycosylated hemoglobin provides information about long-term blood glucose control because once a hemoglobin molecule is glycated, it remains in the red blood cell for the rest of its lifespan, which is about 120 days (Syed, 2011). Compared to Fasting Blood Glucose (FBG) and the Oral Glucose Tolerance Test (OGTT), HbA1c is a better diagnostic parameter for diabetic complications. Therefore, in 2009, HbA1c was recommended as a diagnostic test for diabetes mellitus with a threshold >6.5% (IEC, 2009).
During glycation and glucose and lipid oxidation, Reactive Oxygen Species (ROS) are generated. This results in an imbalance between free radical and antioxidant levels, which triggers oxidative stress in the cells. Lipid peroxidation is an important biomarker of oxidative stress and contributes to the development of atherosclerosis in diabetes mellitus (Giugliano et al., 1996). Both glycosylation of hemoglobin and lipid peroxidation are therefore important pathological indicators in diabetes mellitus.
Globally, the socio-economic impact of diabetes is enormous. This is especially the case in countries with limited resources. To successfully cope with this challenging situation, there is an urgent need to search for more treatment options that are readily available, safe and cost-effective. Medicinal plants have proved useful in the treatment of diabetes. They provide considerable economic benefit to rural and poor people who may not be able to afford the expensive synthetic drugs. They are also sources of lead molecules for the synthesis of new anti-diabetic drugs. For these reasons, World Health Organization (WHO) recommended and encouraged the use of herbal medicine especially in countries where access to the conventional treatment of diabetes is not adequate (Elujoba et al., 2005).
Newbouldia laevis (P. Beauv) is a medicinal plant that is employed in the management of diabetes across African countries. It is a medium-sized angiosperm which belongs to the Bignoniaceae family. Its common names are 'African Border Tree' and 'Fertility Tree'. In Nigeria, it is known as 'Aduruku' in Hausa, 'Ogirisi' in Igbo and 'Akoko' in Yoruba. It is used by African traditional healers to treat various ailments. In Nigeria, a decoction of the bark is given to children to treat epilepsy and convulsions. The leaves are soaked in ethanol for the treatment of diabetes and sickle cell disease. Different parts of the plant have been reported to possess antimicrobial properties (Kuete et al., 2007; Ogunlana and Ogunlana, 2008). The leaf extract of the plant was also reported to lower blood glucose levels in diabetic rats (Owolabi et al., 2011). However, the effects of the plant on glycosylation of hemoglobin and lipid peroxidation have not been studied. In this study, we investigated the effects of ethanolic extract of the leaves of Newbouldia laevis on glycosylation of hemoglobin and lipid peroxidation in diabetic rats.
Collection of Plant Material
Leaves of Newbouldia laevis were collected from the premises of College of Health Sciences, Ladoke Akintola University of Technology, Mercyland, Osogbo Campus, Nigeria. The plant sample was identified and authenticated by a taxonomist in Forestry Research Institute of Nigeria (FRIN), Ibadan, Nigeria. A voucher specimen was deposited in the herbarium of the institute (voucher specimen no: FHI 107753).
Preparation of Newbouldia Laevis Extract
The leaves were thoroughly washed with distilled water to remove soil and other debris that could contaminate the plant sample. The washed sample was then air-dried under shade in the laboratory for 5 days, and the dry plant sample was pulverized using an electric grinding machine. The resulting powder, weighing 500 g, was extracted with 80% ethanol at 70°C by continuous hot percolation using a Soxhlet apparatus. The extraction was carried out for 24 h and the resulting ethanolic extract was concentrated at 40°C in a rotary evaporator. The solid sample obtained weighed 47.5 g (yield = 9.5%). The crude ethanolic extract (NLet) was kept in an air-tight container and stored in a refrigerator at 4°C until the time of use.
Experimental Animals
Wistar rats of both sexes weighing 180-200 g were obtained from the Animal Holding Unit of the Department of Biochemistry, University of Ilorin, Ilorin, Nigeria. All experimental procedures were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH, 1985) as well as the Ethical Guidelines for the Use of Laboratory Animals in LAUTECH, Ogbomoso, Nigeria. The animals were housed in polypropylene cages inside a well-ventilated room, with a maximum of six animals per cage. The animals were maintained under standard laboratory conditions of temperature (22±2°C), relative humidity (55-65%) and a 12 h light/dark cycle. During the whole experimental period, the animals were fed a standard balanced commercial pellet diet (Ladokun Feeds Ltd., Ibadan, Nigeria).
Induction of Diabetes in Rats
Experimental diabetes was induced in rats, which had fasted for 12 h, by a single intravenous injection through the tail vein of a freshly prepared solution of streptozotocin (STZ) (60 mg kg−1 b.wt.) dissolved in 0.1 M cold citrate buffer, pH 4.5 (Chen et al., 2005). The rats were allowed to drink 5% glucose solution overnight to overcome drug-induced hypoglycemia. Estimation of Fasting Blood Glucose (FBG) was done 72 h after injection of STZ to confirm induction of diabetes and then on the 7th day to investigate the stability of the diabetic condition. Fasting blood glucose was estimated with a One Touch® glucometer (LifeScan, Inc., Milpitas, California, USA). Blood samples for the FBG determination were obtained from the tail vein of the rats, and those with a blood glucose value ≥200 mg dl−1 were selected for the study.
Experimental Design
Rats were divided into a group of non-diabetic rats and five groups of STZ-diabetic rats. Each of the six groups consisted of six rats. Group I = non-diabetic rats (control); Group II = STZ-diabetic rats; Group III = diabetic rats treated with NLet (150 mg kg−1); Group IV = diabetic rats treated with NLet (300 mg kg−1); Group V = diabetic rats treated with NLet (500 mg kg−1); Group VI = diabetic rats treated with glibenclamide (5 mg kg−1). The fasting blood glucose was measured on day 0 at 8.00 am. All the drugs were administered orally twice a day, at 8.00 am and 8.00 pm, for 28 days using a sterile syringe fitted with a sterile cannula. Rats in groups I and II were treated orally with distilled water for the four weeks. Blood glucose was measured on the 15th and 29th days after the animals had fasted for 12 h.
Estimation of Fasting Blood Glucose
All animals were fasted overnight and blood was collected from the tail vein on Day 0 and Day 15. On Day 29, the rats were euthanized under chloroform vapor; the jugular vein was exposed and cut with a sterile scalpel blade and the rats were bled into specimen bottles. The blood samples were transferred to sterilized centrifuge tubes, allowed to clot at room temperature, and then centrifuged for 10 min at 1,500 rpm. The serum obtained was used for blood glucose analysis. Blood glucose was estimated using a glucose assay kit (Randox Laboratories Ltd., UK) based on the glucose oxidase/peroxidase (GOD/POD) method (Trinder, 1969).
Oral glucose Tolerance Test (OGTT) in Normal and Diabetic Rats
Prior to the OGTT, all the rats were fasted for 12 h. Distilled water (control), the reference drug glibenclamide (5 mg kg−1 b.wt.), or one of the three doses of the ethanolic extract (150, 300, or 500 mg kg−1 b.wt.) was then orally administered to the respective groups of rats (n = 6). Thirty minutes later, glucose (3 g kg−1) was orally administered to each rat with a feeding syringe (Al-Awadi et al., 1985). Blood samples were collected from the tail vein by tail milking at −30 min (just before the administration of extract or glibenclamide), 0 min (just before the oral administration of glucose), and 60, 120 and 180 min after the glucose load. Blood glucose level was determined with a One-Touch® glucometer. The OGTT was performed in STZ-diabetic and normal rats using the same procedure.
Estimation of Glycosylated Hemoglobin
Rats were divided into six groups and treated as described above, but for 8 weeks. Glycosylated hemoglobin was determined by an assay kit (Excel Diagnostics Pvt. Ltd., India) based on the ion-exchange method (Nathan et al., 1984). Briefly, whole blood was mixed with a lysing agent for the preparation of hemolysate; elimination of Schiff's base was achieved during hemolysis. The hemolysed preparation was mixed continuously for 5 min with a weak-binding cation-exchange resin. During this time, non-glycosylated hemoglobin binds to the resin, leaving glycosylated hemoglobin (HbA1c) free in the supernatant. After the mixing period, a filter was used to separate the supernatant containing the glycosylated hemoglobin from the resin. The glycosylated hemoglobin was determined by measuring the absorbance of the glycosylated hemoglobin fraction and the total hemoglobin fraction at 415 nm. The ratio of the two absorbances gives the percentage of glycosylated hemoglobin.
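The final ratio step can be illustrated with a short sketch. The Python function below is a hypothetical illustration of that arithmetic, not the kit's documented procedure; in particular, the calibration factor (default 1.0) is an assumption, since commercial ion-exchange kits apply a lot-specific standardization that the text does not describe.

```python
# Hypothetical illustration of the absorbance-ratio step described above.
# Real kits apply a lot-specific calibration/standard factor; the default
# of 1.0 here is an assumption, not the kit's documented value.
def percent_glycosylated_hb(abs_hba1c: float, abs_total_hb: float,
                            calibration_factor: float = 1.0) -> float:
    if abs_total_hb <= 0:
        raise ValueError("total hemoglobin absorbance must be positive")
    return (abs_hba1c / abs_total_hb) * 100.0 * calibration_factor

# Example: A415(HbA1c fraction) = 0.065, A415(total Hb) = 1.00 -> ~6.5%
print(round(percent_glycosylated_hb(0.065, 1.00), 2))
```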
Preparation of Tissue Homogenates
After the 28-day treatment with the plant extract (NLet), the rats were sacrificed and blood samples were collected into heparinized tubes. Segments of the liver, kidney and pancreas tissues were excised separately from rats in all the experimental groups. The tissues were washed with phosphate-buffered saline (pH 7.4) containing 0.16 mg mL−1 of heparin to remove any red blood cells (erythrocytes) and clots (Prasad et al., 1992). The tissues were then homogenized with an ultrasonic homogenizer in cold phosphate buffer, pH 7.0, with ethylenediaminetetraacetic acid (EDTA), for malondialdehyde (MDA) measurement. The tissue homogenates obtained were centrifuged at 3,000 × g for 10 min at 4°C and the supernatant was used for the assay.
Estimation of MDA Concentration
Thiobarbituric Acid Reactive Substances (TBARS), the product of the reaction between MDA and thiobarbituric acid, were estimated by a modified method of Ohkawa et al. (1979). A volume of 250 µL of liver, kidney or pancreas homogenate was mixed with 100 µL of 8.1% sodium dodecyl sulfate (SDS), after which 750 µL of 20% acetic acid and 750 µL of a 0.8% aqueous solution of thiobarbituric acid (TBA) were added. The volume was made up to 4 mL with distilled water, mixed thoroughly and incubated in boiling water for 45 min. After cooling, 4 mL of n-butanol was added to each tube and the contents were mixed thoroughly. The mixture was thereafter centrifuged at 3,000 rpm for 10 min. The absorbance of the clear, upper (n-butanol) layer was measured with a Shimadzu (Japan) UV-1601 spectrophotometer at 532 nm. Protein concentration was determined by the method of Lowry et al. (1951).
The MDA concentration was calculated using the extinction coefficient of the MDA-TBA complex (1.56 × 10^5 M−1 cm−1) and was expressed as nmol MDA/mg protein.
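As an illustration of this calculation, the sketch below converts an A532 reading into nmol MDA per mg protein via the Beer-Lambert law. It is a minimal sketch under stated assumptions: the 1 cm cuvette path length and the assumption that the MDA-TBA complex partitions entirely into the 4 mL n-butanol layer are illustrative choices, not values given in the text.

```python
# Minimal sketch (not from the paper's protocol) converting an A532 reading
# into nmol MDA per mg protein via the Beer-Lambert law.
EPSILON_M_CM = 1.56e5   # extinction coefficient of MDA-TBA complex (M^-1 cm^-1)
PATH_CM = 1.0           # assumed cuvette path length (not stated in the text)
BUTANOL_ML = 4.0        # n-butanol layer read in the spectrophotometer
ALIQUOT_ML = 0.25       # 250 uL of tissue homogenate per reaction

def mda_nmol_per_mg_protein(a532: float, protein_mg_per_ml: float) -> float:
    molar = a532 / (EPSILON_M_CM * PATH_CM)      # mol/L of MDA-TBA complex
    nmol_per_ml = molar * 1e6                    # 1 mol/L == 1e6 nmol/mL
    total_nmol = nmol_per_ml * BUTANOL_ML        # nmol in the butanol layer
    protein_mg = protein_mg_per_ml * ALIQUOT_ML  # protein in the assayed aliquot
    return total_nmol / protein_mg

# Example: A532 = 0.12, protein 8 mg/mL -> ~1.54 nmol MDA/mg protein
print(round(mda_nmol_per_mg_protein(0.12, 8.0), 2))
```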
Statistical Analysis
Data obtained from the experiments were expressed as mean ± Standard Error of the Mean (SEM). For statistical analysis, data were subjected to one-way Analysis of Variance (ANOVA) followed by Student's t-test. A level of p<0.05 was considered significant. GraphPad Prism version 5.0 for Windows was used for these statistical analyses (GraphPad Software, San Diego, California, USA).
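For readers who prefer a scriptable equivalent, the following sketch reproduces the described pipeline (one-way ANOVA followed by Student's t-test at p < 0.05) using SciPy rather than GraphPad Prism; the group values are placeholders, not data from this study.

```python
# Minimal sketch of the analysis pipeline described above, with placeholder
# numbers (not study data): one-way ANOVA, then a pairwise Student's t-test.
from scipy import stats

control = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]        # e.g., MDA, nmol/mg protein
diabetic = [7.9, 8.3, 7.5, 8.1, 8.6, 7.8]
treated_500 = [5.0, 5.4, 4.8, 5.2, 5.1, 4.9]

f_stat, p_anova = stats.f_oneway(control, diabetic, treated_500)
if p_anova < 0.05:                               # overall group effect
    t_stat, p_pair = stats.ttest_ind(diabetic, treated_500)
    print(f"ANOVA p={p_anova:.4f}; diabetic vs 500 mg/kg: p={p_pair:.4f}")
```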
Effect of NLet on Fasting Blood Glucose
The hypoglycemic effect of repeated oral administration of N. laevis ethanolic extract in diabetic rats is shown in Table 1.
Before the treatment schedule, fasting blood glucose was within the normal range in all the animals. Following induction of diabetes with STZ, there was a sharp increase in the average fasting blood glucose value. After four weeks of treatment with NLet or glibenclamide, the fasting blood glucose of streptozotocin-induced diabetic rats significantly (p<0.05) decreased in a dose-dependent manner.
Effect of NLet on Glycosylation of Hemoglobin
The changes in the percentage of the total hemoglobin glycated in both test and control groups are presented in Fig. 1. Following the 8-week treatment of diabetic rats with the ethanolic extract of the leaves of Newbouldia laevis, the rate of glycosylation of hemoglobin was significantly (p<0.05) reduced in a dose-dependent manner.
Effect of NLet on Oral Glucose Tolerance Test
After oral glucose load, the blood glucose level of untreated non-diabetic rats (normal control) reached a peak at 60 min and gradually decreased to pre-glucose load level. NLet (500 mg kg −1 ) caused a significant attenuation in blood glucose level at 120 min (p<0.05) and 180 min (p<0.01) compared to the control (Fig. 2). In diabetic rats, NLet also caused significant decrease in blood glucose at 120 min (p<0.05) and 180 min (p<0.01) compared to the diabetic control. In both normal and diabetic rats, glibenclamide (5 mg kg −1 ) produced significant reduction in blood glucose level at 60, 120 and 180 min (p<0.01) compared with diabetic control (Fig. 3).
Effect of NLet on Lipid Peroxidation
In the liver, kidney and pancreas, the MDA concentration in the diabetic control rats was significantly higher (p<0.05) than in the normal control and the treated groups. It was observed that lipid peroxidation was lower in the liver than in the pancreas and kidney. The MDA concentration of the diabetic animals was dose-dependently reduced by the extract. At 300 and 500 mg kg−1, the reduction was significant (p<0.05) and was comparable to that observed with glibenclamide (Fig. 4).
DISCUSSION
Globally, diabetes has become one of the leading causes of death. It is a disease that destroys the very engine of life. Therefore, more treatment options are needed to reduce the socio-economic impact and prevent the complications of diabetes. In poorly controlled diabetes, there is an upsurge in the glycosylation of some proteins, including hemoglobin. With time, glycosylated hemoglobin develops reduced affinity for oxygen, which contributes to the long-term complications of diabetes (Latha and Pari, 2004). Glycosylated hemoglobin is an excellent marker of glycemic control because it is formed irreversibly and is stable over the life span of the red blood cells (Daisy and Rajathi, 2009).
In this study, experimental diabetes was induced in rats by Streptozotocin (STZ). STZ-induced diabetes is a well-established animal model of diabetes mellitus. In STZ-induced animal models of diabetes, insulin is markedly depleted but not absent (Frode and Medeiros, 2008). Glycosylated hemoglobin level was elevated in diabetic rats. The significant decrease in the level of glycosylated hemoglobin in STZ-diabetic rats following treatment with NLet is an indication that the overall blood glucose level was controlled. The results of fasting blood glucose and oral glucose tolerance tests which agree with the report of Owolabi et al. (2011) also confirm the antidiabetic potentials of the extract. Some other medicinal plants have also been reported to have the ability to reduce glycosylated hemoglobin levels in diabetic rats (Venkateswaran and Pari, 2002).
Increased MDA concentration is an important indicator of lipid peroxidation (Ambali et al., 2011). The MDA concentration in the diabetic control rats was significantly higher (p<0.05) than in the normal control and the treated groups, and lipid peroxidation was lower in the liver than in the pancreas and kidney. This agrees with the observations of Oberley (1988) and Tatsuki et al. (1997) that the liver has a higher capacity to cope with oxidative stress than other organs.
The role of biological membranes is very important for the survival of the cell. They serve as selectively permeable barriers and are involved in cellular transport processes. Through the process of lipid peroxidation, the homeostasis and function of the cell membrane are impaired, leading to cellular dysfunction and damage (Dargel, 1992). Reactive Oxygen Species (ROS), such as the hydroxyl radical and the protonated form of the superoxide anion, commonly initiate the process of autocatalytic lipid peroxidation. This eventually results in the conversion of unsaturated lipids into polar lipid hydroperoxides. The net effects are increased membrane fluidity, efflux of cytosolic solutes and loss of membrane integrity (Avery, 2011). NLet significantly reduced lipid peroxidation in a dose-dependent manner. The ability of NLet to suppress lipid peroxidation and glycation of hemoglobin in diabetic rats indicates that it can prevent cell injury and complications such as atherosclerosis and kidney damage.
CONCLUSION
The results of this study indicate that the ethanolic extract of the leaves of Newbouldia laevis possesses antidiabetic properties. It can prevent the complications of diabetes that result from glycation of hemoglobin and lipid peroxidation. Further studies should be carried out on this plant in order to understand its mechanism of action.
"year": 2013,
"sha1": "9fce5452bffd19ac6add13058620cf0758e47085",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajptsp.2013.179.186",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "340d203c1ce3190d0d5c1a72e6981bdfbef265a2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A qualitative study of clinical narrative competence of medical personnel
Medicine practiced with narrative competence is called narrative medicine, which has been proposed and used as a model of humane and effective medical practice. Despite the in-depth discussions of narrative medicine, the study of narrative competence in the literature is limited; therefore, this study aims to explore the dimensions and connotations of the clinical narrative competence of medical personnel. This qualitative study used in-depth interviews to collect participants' experiences and perspectives regarding narrative competence, followed by thematic analysis of the transcripts. Through purposive sampling, this study recruited 15 participants (nine males and six females, interviewed in 2018-2019) who were engaged in narrative medicine or medical humanities education at different medical schools and hospitals across Taiwan. The authors performed manual thematic analysis to identify the themes and concepts of narrative competence through a six-step theme generation process. Four major themes of narrative competence were generalized and conceptualized: narrative horizon, narrative construction (including narrative listening, narrative understanding, narrative thinking, and narrative representation), medical relationship (including empathy, communication, affiliation, and inter-subjectivity), and narrative medical care (including responsive care, balanced act, and medical reflection). These four themes were further integrated into a conceptual framework and presented in a diagram. Cultivating narrative competence in medical education can complement the traditional biomedical orientation. Regardless of their treatment orientation, narrative medicine-informed health practitioners may take advantage of their multi-dimensional narrative competence, as presented in this article, to enhance their awareness and preparation in different areas of competence in medical services. In addition, the results of this study can be used as a framework for the development of behavioral indicators of narrative competence, which can be taken as the basis for medical education curriculum design.
approach can be useful for not only mental health problems, but all medical encounters [7].
Studies of primary care generally show a strong correlation between quality relationships and patient satisfaction. In addition to medical skills, other physician qualities, such as listening, interpersonal skills, empathy, and commitment, are highly valued by patients [9]. As Roberts stated, the trend (so-called post-Balint) in medical services moving toward more interpretive and empathic medical consultations is not just a shift to more psychosocial models; it is also a shift in interactional models. In accordance with Balint's work, Charon pointed out that narrative medicine provides the means to understand the personal connections between patient and physician, while simultaneously offering physicians the means to improve their effectiveness in working with patients, themselves, and others. Practicing with narrative competence enables physicians to form empathic engagements with patients and establish therapeutic alliances [10]. On the other hand, narrative medicine or narrative-based medicine enables patients to unburden themselves, and attentive listening is intrinsically therapeutic [6], while "the narrative provides meaning, context, and perspective for the patient's predicament" [11].
Western medicine in the post-1990 period presented an important trend of reflection. It was thought that traditional medical treatment represented a form of reductionism: when collecting and understanding a patient's complaint, physicians used a logico-scientific method to analyze, classify, and develop a reasonable diagnosis and treatment strategy. However, in order to provide holistic medical care, it is necessary to take into account the psychosocial aspects in addition to the disease: the family and sociocultural context, the patient's spiritual beliefs, and the other impacts of medical care decisions. As the role of digital technology in clinical medicine has become more important, less attention has been paid to the personal narrative of the patient's subjective experience [12]. Therefore, medical personnel need to have appropriate "narrative competence" as well as medical knowledge. In addition to making correct diagnoses, medical personnel need to understand the relevant background stories of the parties and be able to describe and explain their psychosocial intentions in order to develop a medical plan from the perspective of the patient. This trend of reflection emphasizes the importance of narrative thinking and has had a resounding effect in the field of medical education.
Charon suggested that physicians read literature to promote narrative thinking, and engage in reflective writing regarding the stories of patients, themselves, colleagues, and society, in order to enhance respectful, empathic, and nurturing medical care in narrative medicine and medical education [1]. When learners wrote and reflected on the events that had impact on them during their provision of medical care, they may describe and elucidate something not taught in school, find their professional identity from a good role model, and experience the art of empathy through patient-professional encounters [13]. The modes of reflection include taking a break from a difficult practice or action, reviewing this experience, and reassessing to find problem solving methods based on the reality and multiple possibilities [14].
The human experience has a storied nature [15]. Humans organize life experiences through narratives [16]; through self-defining life stories, they understand their lives, and provide a kind of identity, meaning, and coherence of individual lives by the self-narrative methods of reconstructing the past and expecting the future [17]. That is to say, the emphasis on the narrative nature of individual psychology and the discussion of human experience through narratives provide an inquiry orientation that is different from traditional positivism in the exploration of human life experience [18][19][20].
Narratives can be seen everywhere in medicine. For example, patients in the clinic and in the wards tell the doctor about their conditions; the physician listens to the patients' narrations and further integrates the information heard into the medical knowledge to explain the cause and treatment to the patient. Outside the clinic and the ward, physicians communicate frequently with colleagues in different professions such as dietitians, social workers, and especially nurses. In addition to the narrative behavior of interdisciplinary cooperation and mutual trust, these communications rely on the re-narration of the patient's disease story to provide a whole-person medical care that meets the patient's needs. More importantly, medical personnel with narrative competence can not only understand their emotional reactions to patients but also actively enter the patient's story, listen to and communicate with each other's concepts, and construct and implement mutually understandable health care behaviors and cultures.
Shared understanding is created through conversations, meaning that doctors and health professionals can allow patients to adequately express their stories in their own words, explore connections, differences, new choices, and possibilities, and deliberately probe and guide the dialogue to promote understanding without being controlling, interfering, or indifferent [6,7,21]. Exploratory questions, such as "What does this mean for you?" and questions and prompts that invite change, such as "What needs to happen for the situation to change?" can be used to facilitate communication and treatment. By being curious, doctors and other medical personnel, can listen to patients' narratives [2] and pay attention to the contexts and complexities of the medical encounters, as well as the circularity nature of the interactions or influences involved, in order to provide nonjudgmental and genuine care that is sensitive to patients' needs and displays a readiness for change [6,7].
What kind of narrative competence is demonstrated in experience of such a narrative nature? This can be seen from the two perspectives of narrative knowing and narrative thinking. Generally speaking, formal science is more about categorical knowledge: it defines categories by operational definitions and understands the relationships between categories through attribute relationships. Narrative knowing, in contrast, is a form of constructing knowledge based on the schematic format. According to Mandler [22], schematically organized knowledge may constitute: 1) the spatial schema, which is the schema of the "scene", understood from the part-whole spatial relationship, in which a part is not an independent element but a part of the whole space; and 2) temporal understanding, which is the schema of the "plot" and the understanding adopted by many story narratives, such as biographies. Based on this perspective, narrative knowing means that concepts or ideas are organized into systematic relationships through a whole or a theme. From the perspective of narrative thinking, when people face a stimulus, they do not directly respond to it. Instead, they absorb and compare the stimulus with their existing psychological patterns and engage in various cognitive activities to form meaningful messages. This is an assimilation process. When people face textual stimuli, these are assimilated into the internal representation of the narrative grammar that has been acquired [19]. Moreover, narrative thinking involves how individuals understand the actions of others and how individuals are related to others, because the weaving of a story is a collaborative activity, which explains why narratives produce mutual understanding [23]. Rita Charon, a pioneer of narrative medicine, has long been committed to the promotion of narrative medicine and the training of its practical work. Charon believes that narrative medicine is a narrative science with a theoretical basis that includes narrative theory, biographical theory, phenomenology, theory of mind, trauma research, and aesthetics [24]. Inspired by the fields of literature and linguistics, she believes that narrative competence is the ability to acknowledge, absorb, and interpret the stories of human illness and to be moved to action by them, and that this competence can be applied to the clinical work of medical care [1,24]. However, in different literature, narrative competence has various connotations; for example, practitioners with narrative competence show empathy, reflection, professionalism, trustworthiness, and other characteristics in clinical circumstances [1], or narrative competence is described as a combination of textual skills, creative skills, and affective skills [2]. Textual skills involve the identification of story structure, the adoption of multiple perspectives, and the understanding of metaphors; creative skills involve imagination, curiosity, a variety of interpretations, or the creation of multiple endings; affective skills refer to tolerating the uncertainty of the story or the emotions involved in entering the story. In practice, narrative medicine emphasizes three elements, namely attention, representation, and affiliation [25]. Attention means not judging and criticizing, being able to listen to patients with an open mind, and being able to receive messages from patients and their families. Representation is the further reflection on and integration of what is heard, and then the reorganization and interpretation of the message by means of words or images to summon testimony and hope.
Affiliation is a combination of attention and representation, which can form close relationships, cross the gap between doctors and patients, and connect with patients through narratives to create links and promote medical relationships. Charon's view of narrative medicine is a perspective closer to literature, aesthetics, and philosophy. However, how to turn these connotations into specific measurable narrative competence indicators needs to be further explored.
This paper suggests that the evaluation of the effectiveness of narrative medicine education and training should be based not only on changes in the attitudes or perceptions of learners/participants but also on the display and enhancement of narrative competence. Therefore, the main purpose of this study was to explore the conceptual connotations and items of the narrative competence of medical personnel from the perspective of narrative medicine, in order to provide a reference for the improvement of the quality of narrative medicine education in the future.
Method
This qualitative study used an in-depth interview method to collect the experiences and perspectives on narrative competence of Taiwanese medical personnel engaged in narrative medicine (including medical humanities), followed by thematic analysis of the transcripts. Purposive sampling was adopted for the interviewees. There were 15 research participants, nine males and six females. Of the participants, nine were doctors and nursing staff responsible for medical education, and six were general or humanities teachers in medical education (see Table 1).
The interview protocol (interview guide) used in this study was developed for this research (see Additional file 1 for the online version). The authors initially drafted an interview protocol based on the research purpose and the literature review in the study plan. After obtaining approval from the institutional review board of the Buddhist Tzu Chi General Hospital (IRB106-141-B), we conducted a pilot interview with a medical teacher and then revised the interview protocol according to question-answer clarification and feedback.
The interviews were conducted by the authors of this study. Each interview began with an explanation of the purpose of the study and an invitation for the respondent to sign the interviewee's consent form, followed by a semistructured in-depth interview. The order of the questions was not fixed, as long as all questions from the interview outline were covered (the outline is part of the interview protocol; see Additional file 1).

Before data analysis, the interview recordings were transcribed into text, all identities were anonymized, and any unclear parts were repeatedly listened to and clarified. Thematic analysis was then conducted manually by four coders (the three authors and an external professional colleague), who independently reviewed the materials and jointly conducted a six-step theme generation process [26]. In Phase 1, repeated reading was conducted to gain an overall understanding of the text. In Phase 2, the initial codes were generated from the entire data set for each individual interviewee. In Phase 3, the initial codes were collated into themes, and the main overarching themes and subthemes were identified. In Phase 4, the validity of the initial themes and subthemes in relation to the data set was reviewed. In Phase 5, the themes and subthemes were further defined, refined, and named. In Phase 6, vivid examples were chosen for each theme, and the relations between themes were conceptualized into a diagram. The credibility of this study was established through triangulation [27,28]. Specifically, this study recruited participants from a wide range of educational and clinical backgrounds, including medical humanities and different medical specialties, to achieve "source triangulation". In addition, to establish "analyst triangulation", which requires external reviewers or multiple analysts, our team (the three authors and a professional colleague who served as an external co-analyst) worked together to analyze the transcripts until consensus was reached; this analysis process helped to facilitate discussion and clarify possible blind spots.
Results
The main results of this study are shown in Table 2. The conceptual connotations of narrative competence included four main themes and 12 subthemes.
Narrative horizon
The narrative horizon/perspective refers to the narrative medicine views and beliefs held by medical personnel. The narrative perspective is patient-centered and places emphasis on individualized interventions and patients' subjective experiences: medical intervention is not just the elimination or relief of symptoms but the further pursuit of peace and wellness, that is, caring for the patient's physical and mental care needs as well as the personal and family adjustments related to such medical care. As one interviewee described patient-centered care: "When the patient is taken as the center, there must be an illness history. The illness history is a process of the patient's illness. This is the beginning of the narrative. You have to understand his entire process.
(YW-007) The illness history includes the parts of his stress, his family, his profession. (YW-009) Narrative medicine is also training us to understand a patient's more complete story, not just some situations, such as when he had a fever, when he had a headache, what was the situation when he had a fever or why did that happen at that time?" (YW-019).
From the perspective of narrative medical care, medical professionals do not limit medical treatment to just the facts or events that have occurred; they see patients and peers as teachers and examine the course of treatment, the medical relationship, and the patients' stories for moving emotions and inspiration. Interviewee SJ mentioned a sense of authenticity: "The authenticity of our lives is a kind of sense of reality, not truth. Truth is cruel, unacceptable, and there is no way to change it, [ …] if you go to see a person who has been injured from a fall at a very young age, and has paralysis of half the body, what should he do? How would he spend his life? It is hopeless for many people, but, if we have narrative medicine, we have many ways to help him change his..." (SJ-025). Finally, medical care does not end at cure but extends to healing, including empathy, support, and responsive care; moreover, it assists clients in perceiving, understanding, accepting, discovering meaning, and enhancing self-motivation. Interviewee SJ mentioned a case in which even a patient with end-stage cancer could experience peace of mind: "There is a patient who also has the kind of hydrovarium ovarian cancer. The hydrovarium is already full, [ …] Her eyes are very determined, and she uses a brush to write, and then, draws the Guanyin [a religious figure] image. She painted it very well, [ …] She can still move like this when she is dying, [ …] She said that she is breathless, but can remain calm, so spirituality is still very important! Isn't it? That is the ultimate goal of our narrative medicine, the hope to be able to achieve spiritual peace." (SJ-023).
Narrative construction
Narrative construction refers to medical care, as based on the spirit of the narratives. The concept of this theme is rich in content and can be divided into four subthemes, namely:
Narrative listening
The narrative spirit is constructed with emphasis on "the interest in the patient and attention to the person", meaning being able to listen for the key words or implications in the patient's words while simultaneously attending to the patient's nonverbal narrative and being adept at observation. As one interviewee mentioned, "There are some doctors [ …] If he had not cut off the interview so quickly, if he had given a little more time for the patient to talk, the narrative would have appeared sooner." (MH-006) It is important to have humanistic sensitivity, that is, to be able to perceive the patient's feelings and thoughts, notice the patient's key narratives (including differential narratives and special narratives), discover the relevance of the disease to the individual, learn about the localized life experience related to the disease, and remain sensitive under unreasonable circumstances. For example, interviewee HR believes that whether medical staff "have interest in patients or not" (HR-037) is very important and that narrative competence is, at bottom, a skill. Another interviewee described this sensitivity as trainable: "You have to be sensitive first. Sensitivity is to read, listen, and feel, and there should be training for it, so that you can see more and deeper than others…, and then think more and in more detail, and listen more… We need to be trained for this kind of sensitivity, actually this is the sensitivity of the humanities… The so-called sensitivity is to be able to identify, like Rita Charon, it is to acknowledge, and to acknowledge means that you want to pick up those things, those implications." (CH-019).
Narrative listening is like having a literary soul: "You can absorb his story, and then you can understand, and then you can be touched, and then you can actually pay attention! Did the first patient come in and feel that he caught your attention, that is…did he think he was noticed by you! That, that is more than the first… the first moment. He is also judging whether you, as a doctor is… caring" (HR-053).
Narrative understanding
Narrative understanding focuses on the patient's perception and interpretation of the disease, that is, "the patient's model of interpreting his disease, how does he think he is sick? What kind of disease does he think he has? Then how long before he feels he will be better? [ …] What does he think of the disease? Or how does he feel?" (LJ-029); and "What kind of network does the disease occur in, and what kind of patient world does this network construct for the patient? …" (FZ-007). Thus, understanding the key events of the patient's life history and their relevance to the disease, and especially intergenerational understanding, is important.
"It may be difficult for young nursing staff to understand the course of the past life of veterans. There were so many critical events in their lives. In fact, the critical events are when he was fighting or when he was away from home. The point is, in fact, when we are taking care of him, at the end of his life, when he is dying, or in the process of his hospitalization, and those moments will actually cause him to be in a certain state of mind. So, there are a lot of critical events for patients that are actually very important." (DW-008).
Rather than focusing on physiological conditions, narrative understanding focuses on the psychological, social, and cultural context of the disease, interviewee MH said: "You need to use biophysical orientation to connect to the psychosocial, [ …] Suppose the patient just now, the patient who got the headache, [ …] I asked how long this headache has been hurting, and then asked, do you think there is a reason that caused your headache today [ …] He might say how much pressure the company has given him, [ …] This is what we are coding, when it comes to this, it means when you see he has a headache, you should proceed like this, then generally, you will get such psychosocial information." (MH-020).
Narrative thinking
Narrative thinking is not meant to arrange events in chronological order, but to place the disease/hospitalization in the context of life to construct the temporality of the disease.
"A person's life is lived in decades. In fact, his hospitalization took place only at a certain point of time, then how do you connect this point to his past and his future. His hospitalization, our hospitalization preparation plan, his life in the future, and his life in the past, how do you connect all these in series…" (DW-086).
In addition to the biomedical orientation's pursuit of truth and facts, narrative thinking focuses on the authenticity of patients and diseases, which refers to the touching stories and revelations in medical care, including the value of life, the beauty of human nature, selfless dedication, resistance to disease, family emotions, loss, rebirth, and so on. As interviewee FZ stated, "You see this patient as glorious, or the kind of attitude when a person is facing life ... maybe it is also a kind of wrestling match to some extent, there are some people who wrote articles like the Old Man and the Sea." (FZ-060).
The traditional biomedical orientation focuses on the provision of information or laboratory tests, while the narrative medical orientation emphasizes creating meaning, including concatenation and interpretation. Narrative medicine also emphasizes narrative inquiry, including open-ended questions and exploratory findings, beyond routine questions.
Narrative representation
Narrative representation focuses on phenomenological description, which can specifically describe the details, while creating a narrative reproduction that is close to the phenomenon. Interviewee SJ used an example to illustrate: "Just like we see diabetic feet, we dare not describe them, that is, narrative medicine emphasizes that you are going to face that wound, and then go on to describe the appearance, not just to name it indirectly as a bad bacterium, like this is one, this is just some kind of bacterium. It is not like this. Instead, you should tell what a wound looks like, describe it in detail, and then, let everyone discuss it..." (SJ-017).
Narrative representation comprises interpretation and decoding by medical personnel, as well as making good use of metaphors and the imagination to connect the disease with the life aspect. Imagination is not illusion; instead, it is a method " … that can be used to make a more reasonable statement of this entire event. It doesn't necessarily fully conform to the facts, but they can collide, and it can really test the imagination.
[ …] On the other hand, a narrative can actually give him a little more guidance, to view this matter in either a plane or a 3D stereoscopic way. " (XY-055).
Interviewee SJ further illustrated the competence of representation: "For representation, some people will think that it seems to be just to say it! Actually, there are a lot of skills in it. Those skills include your life experience. If you can connect a lot of things happening to the patient with your life experience, and then, use another more specific thing to explain this thing that is happening now, it often results in a kind of great, great shock..." (SJ-045).
SJ went on to explain the use of the metaphor.
Medical relationship
The medical relationship based on narrative medicine includes three subthemes:

Empathy

A narrative-oriented medical relationship is based on empathic listening. Medical professionals must put themselves in the position of the patient, treat patients as family, be able to sense another person's feelings through their own, have the rapport to be emotionally moved, and be able to establish a link with the patient. However, empathy and sympathy are different. "Sometimes we all mistakenly think that it is sympathy [ …] In fact, most patients do not want sympathy, [ …] Why should you sympathize with me, I have lung cancer, not you … You come to see me now, why is it that you want to come and see me, some of them will be negative..." (JD-126).
It is important to be able to judge the other person's feelings by one's own: "Every one of us is a book. You can understand the logic of that book only with the narrative, and with the narrative to understand the logic of that book, you will find that you have the emotional link of judging the other person's feelings by one's own, and you can understand everyone's unique logical reasoning... "(DJ-138).
Communication
The core of the medical relationship is communication, and medical personnel should be able to facilitate patient narratives and have the skills to simultaneously empower them to respond and provide feedback appropriately to patients. Interviewee MH mentioned an elderly case: "If you think about him in his 80s, and he comes alone for a hospital visit [ …] If this is a first-time visit, this situation is really rare, so you are a little sensitive, and say, "Oh, you came here alone today", then some people may cry soon after hearing this, while some people may talk […] "the children do not have the time". Then you will say, "Oh, you are great, you can take care of yourself". Look, if you say one or two sentences at the beginning like this, that relationship will become very different..." (MH-016).
Medical relationships include communication between doctors and patients, as well as across medical professions.
The exchange of life experiences and dialogue between doctors and patients is one of the steps in the promotion of good medical care.
"When the next treatment step is made, his views will be considered. Then, the establishment of such communication relationship between patient and the doctor must be more than saying how many times he takes the medicine per day or asking if he is smoking. Then, he will tell the truth, because if he continues to smoke, you have to understand what his environment is, and when it started. He may be smoking because everyone in his work environment is smoking..." (CH-008).
Affiliation
Affiliation in the medical relationship means that doctors and patients can be mutually empowered, establish a good, congenial relationship, share the burden of the disease, and have the capacity for affection, in order to find acceptance, gratitude, and so on. Interviewee SJ further explained the meaning of affiliation as follows: "Affiliation is to achieve a kind of tacit understanding with the family, that is, understand the experience by sitting down together and sharing together [ …] Two people bearing a burden is better than one, because if he accepts you, he will be more willing to allow you to share his burden, and his burden will be lighter, just like a best friend [ …] When you reach such affiliation with the patient, that thing can be put aside, you don't have to worry about what kind of thing it is, so that allows the patient to reach a peace, at least temporarily..." (SJ-027).
Inter-subjectivity
Compared with the construction of expert subjects centered on medical personnel, the construction of narratives emphasizes interaction with patients, promotes inter-subjectivity, and constructs meanings from impressions into contexts.
"Subjectivity is constructed with each other. If there is a patient constructing with a doctor throughout this entire process, I think the career of this doctor will be pretty good. (XY-014) Narrative medicine can make the doctor himself a reader. It is like reading to the patient, [ …] like when we describe the story, it is the impression points, and these impression points are connected in series to become a narrative, then, how does he interpret this hospital as treating his patients, how does he interpret how his entire life processes are treated, including interns and nursing staff, [ …] he is constructing, so he actually treats himself as a text, to a certain extent, if his text is in the direction of cooperating with you for construction, I think this is a positive direction." (XY-064).
Ethical thinking and action are important parts of narrative subjectivity, including being able to understand others and transcend differences, being able to understand and reflect on values, making trade-offs and balances between different values, and, ultimately, engaging in cross-disciplinary cooperation with different professions. It is important "not to let the subjectivity of different objects be erased by you." (HR-012).
Narrative medical care
Narrative-centered medical care has three core subthemes:
Responsive care
The medical treatment of narrative medicine is individualized, patient-centered care, based on working with patients to build a trusting medical relationship and developing care that responds to patient needs; this kind of care differs from the reductionist attitude: "The reductionist's attitude is to be able to see how many patients are seen during the morning. This is how reductionism is related to this. But, to teach students, what we want is individualism, we want to make their patients different from ordinary patients. The most important thing is if you can see what the patient's need is." (CW-073).
Medical personnel must actively listen to the issues of concern to patients, invite patients to join the medical team and become team members, and give patients the opportunity to express what they want and the medical model they are looking for. Most importantly, patients should have the opportunity to participate in shared medical decision-making, so that their real needs are considered.
Balanced act
Narrative-oriented care not only focuses on physiological issues, such as characterization of the disease, new diagnostic techniques, empirical treatment outcomes, side effects, and complications; it also considers psychological and emotional symptoms, such as patient anxiety and worry, fear and the expectations of medical care, behaviors adapted to the disease and treatment, and related issues of family, cultural systems, and social contexts. Interviewee LJ told a related case story that illustrates the importance of seeing the patient's feelings during clinical care: "The patient is conscious of his appearance; though he wears patient clothes, he puts on different scarves every day [ …] Why would he resist the bladder training [ …] the urine bag is so obvious, will the urine bag smell bad? Will there be obstacles to his wish to look nice [ …] So, in the process, they thought a lot about how to communicate with him, and finally, the patient was willing to accept it, and then, even very grateful, because they helped him, knowing that he is still keen on looking nice, they made his urine bag smaller, meaning he could neatly put it in his pants, and the patient was so happy!" (LJ-035).
Medical professionals take into account the psychosocial issues that affect the treatment of diseases, such as the impact of the disease on individuals and families, the role changes, financial problems, how patients are treated by relatives and friends, the nature of their work, and even spiritual issues, such as faith and spiritual support, which are parts of balanced medical acts.
"Because like us, whether as physicians, nurses, or medical staff, we all have a clinical goal, and that is to assist this patient in this care, [ …] For example, they want to emphasize the body and mind, but our clinical part is only about the body. While the psychological part is also there, it may just be we have not asked about it (LJ-014) or we may not be able to handle it. At most I ask the psychologist, I look for social workers, or I talk to him, [ …] When I am about to enter the ward, I don't know what story I will encounter, but then, I would slowly teach them how to identify the stories related to their care..." (LJ-015).
Furthermore, healthcare providers must pay attention to the balance of personal life, mind, and body, and seek reconciliation and responses between their various medical duties and personal life factors (including resistance, hesitation, and challenges). Maintaining appropriate medical relationships while avoiding burnout is also a practical and important balancing method.
Medical reflection
Narrative medicine focuses on medical reflection and includes: recognizing and tolerating medical uncertainties and individualized medical needs; the self-reflection and self-consciousness of medical personnel (who can feel a sense of their own feelings and can self-monitor their physical and mental state); reflection in action (such as, how to effectively complete medical tasks, how to use evidence-based medical care, and how to pay attention to and evaluate medical quality); reflection after action (such as, how to identify a problem, identify a root cause, or make future improvement plans); and reflection on future actions (such as, how to continue self-directed learning and progress and reflect on the nature of life). Interviewee SJ mentioned a good reflection metaphor: "It's like a trout in the rapid current. She suddenly finds some stones to hide behind and stays there, because the vortex there is not that big, and then, there is the purest oxygen, aren't we like this? There are not so many hours of sobriety, they are taken away by others, we are just looking for them, and we suggest (medical) to students every day that they need to find time to think about what happened today, [ …] that is, you can review in this short times of three or five minutes, what happened to you today, what enlightenment you got..." (SJ-025).
Interviewee FZ also offered a good explanation for inspiration from a patient's disease story: "Understand the patient as a book, and then, when you are reading this book, you can actually understand him from many angles, or you can actually have a lot of life conversations with him, including citing from your own life experience. In fact, there may be some kind of communication or exchange between you and a patient, and this will let you know about life or death and disease in your clinical work [ …] He has a certain experience of life, and in fact, he is standing in one of the richest fields, and then, an observation and introspection offered in this area can help him continue to go on, and this is a very good way for doctors to practice medicine." (FZ-007).
This study further integrated the above conceptual connotations and formed descriptions and a diagram of the conceptual framework (see Fig. 1). When a patient enters the medical system, guided by the patient-centered core horizon, medical personnel can pay attention to the narrative framework of the patient and the authenticity of the disease and begin to form a narrative construction. First, they must actively listen to the patient's disease story and carefully observe and understand the physiological, psychological, social, and cultural factors in the patient's illness story, as well as the key events in their life course. Medical personnel can then use narrative thinking, pay attention to the touching stories to gain enlightenment for medical treatment, link the disease to life through such inquiry, and find connections and interpretations of the narratives. Through such inquiries, they can reach across differences, treat patients as family members with empathy, and establish a medical relationship of inter-subjectivity and affiliation through the communication skills of facilitation and empowerment. In clinical care, medical personnel should provide patients with responsive care, and patients should participate in shared medical decision-making. Moreover, medical personnel must be able to engage in self-reflection from time to time throughout this process.
Discussion
The results of this research were constructed based on the experience and practice of medical education experts and scholars. The research results could be important for the development of narrative-oriented medical education.
Narrative-based medical knowledge starts from the patient's point of view and invites the patient to describe incidents that he or she cares about, which are not necessarily related to the disease itself but could be about the illness, the impact of the disease, or the patient's perception of and emotions about the disease. These are all subjective expressions and are associated with the patient's social and cultural background, personality, growth history, and values. Patients often use metaphors or symbols to explain their conditions. Each patient has diverse intentions and issues that are non-generalizable, and each patient has a unique narrative. Physicians and medical personnel with narrative competence can appreciate patients' illness stories and their fight against disease, and they will be more consciously able to help their patients reshape their life stories through medical relationships and patient-centered medical interventions. Under the efforts of cooperative medical care, patients' illness stories can be infused with more positive factors, such as understanding, support, and hope, thus achieving a healing transformation. On the other hand, the medical personnel involved in the medical experience and the story co-constructed with their patients will also gain an enriched experience of human nature and life and deepened medical beliefs, thus nourishing their practical ability to care for and attend to their patients.

Fig. 1 Integrated diagram of the narrative competence conceptual framework
Although some comparisons could be made between conventional medicine (as well as modern technically enhanced biomedical approaches) and narrative medicine, they should not be viewed as an absolute dichotomy. As well summarized by Milota, van Thiel, and van Delden [12], "better attention to and appreciation of narratives in clinical settings can help doctors bridge gaps between their mediopathological knowledge and the experiential knowledge contained in their patients' stories", thus, the application of narrative medicine with narrative competence could be advantageous to meet the growing demand for patient-centered services and shared decision-making in a more diverse modern medical environment.
Health professionals who value patient-centered medical services, and take note of the psychosocial issues of patients, may prefer narrative medicine and consider this method meaningful and approachable. In other words, they can relate to this emerging model by sharing the commonalities between traditional medicine and narrative medicine, while drawing on or referring to their experience and expertise.
Regardless of their treatment orientation, narrative medicine-informed health practitioners may take advantage of the multi-dimensional narrative competence of this model, as presented in this article, in order to strengthen their awareness and readiness in different competence areas. Despite criticism, and even some tension, the ongoing dialogue between biomedical tradition and narrative medicine will enrich our understanding and reflections of patients, diseases, treatments, and human life, and thereafter, as medicine advances, we can anticipate more integrated and complementary medical services.
Compared with the narrative competence described by Rita Charon, the findings of this study are more specific. For example, Charon's narrative competence refers to the abilities to acknowledge, absorb, interpret, and be moved, but it remains a major challenge to translate these terms into content for medical education. This study found that the connotation of the narrative construction theme generally encompasses Charon's definition of narrative, while at the same time being richer and more layered. In addition, Charon believes that a practitioner with narrative competence will show such characteristics as empathy, reflection, professionalism, and trustworthiness in clinical practice [1]. This study likewise found that these are important traits in medical relationships. However, this study also found that responsive and balanced medical care addressing patient needs can complement the deficiencies of the past literature.
For patients with chronic and major diseases, feeling ill means not only symptoms and duration but also a series of life changes that are full of stories of sadness and sporadic encouragement. Through illness story-telling and reading, medical students can learn more about the overall phenomenon of falling ill, perceive the patient's subjective experience and situation, and identify key psychosocial issues. Focusing on the narrative and reflective process of disease helps develop the abilities of narrative listening and narrative understanding needed to construct a patient-centered perspective.
Narrative frames can further be used as an epistemology of narrative medicine education. With such an ability, medical personnel can understand that to heal is to help patients find peace and wellness while fighting for recovery from their illnesses. This understanding and frame require medical students and medical professionals to be aware of how patients go through and cope with illness. To achieve peace and wellness, interdisciplinary collaboration that attends to patients' different needs is essential from a narrative perspective. An in-depth understanding of and appreciation for the functions of different medical personnel and active dialogue among them will facilitate such collaboration, which should be cultivated in the early stages of medical education [29].
Often, treatment requires alliances and cooperation among patients and physicians as well as other medical personnel. Building sufficient understanding and trust among all players is relevant to the course and outcome of the treatment. Medical education programs that value medical relationships and provide practical training should develop students' relationship competence to demonstrate abilities for empathy, communication, affiliation, and inter-subjectivity in the treatment context. More detailed case observations and reflections and an emphasis on medical interaction will not only help students internalize relationship competence but also enhance their confidence to work with different patients and colleagues [30]. It will be useful for medical teachers to provide feedback to reinforce such learning processes.
Healthcare teamwork and interprofessional collaboration involve a great deal of quality communication. Likewise, as communication is the basis of the doctor-patient relationship and treatment, the importance of communication has been widely recognized in medical education and medical services. Incorporating narrative competence into the medical school curriculum could make a unique contribution to facilitating communication among patients, doctors, and medical personnel. By combining storytelling (stories of patients and students), diary writing, and reflection activities, medical teachers can help students, in an experiential manner, to identify, absorb, and interpret the illness and treatment stories constructed by patients and medical staff, and to be moved [24,31]. This process will help students develop empathy, multiple perspectives, and mutual respect, which will ultimately enhance medical communication and relationships. The improvement of overall narrative competence, especially in the areas of narrative construction (i.e., narrative listening, narrative understanding, etc.) and medical relationships (i.e., affiliation, inter-subjectivity, etc.), will empower medical staff to communicate and interact more effectively with patients and colleagues.
Narrative medicine-reinforced medical practice helps medical work meet the growing emphasis on medical ethics and patient autonomy. Medical professionals with narrative care are committed to responsive care through balanced acts and face ambiguity and uncertainty in the medical process through reflection. According to the results of this study, the medical reflection and responsive care found in the narrative perspective emphasize the patient's subjectivity, participation, and related connections, value and adopt patients' opinions, and assist in balancing uncertainty and creating individualized medical care in a rapidly changing medical environment. In fact, this responds to practice-based learning and improvement (PBLI), one of the six core competencies highlighted in the current medical education field [32,33], and to the spirit of participatory medicine [34]. The establishment of narrative competence will lead medical members to reflect on the past from the perspective of their patients, implement self-directed learning, and find continuous motivation to make progress for their patients.
Conclusion
Narrative medicine can enhance the professionalism of medical personnel, and identifying the kind of narrative competence medical personnel need in order to clinically approach their patients' illness experience was the main purpose of this study. Using the method of qualitative inquiry, this study conceptualized the four dimensions/themes of narrative horizon, narrative construction, medical relationship, and medical care, as well as 12 subthemes of narrative competence. Among these subthemes, the patient-centered frame was found to be the core of the epistemology of narrative medicine education. On this basis, medical personnel should have the ability to listen and understand and, at the same time, have the narrative thinking and representation abilities needed to approach the patient and construct a patient-based illness story. In clinical practice, medical personnel must be able to establish a relationship of mutual communication with the patient so that they can reflect from time to time and construct medical care that responds to the patient's needs.
Previous related literature has found that narrative medicine education programs generally have a positive impact [12], but there is little research on measuring the outcomes of medical personnel's narrative competence. This study provides an initial conceptual framework of medical personnel's narrative competence, which can be applied in clinical and medical education, especially in the three-step reading-reflection-responding process of narrative medicine education and training, to further evaluate the effect of narrative medicine training programs on improving medical personnel's narrative competence.
Additional file 1. Interview Protocol (Interview Guide)

Abbreviations

NC: Narrative competence; NM: Narrative medicine
"year": 2020,
"sha1": "4a0947c671415a789e5f0747ed9ceffd628671be",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12909-020-02336-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "4a0947c671415a789e5f0747ed9ceffd628671be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
RIPK3 Promotes JEV Replication in Neurons via Downregulation of IFI44L
Japanese encephalitis virus (JEV), the leading cause of viral encephalitis in Asia, is neurovirulent and neuroinvasive. Neurons are the main target of JEV infection and propagation. Receptor interacting serine/threonine-protein kinase 3 (RIPK3) has been reported to contribute to neuroinflammation and neuronal death in many central nervous system diseases. In this study, we found that the progression of JE was alleviated in RIPK3-knockout (RIPK3−/−) mice in both peripheral and intracerebral infection. RIPK3-knockdown (RIPK3-RNAi) neuro2a cells showed higher cell viability during JEV infection. Moreover, the JEV load was significantly decreased in RIPK3−/− mouse-derived primary neurons and RIPK3-RNAi neuro2a cells compared with wild-type neurons, but this was not observed in microglia. Furthermore, RNA sequencing of brain tissues showed that the level of the interferon (IFN)-induced protein 44-like gene (IFI44L) was significantly increased in JEV-infected RIPK3−/− mouse brains, RIPK3−/− neurons, and RIPK3-RNAi-neuro2a cells. It was then demonstrated that the propagation of JEV was inhibited in IFI44L-overexpressing neuro2a cells and enhanced in IFI44L and RIPK3 double-knockdown neuro2a cells. Taken together, our results showed that the increased expression of RIPK3 following JEV infection played complicated roles. On the one hand, RIPK3 participated in neuroinflammation and neuronal death during JEV infection. On the other hand, RIPK3 inhibited the expression of IFI44L to some extent, leading to the propagation of JEV in neurons, which might be a strategy for JEV to evade the cellular innate immune response.
INTRODUCTION
Japanese encephalitis virus (JEV) is a positive-sense, single-stranded RNA virus belonging to the genus Flavivirus in the family Flaviviridae. JEV is both neurovirulent and neuroinvasive and can lead to severe encephalitis (Lannes et al., 2017). Glycoprotein E mediates JEV entry through attachment and endocytosis, followed by membrane fusion and uncoating (Wang et al., 2017; Yun and Lee, 2018). Pattern recognition receptors (PRRs) in infected cells, such as retinoic acid-inducible gene I (RIG-I)-like receptors and Toll-like receptor 3 (TLR3), can recognize viral components and induce the production of interferons (IFNs), which then drive the expression of various IFN-stimulated genes (ISGs) through the IFN receptor (IFNR)/Janus kinase 1 (Jak1)/tyrosine kinase 2 (Tyk2)/signal transducer and activator of transcription (STAT)1/STAT2 pathway to fight against virus invasion (Liu et al., 2013; Han et al., 2014). As a result of such interactions, JEV has developed many strategies to counteract the host innate immune response (Zhou et al., 2018).
Receptor interacting serine/threonine-protein kinase 3 (RIPK3) has been shown to participate in several biological and pathological processes and to play complicated, even controversial, roles in different host cells during various viral infections (He and Wang, 2018). The activation of RIPK3 and subsequent mixed lineage kinase domain-like pseudokinase (MLKL) phosphorylation can lead to cellular necroptosis and damage-associated molecular pattern (DAMP) production (Pasparakis and Vandenabeele, 2015). It has been reported that RIPK3-mediated necroptosis destroys host cells and limits the propagation of viruses such as herpes simplex virus (HSV), influenza A virus (IAV), and vaccinia virus (VV) (Wang et al., 2014; Huang et al., 2015; Nogusa et al., 2016; Koehler et al., 2017). RIPK3 has also been shown to promote or inhibit viral propagation in a cell death-independent manner during coxsackievirus B3 (CVB), IAV, and Zika virus (ZIKV) infections (Harris et al., 2015; Downey et al., 2017; Daniels et al., 2019). Additionally, it has been reported that RIPK3 contributes to the production of the chemokines CXCL10 and CCL2 in West Nile virus (WNV)-infected neurons to recruit T lymphocytes and inflammatory myeloid cells to the central nervous system (CNS) (Daniels et al., 2017). In a previous study, we found that JEV infection induced the expression of MLKL, leading to necroptosis of neurons and neuroinflammation, which was shown to be alleviated in JEV-infected MLKL-knockout mice (Bian et al., 2017). However, the role of RIPK3 in JEV infection is unknown.
In this study, we found that the survival rate of RIPK3-knockout (RIPK3−/−) mice was significantly increased after JEV infection compared to that of wild-type (WT) mice. The expression of RIPK3 in neurons was increased after JEV infection, and cell viability was improved after RIPK3 knockdown. We also found that the replication of JEV in RIPK3−/− mice and neurons was inhibited to some extent. Comparison of the RNA-sequencing results of JEV-infected brain tissues between WT and RIPK3−/− mice showed that a series of IFN-stimulated genes (ISGs) were upregulated in RIPK3−/− mice, especially the IFN-induced protein 44-like gene (IFI44L). It was then demonstrated that IFI44L inhibited JEV propagation in neuronal cells and that the increased expression of IFI44L contributed to the inhibition of JEV in RIPK3−/− neuronal cells. Thus, we speculate that the slight increase in RIPK3 might be a strategy for JEV to evade cellular immunity in neurons.
Ethics Statement
All animal experiments were reviewed and approved by the Animal Care and Use Committee of the Laboratory Animal Center, Air Force Medical University (animal experimental ethical inspection number 20160112). All experiments were carried out in compliance with the recommendations in the Guide for the Care and Use of Laboratory Animals.
Receptor Interacting Serine/Threonine-Protein Kinase 3-Knockout Mice
The RIPK3+/− C57BL/6 mice were a gift from the lab of Dr. Yazhou Wang (Department of Neurobiology and Collaborative Innovation Center for Brain Science, School of Basic Medicine, Air Force Medical University) and were kept in a specific pathogen-free (SPF) facility. Toe DNA from newborn mice was extracted and amplified with PrimeStar (Takara, Japan). The products were then analyzed by agarose gel electrophoresis to genotype WT, RIPK3+/−, and RIPK3−/− descendants. WT and RIPK3−/− mice (6-8 weeks) were infected with 5 × 10⁶ JEV plaque-forming units (PFU) in 20 µl phosphate-buffered saline (PBS) by footpad injection or with 100 PFU in 2 µl via intracerebral injection. The weight, behavior score, and deaths in each group were recorded twice a day, at 8:00-9:00 and 16:00-17:00, for 20 days, until all groups were completely stable. The scoring criteria were as follows:

0: no significant abnormal behaviors, piloerection, restriction of movement, body stiffening, or hind limb paralysis;
1: piloerection, but no restriction of movement, body stiffening, or hind limb paralysis;
2: piloerection and restriction of movement, but no body stiffening or hind limb paralysis;
3: piloerection, restriction of movement, and body stiffening, but no hind limb paralysis;
4: piloerection, restriction of movement, body stiffening, and hind limb paralysis;
5: piloerection, restriction of movement, body stiffening, hind limb paralysis, sometimes tremor and even death.
Cells and Virus
The JEV-P3 strain was propagated in the brains of 3-day-old inbred BALB/c suckling mice and titrated by conventional plaque assay.
Immunohistochemical Staining
Mice were administered propidium iodide (PI; 4 mg/ml, Sigma, in 0.9% NaCl) intraperitoneally (100 µl/20 g body weight) and euthanized 1 h later. Brains were harvested and protected from light. Brain sections of 10 µm were prepared with a vibratome. The slides were incubated with a primary anti-RIPK3 antibody (Abcam, Cambridge, MA, United States) in PBS containing 0.1% Triton X-100 and 1% bovine serum albumin (BSA) at 4°C for 16 h. After washing, the sections were incubated with the secondary antibodies for 1 h at room temperature. The nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI), and coverslips were mounted with 50% glycerol in PBS.
RNA Sequencing Analysis
Wild-type and RIPK3−/− mice (4-6 weeks) were injected intracerebrally with PBS or 100 PFU JEV in 2 µl. Brains were harvested at 3 days post infection (dpi), washed three times with 4°C PBS, and stored in liquid nitrogen. Total RNA was then extracted for RNA sequencing. The expression values [reads per kilobase million (RPKM)] were normalized per gene over all samples: the mean and standard deviation (SD) of expression over all samples were calculated for each gene, and the expression value was linearly transformed using the formula (RPKM − mean)/SD. The results were analyzed using the Dr. Tom network platform of BGI and GraphPad Prism 7.
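For clarity, the transformation described above is a per-gene z-score across samples. A minimal sketch of that normalization in Python (the toy RPKM values and gene/sample names are invented for illustration; this is not the BGI platform's actual implementation):

```python
import pandas as pd

# Hypothetical RPKM matrix: rows = genes, columns = samples (toy values).
rpkm = pd.DataFrame(
    {"WT_PBS": [5.0, 120.0], "WT_JEV": [9.0, 80.0],
     "KO_PBS": [4.0, 110.0], "KO_JEV": [30.0, 85.0]},
    index=["Ifi44l", "Actb"],
)

# Per-gene z-score over all samples: (RPKM - mean) / SD.
zscore = rpkm.sub(rpkm.mean(axis=1), axis=0).div(rpkm.std(axis=1), axis=0)
print(zscore.round(2))
```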
DNA Construction
To inhibit the expression of RIPK3, an shRNA targeting mouse RIPK3 (5′-GCTGGAGTTTGTGGGTAAAGG-3′) was constructed. The mouse IFI44L gene segment, flanked by sites for the restriction endonucleases BglII and MluI, was generated through PCR (primer sequences in Supplementary Table S1) with Q5 High-Fidelity DNA Polymerase (NEB, United States) and cloned into the Lenti-GFP-zeocin plasmid (pLenti-GZ) (via BamHI and MluI restriction digests). The plasmids from positive clones were extracted and sequenced to obtain the recombinant IFI44L overexpression plasmid. Three oligo pairs targeting IFI44L (sequences in Supplementary Table S1), with sites for the restriction endonucleases AgeI and EcoRI, were annealed and inserted into the pLK0.1-puro plasmid. The plasmids from positive clones were extracted and sequenced to obtain the correct recombinant interfering plasmids.
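As a small illustration of the oligo-annealing step: the bottom-strand oligo is the reverse complement of the top strand. The helper below is a generic sketch of that check, not part of any kit or the authors' pipeline:

```python
def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an uppercase DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

# shRNA target against mouse RIPK3, taken from the text above.
target = "GCTGGAGTTTGTGGGTAAAGG"
print(reverse_complement(target))  # bottom-strand sequence for annealing
```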
Generation and Purification of Recombinant Lentiviral Particles
Lentiviral pseudoparticles were generated by cotransfecting 293T cells in T75 flasks with the proviral plasmids pLenti-IFI44L-GFP-zeocin, pLenti-shRNAi-RIPK3-puro, or pLenti-shRNAi-IFI44L-puro (1, 2, and 3) (12 µg); the envelope plasmid (pMD2.G, 6 µg); and the packaging plasmid (psPAX2, 9 µg). Before transfection, 9 ml DMEM was added to each T75 flask. For each transfection, 108 µl of the transfection reagent LipoFectMAX (ABP Biosciences, United States) was mixed with 27 µg total DNA in 2 ml DMEM for 30 min and then added to the T75 flask. The cells were maintained at 37°C for 6 h, after which the medium was changed to DMEM with 2% FBS. The supernatants were harvested at 48 and 72 h. Cell debris was removed by centrifugation at 1,000 × g for 10 min and then at 10,000 × g for 35 min. Subsequently, the viral suspension was concentrated at 165,000 × g for 4 h at 4°C, and the virus particles were resuspended in 500 µl DMEM and stored at −80°C.
Lentivirus Infection and Positive Cell Screening
Neuro2a cells and N9 cells were seeded into six-well plates at 4 × 10 5 overnight. The supernatant was removed, and RIPK3-shRNA lentiviral particles mixed with polybrene (1 µg/ml) were added. After infection for 4 h, 1 ml DMEM with 10% FBS was added. Then, 48 h later, DMEM containing puromycin was added to neuro2a cells (2 µg/ml) and N9 cells (10 µg/ml) to screen the positive cells. Neuro2a cells with IFI44L overexpression or downregulation by shRNA were also constructed as described above.
Plasmid Transfection
Neuro2a cells and RIPK3-RNAi neuro2a cells were plated in six-well plates at 4 × 10⁵ cells/well overnight. The supernatant was discarded, and a mixture of pCMV-GFPSpark or pCMV-RIPK3-OFPSpark (Sino Biological, China) (2 µg) with LipoFectMAX (6 µl) in 1 ml DMEM was added to each well. After incubation for 6 h, the medium was changed to DMEM containing 10% FBS. Then, 24 h after transfection, the cells were infected with JEV-P3 at a multiplicity of infection (MOI) of 0.1. At 12 and 24 h post infection (hpi), cells and supernatant were harvested for qRT-PCR and conventional plaque assay.
Cellular Viability Assay
Neuro2a cells were inoculated into opaque-walled 96-well plates at 10,000/well and maintained overnight. After JEV infection, viability was tested with a CellTiter-Glo Assay kit (Promega, United States). According to the protocol, the substrate and buffer were mixed thoroughly to obtain the detection reagent, and the plates were equilibrated at room temperature for approximately 30 min before the experiments. Then, 100 µl of detection reagent was added to the plates containing 100 µl of medium and mixed on an orbital shaker for 2 min to induce cell lysis. Then, the plates were incubated at room temperature for 10 min. The luminescence signal was recorded with a Bio-Tek Synergy HT Multi-Detection Microplate Reader and analyzed with GraphPad Prism 7.
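The kit reports raw luminescence, so viability is usually expressed relative to uninfected controls; the normalization below is our assumption for illustration (the text only states that signals were recorded and analyzed in Prism), with invented readings:

```python
import numpy as np

# Hypothetical luminescence readings (relative light units), triplicate wells.
mock = np.array([98000, 102000, 100500])    # uninfected control wells
infected = np.array([61000, 58500, 60200])  # JEV-infected wells

# Viability relative to the mock-infected mean signal.
viability_pct = 100 * infected.mean() / mock.mean()
print(f"Relative viability: {viability_pct:.1f}%")
```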
Isolation and Culture of Primary Neurons
Mice pregnant for 16-17 days were sacrificed, and the embryos were excised. The embryonic brains were harvested, and the meninges were removed completely. The cerebral cortices were then dissected and treated with papain (2 mg/ml) for 15 min, and 2 ml FBS was added to terminate the digestion. The liquid was removed, and the tissues were gently dissociated in DMEM with 10% FBS by pipetting. The tissue suspension was then filtered through a 70-µm cell strainer (Falcon, BD, United States). The isolated cells were seeded onto poly-L-lysine (100 µg/ml; Sigma, United States)-coated 60-mm plates and cultured in a humidified atmosphere at 37°C. After 24 h, the medium was changed to serum-free neurobasal medium (Gibco, United States) containing B27 (Gibco, United States) and L-glutamine (Gibco, United States).
Virus Infection
Neuro2a cells or modified neuro2a cells were seeded in six-well or 96-well plates at a density of 2 × 10⁵/well or 10,000/well overnight. The cells were then infected with JEV (MOI = 0.1). After incubation for 1 h, the virus suspension was removed, and fresh DMEM was added. The cells and supernatant were harvested at different time points (24, 48, and 72 h after infection) for qRT-PCR, Western blotting (WB), and conventional plaque assay.
N9 cells and RIPK3-shRNA N9 cells were seeded in six-well plates at a density of 4 × 10⁵/well overnight. The cells were then infected with JEV (MOI = 1). After incubation for 1 h, the virus suspension was removed, and fresh DMEM was added. The cells and supernatant were harvested at different time points (24, 48, and 72 h after infection) for qRT-PCR, WB, and conventional plaque assay.
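Since MOI fixes the ratio of infectious particles to cells, the inoculum follows directly from the stock titer and the number of seeded cells. A worked sketch (the stock titer is hypothetical; the cell numbers and MOIs match the two protocols above):

```python
def inoculum_volume_ml(moi: float, n_cells: float, titer_pfu_per_ml: float) -> float:
    """Volume of virus stock needed: MOI x cells / titer."""
    return moi * n_cells / titer_pfu_per_ml

# Assume a stock titer of 1e7 PFU/ml (illustrative only).
print(inoculum_volume_ml(0.1, 2e5, 1e7))  # neuro2a: 0.002 ml = 2 ul per well
print(inoculum_volume_ml(1.0, 4e5, 1e7))  # N9: 0.04 ml = 40 ul per well
```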
qRT-PCR
WT and RIPK3−/− mice were euthanized and perfused with PBS, and the whole brain of each mouse was harvested and stored at −80°C. Total RNA from mouse brains and cells was extracted with RNAfast1000 (PIONEER, China). cDNA was prepared by reverse transcription with total RNA as the template using the PrimeScript RT Reagent Kit (TaKaRa, Japan). qRT-PCR experiments were carried out using SYBR Green Real-Time PCR Master Mix (TaKaRa, Japan) according to the manufacturer's instructions (for the primers used in this study, see Supplementary Table S1). The mRNA expression was normalized to β-actin expression, and the data are shown as the relative change compared to the corresponding reference for each group.
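The description above (normalization to β-actin, then expression relative to a reference group) is consistent with the standard 2^-ΔΔCt method; a minimal sketch under that assumption, with invented Ct values:

```python
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """2^-(ddCt): target Ct normalized to beta-actin, relative to a reference group."""
    d_ct = ct_target - ct_actin              # sample, normalized to beta-actin
    d_ct_ref = ct_target_ref - ct_actin_ref  # reference group
    return 2 ** -(d_ct - d_ct_ref)

# Hypothetical Ct values: JEV RNA in one group vs. a reference group.
print(relative_expression(22.0, 17.0, 28.0, 17.2))  # ~55.7-fold relative increase
```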
Western Blotting
Total protein from the brain of each mouse or from cells was extracted with radioimmunoprecipitation assay (RIPA) buffer containing phenylmethanesulfonyl fluoride (PMSF) and phosphatase inhibitors and then quantified using a BCA Protein Assay Kit (Thermo, Waltham, MA, United States). Thirty micrograms of protein from each sample was loaded and electrophoresed on 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gels and then transferred onto polyvinylidene difluoride (PVDF) membranes (Millipore, Billerica, MA, United States). After being blocked with 3% BSA at room temperature for 60 min, the membranes were incubated with primary antibodies (see Supplementary Table S1) overnight at 4°C. The blots were then incubated with the corresponding DyLight 800/700-labeled secondary antibodies for 2 h at room temperature and visualized using an infrared imaging system (Odyssey, LI-COR, Lincoln, NE, United States).
Plaque Assay
BHK cells were seeded in six-well plates at 4 × 10⁵/well overnight. The supernatant was removed, and the cells were washed twice with 1× PBS. Serial 10-fold dilutions of the samples in DMEM were then added and incubated at 37°C for 2 h. The viral inoculum was replaced with 4 ml overlay medium (25 ml 4× DMEM, 50 ml 4% methylcellulose, 2 ml FBS, 23 ml ddH₂O) for 5 days. The overlay medium was washed off with 1× PBS, and the cells were fixed with 4% paraformaldehyde (PFA) for 30 min. Crystal violet dye was added at 2 ml per well for 15 min and washed off with running tap water. Finally, the plaques were counted.
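The titer follows from the plaque count, the dilution plated, and the inoculated volume. A worked sketch (the plaque count is invented, and the 2-h inoculum volume is not stated above, so the 0.5 ml here is an assumption):

```python
def titer_pfu_per_ml(plaques: int, dilution: float, volume_ml: float) -> float:
    """PFU/ml = plaques / (dilution x inoculated volume)."""
    return plaques / (dilution * volume_ml)

# 42 plaques counted on the 10^-5 dilution well with 0.5 ml inoculum.
print(f"{titer_pfu_per_ml(42, 1e-5, 0.5):.2e} PFU/ml")  # 8.40e+06 PFU/ml
```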
Statistical Analysis
All statistical analyses were performed using GraphPad Prism version 7.01 software. Statistical differences were determined using Student's t-test or two-way analysis of variance (ANOVA). P-values < 0.05 were considered significant.
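As a minimal illustration of the group comparisons described above (scipy stands in for Prism; the viral-load values are invented):

```python
from scipy import stats

# Hypothetical log10 viral loads in WT vs. RIPK3-/- brains at 3 dpi.
wt = [6.1, 5.8, 6.3, 6.0, 5.9]
ko = [5.2, 5.0, 5.5, 5.1, 5.3]

t, p = stats.ttest_ind(wt, ko)  # Student's t-test, two-sided
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 is treated as significant
```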
Receptor Interacting Serine/Threonine-Protein Kinase 3-Knockout Mice Showed Decreased Morbidity and Mortality After Japanese Encephalitis Virus Infection
In our previous study, MLKL−/− mice showed alleviated JE progression compared to WT mice to some extent. RIPK3, as the upstream signaling molecule of MLKL phosphorylation in classical necroptosis, has more complicated roles in apoptosis, inflammation, cytokine and IFN production, and the immunometabolic state (He and Wang, 2018). To determine the role of RIPK3 in JEV infection, RIPK3−/− mice were infected with JEV by footpad injection and monitored daily for survival, weight, and behavioral score. The results showed that RIPK3−/− mice had an increased survival rate compared with WT mice after JEV infection (Figure 1A). The average behavior score of RIPK3−/− mice was lower than that of WT mice (Figure 1B). The weight of RIPK3−/− mice was more stable (Figure 1C). Generally, RIPK3 deficiency led to decreased morbidity and mortality during JEV infection in vivo. In the early phase of infection, RIPK3−/− mice showed a more aggressive onset of JE than WT mice; we speculated that RIPK3−/− monocytes and dendritic cells contributed to the propagation of JEV in the peripheral organs. The RIPK3−/− and WT mice were therefore also infected with JEV by intracerebral (IC) injection to bypass the peripheral immune system. The RIPK3−/− mice were again more resistant to JEV infection than the WT mice (Figure 1D). Thus, the absence of RIPK3 in the CNS alleviated JE progression.

[Fig. 1 legend, panels B-D: (B) mean behavior score and (C) mean weight of each mouse, recorded at 8:00-9:00 and 16:00-17:00 and shown as the mean ± SEM of each group; (D) RIPK3−/− (n = 9) and WT (n = 13) C57BL/6 mice (8-10 weeks) were infected with 100 PFU JEV-P3 in 2 µl PBS via intracerebral injection, deaths were recorded daily, and the data are shown as Kaplan-Meier survival curves.]
Japanese Encephalitis Virus Infection Induced Receptor Interacting Serine/Threonine-Protein Kinase 3 Expression Which Contributed to Neuronal Death
To explore the changes in RIPK3 during JEV infection in vivo and in vitro, the expression of RIPK3 was detected.
After JEV infection, the expression of RIPK3 was increased in the CNS (Figure 2A). Moreover, the expression of RIPK3 was also increased in neurons and neuro2a cells following JEV infection (Figures 2B,C). The phosphorylation of RIPK3 leads to classical MLKL-mediated necroptosis. In the JEV-infected mouse brains, PI-labeled necrotic cells were found to have increased expression of RIPK3 (Supplementary Figure S1). To identify the role of RIPK3 in neuronal survival, RIPK3-RNAi-neuro2a cells were constructed (Figure 2D). Knockdown of RIPK3 increased the survival rate of neuro2a cells after JEV infection with different PFUs and infection times (Figures 2E,F). Thus, the expression of RIPK3 in neuro2a cells contributed to JEV-induced neuronal death.

Viral Loads Were Lower in the Brains of Receptor Interacting Serine/Threonine-Protein Kinase 3-Knockout Mice After Japanese Encephalitis Virus Infection via Intracerebral Injection

RIPK3 participated in the regulation of inflammation and cell survival, which directly or indirectly affected the propagation of virus during viral infection. To determine the role of RIPK3 in JEV propagation in the CNS, we tested the viral loads in the brains at 3, 4, and 5 days after JEV infection via IC injection. Surprisingly, the JEV RNA copy number in RIPK3−/− mice was significantly lower than that in WT mice at 3 and 4 dpi (Figures 3A,B). At 5 dpi, the viral load in most of the RIPK3−/− mice was still lower than that in the WT mice (Figure 3C). This result differs from infections with ZIKV and WNV, in which the viral load was increased in the CNS of RIPK3−/− mice after virus infection because of altered immunometabolism or decreased expression of chemokines in the neurons (Daniels et al., 2017, 2019).
Receptor Interacting Serine/Threonine-Protein Kinase 3 (RIPK3) Promoted the Propagation of Japanese Encephalitis Virus in Neurons
Neurons are the main target cells of JEV infection in the CNS. To observe the effect of RIPK3 on JEV propagation in neurons, we infected neuro2a cells and RIPK3-RNAi-neuro2a cells with JEV at an MOI of 0.1 and detected the viral load using qPCR and WB. The mRNA levels of JEV decreased significantly in RIPK3-RNAi-neuro2a cells compared to vehicle neuro2a cells at different times of infection (Figure 4A), which was consistent with the JEV-E protein levels (Figure 4B). The viral particles in the supernatants from the different infection groups were then assessed by plaque assay at a dilution of 1:100 (Figure 4C): there were many more infectious JEV particles in the supernatant from vehicle neuro2a cells than in that from RIPK3-RNAi-neuro2a cells. The results were further confirmed in primary neurons isolated from RIPK3−/− and WT prenatal mice. The viral RNA levels (Supplementary Figure S2A), viral protein levels (Supplementary Figure S2B), and number of particles in the supernatant (Supplementary Figure S2C) from the RIPK3−/− neurons were decreased compared with those from WT neurons. Thus, the propagation of JEV in RIPK3-deleted neurons was inhibited. Furthermore, to identify the role of RIPK3 in JEV replication, transient overexpression of RIPK3 in neuro2a cells and RIPK3-RNAi-neuro2a cells was conducted. The expression of RIPK3 in neuro2a cells was increased (Supplementary Figure S3A), and the cells were infected with JEV at 24 h after plasmid transduction. The viral copy number was increased in the RIPK3-overexpressing neuro2a cells (Figure 4D), as were the infectious viral particles in the supernatant at 12 and 24 hpi, as determined by plaque assay (Figure 4E). Moreover, RIPK3 supplementation of RIPK3-RNAi-neuro2a cells was performed (Supplementary Figure S3B), and the viral copy numbers (Figure 4F) and infectious particles (Figure 4G) were likewise increased in the complemented cells. In total, RIPK3 promoted the propagation of JEV in neuro2a cells.
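Plaque counts at a known dilution translate into titers by a single scaling. A back-of-envelope sketch of that arithmetic (the plaque count and inoculum volume are hypothetical, not values from this study):

```python
# Standard plaque-assay titer: plaques / (dilution plated x inoculum volume).
def titer_pfu_per_ml(plaque_count, dilution, inoculum_ml):
    """dilution is the fraction plated, e.g. 1/100 for a 1:100 dilution."""
    return plaque_count / (dilution * inoculum_ml)

# e.g. 42 plaques from 0.1 ml of a 1:100 dilution -> 4.2e4 PFU/ml
print(titer_pfu_per_ml(plaque_count=42, dilution=1/100, inoculum_ml=0.1))
```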
Receptor Interacting Serine/Threonine-Protein Kinase 3 Knockdown Had a Limited Effect on Japanese Encephalitis Virus Replication but Inhibited the Activation of Microglia
Microglia, as the main resident immune defensive cells in the CNS, play important roles during JEV infection (Thongtan et al., 2012). After exposure to JEV, microglia can be activated as innate immune cells and release a series of cytokines to recruit immune cells, contributing to immune defense as well as to neuroinflammation. To explore whether knockdown of RIPK3 affected the level of JEV replication in microglia, RIPK3-RNAi-N9 cells were constructed. The expression of RIPK3 was decreased significantly in RIPK3-RNAi-N9 cells compared to the vehicle control cells (Figure 5A). The viral load was then detected at 24 and 48 h after JEV infection by qPCR and WB (Figures 5B,D). There was no significant difference in viral expression between RIPK3-RNAi-N9 and vehicle-N9 cells at 24 h; however, the expression of JEV RNA and protein was increased slightly in RIPK3-RNAi-N9 cells at 48 h. The amount of infectious JEV particles in the supernatant of RIPK3-RNAi-N9 cells was comparable to that of vehicle-N9 cells (Figure 5C). Thus, RIPK3 had little effect on the propagation of JEV in N9 cells. Furthermore, the level of activated caspase-1 (Figure 5D) and the production of IL-1β (Figure 5E) after JEV infection were shown to be decreased in RIPK3-RNAi-N9 cells. Thus, the activation of microglia during JEV infection was inhibited in the absence of RIPK3, which is consistent with reports that RIPK3 participates in the formation of the inflammasome in microglia (Lawlor et al., 2015).
Interferon (IFN)-Stimulated Genes, Especially IFN-Induced Protein 44-Like Gene, Were Upregulated in RIPK3 −/− Mouse Brains and Neurons After Japanese Encephalitis Virus Infection
In contrast with previous reports that RIPK3 mediated the suppression of viruses in the CNS and neurons, the propagation of JEV in the CNS of RIPK3−/− mice and in RIPK3−/− neurons was inhibited. To explore the mechanism involved, RNA sequencing of brain tissues from RIPK3−/− and WT mice treated with JEV or PBS via IC injection was performed at 3 dpi. According to the volcano plots of differentially expressed genes, ifi44l was the most significantly upregulated gene in RIPK3−/− mouse brains compared to WT mouse brains after JEV infection (Figure 6A). Moreover, a number of ISGs were also upregulated in the brains after JEV infection, and more significantly so in RIPK3−/− than in WT mice (Figure 6B). To verify the expression of IFI44L mRNA, WT and RIPK3−/− mice were again injected with JEV via IC injection. Brains were harvested at 3 dpi, and the levels of JEV RNA and IFI44L mRNA were evaluated by qPCR. Consistent with the above results, the level of JEV was relatively lower in RIPK3−/− mice than in WT mice (Figure 6C), and the mRNA level of IFI44L increased significantly in RIPK3−/− mice compared with WT mice (Figure 6D). Furthermore, the expression of IFI44L in WT and RIPK3−/− primary neurons was detected by qPCR. The level of IFI44L increased significantly in RIPK3−/− neurons after JEV infection (Figure 6E), as well as in RIPK3-knockdown neuro2a cells (Figure 6F). Thus, the absence of RIPK3 promoted the expression of IFI44L in neurons and inhibited viral replication during JEV infection.
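A volcano plot such as Figure 6A simply combines effect size (log2 fold change) with significance (-log10 adjusted p) and applies thresholds on both axes. A toy sketch of that reading (the gene values below are invented for illustration, not the RNA-seq results):

```python
import numpy as np

# Hypothetical log2 fold changes and adjusted p-values for a few genes.
log2fc = {"ifi44l": 4.8, "oas1": 2.1, "zbp1": 1.7, "actb": 0.1}
padj   = {"ifi44l": 1e-12, "oas1": 1e-4, "zbp1": 3e-3, "actb": 0.9}

for gene in log2fc:
    x, y = log2fc[gene], -np.log10(padj[gene])
    hit = abs(x) > 1 and y > -np.log10(0.05)   # common volcano-plot cut-offs
    print(f"{gene:8s} log2FC={x:+.1f}  -log10(padj)={y:5.1f}  significant={hit}")
```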
The Increase of Interferon-Induced Protein 44-Like Gene Was Independent of the Phosphorylation of Receptor Interacting Serine/Threonine-Protein Kinase 3 or Mixed Lineage Kinase Domain-Like Pseudokinase
The phosphorylation of RIPK3 and subsequent MLKL activation are key to the classical necroptosis pathway (Weinlich et al., 2017). To explore whether the suppression of IFI44L depended on the phosphorylation of RIPK3 or MLKL, neuro2a cells were treated with inhibitors of RIPK3 or MLKL phosphorylation (Supplementary Figure S4). The expression of IFI44L was tested by qPCR at 24 and 48 hpi. The level of IFI44L mRNA increased significantly in RIPK3-RNAi-neuro2a cells but not in the inhibitor-treated groups (Figure 7A). Furthermore, the viral loads tested by qPCR and WB were decreased in RIPK3-RNAi-neuro2a cells (Figures 7B,C), whereas virus replication was slightly increased after treatment with the inhibitors at 48 h, especially in pMLKL inhibitor-treated neuro2a cells. Moreover, the infectious viral particles in the supernatants of RIPK3-RNAi-neuro2a cells were decreased, but those of the inhibitor-treated neuro2a cells were not (Figure 7D). Thus, the inhibition of JEV replication in RIPK3-RNAi-neuro2a cells did not rely on the phosphorylation of RIPK3 or MLKL.
Interferon-Induced Protein 44-Like Gene (IFI44L) Inhibited Japanese Encephalitis Virus Propagation in RIPK3-RNAi Neuro2a Cells
To identify the effect of IFI44L on JEV propagation, IFI44L-overexpressing neuro2a cells (IFI44L-neuro2a) were constructed. The mRNA level of IFI44L increased significantly in IFI44L-neuro2a cells (Figure 8A), and the viral RNA level decreased significantly in IFI44L-neuro2a cells compared with GZ-neuro2a cells at 24 and 48 h after JEV infection (Figure 8B). The expression of IFI44L was slightly increased in neuro2a cells at 48 hpi (Figure 8C). IFI44L was then knocked down in neuro2a cells using three IFI44L-targeting shRNAs (Figure 8D and Supplementary Figure S5A). Viral RNA copy numbers increased in IFI44L-RNAi-neuro2a cells compared to vehicle neuro2a cells (Figure 8E). This result indicated that IFI44L in neuro2a cells inhibited JEV propagation. Furthermore, IFI44L in RIPK3-RNAi-neuro2a cells was downregulated via shRNAs (Figure 8F and Supplementary Figure S5B). The viral RNA level increased in IFI44L/RIPK3 double-knockdown neuro2a cells compared with RIPK3-RNAi-neuro2a cells after JEV infection (Figure 8G). Thus, the upregulation of IFI44L in RIPK3-RNAi-neuro2a cells contributed to the inhibition of JEV propagation.
DISCUSSION
Recently, a number of studies have found that RIPK3 mediates complicated roles in cell death, inflammation, and immune defense during virus infection, depending on the host cell and virus (Orozco and Oberst, 2017). In this study, we found that RIPK3−/− mice were more resistant than WT mice to both peripheral and intracerebral JEV infection. The expression of RIPK3 was increased in neuronal cells following JEV infection, and the increased RIPK3 promoted JEV propagation. Moreover, the viral load was decreased in RIPK3-deleted neuronal cells because of the increased expression of IFI44L. Thus, we speculated that the induced expression of RIPK3 in virus-infected neurons might be a strategy for JEV to evade cellular innate immunity.
FIGURE 7 | The increase in interferon-induced protein 44-like gene (IFI44L) was independent of the phosphorylation of receptor interacting serine/threonine-protein kinase 3 (RIPK3) or mixed lineage kinase domain-like pseudokinase (MLKL). The phosphorylation of RIPK3 and subsequently of MLKL forms the classical necroptosis signal. To explore whether the inhibition of IFI44L was dependent on the phosphorylation of RIPK3 or MLKL, neuro2a cells were treated with 1.5 nM of the RIPK3 kinase inhibitor GSK872 (RD, United States) or 1 µM of the MLKL inhibitor necrosulfonamide (RD, United States) 2 h before Japanese encephalitis virus (JEV) infection, and the inhibitors remained until 48 h post infection (hpi). The experiments were repeated three times. (A) RNA from vehicle-neuro2a cells, RIPK3-RNAi-neuro2a cells, and inhibitor-treated neuro2a cells was extracted at 24 and 48 hpi, and the expression of IFI44L was evaluated by qPCR. Data are presented as the mean ± SD. (B) The viral load in vehicle-neuro2a cells, RIPK3-RNAi-neuro2a cells, and inhibitor-treated neuro2a cells was detected by qPCR. Data are presented as the mean ± SD. (C) Protein from RIPK3-RNAi-neuro2a cells, vehicle-neuro2a cells, and inhibitor-treated neuro2a cells was extracted, and the JEV E protein was tested by Western blotting (WB). (D) Supernatants from RIPK3-RNAi-neuro2a cells, vehicle-neuro2a cells, and inhibitor-treated neuro2a cells were collected after JEV infection for 24 and 48 h. The infectious JEV particles in the supernatant were detected by plaque assay at a dilution of 1:100.

The phosphorylation of RIPK1, RIPK3, and subsequently MLKL induces canonical necroptosis followed by DAMP production and inflammation (Pasparakis and Vandenabeele, 2015). In our previous study, we demonstrated that MLKL-mediated necroptosis accelerated JEV-induced neuroinflammation in mice and that MLKL−/− mice showed alleviated JE progression. In this study, we found that morbidity and mortality were decreased in RIPK3−/− mice compared to WT mice after peripheral JEV infection, and that JE progression was alleviated in RIPK3−/− mice after intracerebral infection. Thus, RIPK3 accelerated JE progression in mice. The expression of RIPK3 was increased in neurons after JEV infection, and RIPK3-silenced neuro2a cells showed increased cell viability during JEV infection compared with vehicle neuro2a cells. Thus, RIPK3 promoted neuronal death during JEV infection. It has been shown that RIPK3/MLKL-mediated necroptosis has an antiviral function in fibroblasts and epithelial cells during lytic virus infection by destroying viral reservoirs (Nogusa et al., 2016). Additionally, RIPK3 has been demonstrated to play complicated roles in virus propagation in cell death-independent ways. During IAV infection of macrophages, on the one hand, the virus induced RIPK3 accumulation in mitochondria and interfered with RIPK1/MAVS interactions to decrease IFN-β expression, which might be an immune evasion strategy adopted by IAV; on the other hand, the increased RIPK3 could activate protein kinase R (PKR), which stabilized IFN-β mRNA and led to increased IFN-β protein levels, which might be the host cells' response to counteract viral evasion (Downey et al., 2017). During CVB infection of intestinal epithelial cells (IECs), RIPK3 promoted CVB infection via positive regulation of autophagic flux (Harris et al., 2015). In neurons, the tug-of-war between cellular immune defense and viral evasion is more complex. Daniels et al.
(2019) found that the activation of RIPK1 and RIPK3 in neurons induced the upregulation of IRG1 and the metabolite itaconate to restrict viral replication through an immunometabolic mechanism during ZIKV infection. In our study, the propagation of JEV was inhibited in RIPK3-deleted neurons and promoted in RIPK3-overexpressing neuro2a cells. The differences might be explained by the fact that RIPK3 exerts different functions depending on the virus and the host cell. Moreover, we speculated that the increased expression of RIPK3 following JEV infection might be a strategy for JEV to evade cellular innate immunity. The components of JEV particles will be explored in subsequent studies to identify the exact mechanism by which JEV infection promotes RIPK3 expression in neuronal cells.
ISGs are cellular factors induced by type I IFN in host cells to suppress viral replication. Hundreds of ISGs have been identified, some of which are broad-spectrum antivirals, while others are specific to particular viruses and cells. Moreover, the antiviral activity of ISGs can be enhanced through synergistic effects (Schoggins, 2019). IFI44L has been found to inhibit the replication of HCV, ZIKV, and DENV. It has been reported that IFI44L inhibited the replication of HCV in Huh-7 cells (Schoggins et al., 2011). In addition, low levels of IFI44L, IFI27, and STAT1 contributed to high viral loads in HCV patients because of the impaired IFN production caused by the HCV NS3-4A protease (Bellecave et al., 2010). Recently, Robinson et al. (2018) also showed that failure to induce IFI44L contributed to the long-term propagation of ZIKV in germ cells. The expression of ISGs, including IFI44L, OAS1, and IFIT3, was downregulated by the NS4B protein of dengue virus (DENV) in human cells, resulting in high viral replication, an immune evasion strategy of DENV (Bui et al., 2018). In this study, a series of ISGs were increased in RIPK3−/− mouse brains after JEV infection, among which IFI44L was increased most significantly compared with the WT. The antiviral role of IFI44L in neuronal cells during JEV infection was demonstrated by both overexpression and knockdown of IFI44L. However, IFI44L did not completely inhibit JEV replication in RIPK3−/− neurons, which does not rule out roles for other molecules. In addition to IFI44L, other ISGs, such as ZBP1, OAS1, and Gbp2b, were also upregulated in RIPK3-knockout mice and neuronal cells and might defend against JEV synergistically.
Neurons were the main host cells of JEV propagation, and the expression of IFI44L was higher in RIPK3−/− neurons during JEV infection; however, the relationship between IFI44L expression and RIPK3 was unclear. We therefore compared the levels of the main cytokines in WT and RIPK3−/− primary neurons after JEV infection. The mRNA level of CXCL10 increased significantly in WT neurons upon JEV infection compared to RIPK3−/− neurons, consistent with previous reports that the production of CXCL10 was impaired in RIPK3−/− neurons during WNV infection (Supplementary Figure S6A). The level of tumor necrosis factor (TNF)α was also higher in WT neurons than in RIPK3−/− neurons after JEV infection, which might be the result of the different viral loads (Supplementary Figure S6B). We then detected the levels of IFNs, including IFNα, IFNβ, and IFNγ. Compared to those in WT neurons without JEV infection, IFNs in both WT and RIPK3−/− neurons increased after JEV stimulation (Supplementary Figures S6C-E). Overall, the total expression levels of IFNα in WT neurons were higher during JEV infection than those in RIPK3−/− neurons, while the expression of IFNβ and IFNγ was comparable. In terms of relative changes, the fold increase in IFNα over the corresponding control neurons was comparable between WT and RIPK3−/− neurons, but the fold increases in IFNβ and IFNγ were higher in RIPK3−/− neurons (Supplementary Figures S6F-H). Taken together, these results indicated that the baseline IFN expression in RIPK3−/− neurons was lower than that in WT neurons. Upon JEV stimulation, the larger relative increase in IFN in RIPK3−/− neurons might partly contribute to the production of IFI44L. However, more studies are needed to explore the mechanism by which RIPK3 regulates IFI44L expression.
In summary, RIPK3 has complicated roles in neuroinflammation and virus propagation during viral infection. In our study, we found a novel role of RIPK3 in JEV propagation in neurons, which differs from the role of RIPK3 in the CNS during infection with WNV, a virus of the same genus Flavivirus. Our findings further underline the intricate and subtle interplay between host and virus. We believe that RIPK3 may be a new therapeutic target for the development of virus replication inhibitors to treat JEV-induced encephalitis.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Care and Use Committee of the Laboratory Animal Center, Air Force Medical University (Animal Experimental Ethical Inspection number 20160112).
AUTHOR CONTRIBUTIONS
PB contributed to the conception and design, data collection and assembly, data analysis and interpretation, and manuscript writing. CY contributed to the data collection and assembly. XZ contributed to the data analysis and manuscript writing. CL, JiaY, ML, YW, JinY, and YuZ contributed to the data collection. FZ, JL, and YiZ contributed to the administrative support and provision of study material. ZJ contributed to the conception and design, administrative support, and final approval of the manuscript. YL contributed to the conception and design, financial support, and manuscript writing.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb.2020.00368/full#supplementary-material

FIGURE S1 | The colocalization of PI and RIPK3 in the brains of JEV-infected mice. WT C57BL/6 mice infected with JEV via footpad injection were administered PI intraperitoneally at 5 dpi and euthanized 1 h later. The expression of RIPK3 (green) was detected, and the colocalization of RIPK3 and PI (red) was recorded. Cells positive for both RIPK3 and PI (white arrow) and cells positive for RIPK3 without PI (blue arrow) were all detected.
FIGURE S2 | The propagation of JEV was inhibited in RIPK3 −/− primary neurons. Primary neurons from WT and RIPK3 −/− mice were isolated and cultured for 1 week and then infected with JEV at an MOI of 0.1. Data are presented as the mean ± SD. The experiments were repeated three times. (A) RNA was extracted at 24, 48, and 72 h after JEV infection, and the level of JEV was detected by qPCR. The expression of JEV mRNA in each group was normalized to actin-β expression. Then, the relative fold change in each group was calculated based on the normalized mean expression of WT at 24 h. (B) Protein from WT and RIPK3 −/− neurons was extracted at 24, 48, and 72 h after JEV infection, and the E protein of JEV was tested by WB. (C) Supernatants from WT and RIPK3 −/− neurons were collected at 24, 48, and 72 h post JEV infection. The infectious JEV particles in the supernatant were detected by plaque assay with double wells at a dilution of 1:1000.
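The normalization described in (A), target signal over actin-β expressed relative to a calibrator group, is the standard 2^-ΔΔCt relative quantification. A minimal sketch with invented Ct values (not the study's measurements):

```python
# 2^-ddCt relative quantification, as described in the legend above.
def fold_change(ct_target, ct_ref, calib_ct_target, calib_ct_ref):
    d_ct = ct_target - ct_ref                     # normalize to actin-beta
    d_ct_calib = calib_ct_target - calib_ct_ref   # calibrator: WT at 24 h
    return 2 ** -(d_ct - d_ct_calib)

# Hypothetical Cts: RIPK3-/- sample vs the WT 24 h calibrator -> 0.25-fold
print(fold_change(ct_target=26.0, ct_ref=18.0, calib_ct_target=24.0, calib_ct_ref=18.0))
```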
FIGURE S4 | The expression of pRIPK3 and pMLKL in each group. Vehicle-neuro2a cells, RIPK3-RNAi-neuro2a cells, and inhibitor-treated neuro2a cells were collected for protein extraction at 48 hpi. The protein levels of pRIPK3 and pMLKL were detected by WB.

TABLE S1 | shRNA targeting sequences, PCR primers, and antibodies used in this study.
Correlation Between Corneal Topographic, Densitometry, and Biomechanical Parameters in Keratoconus Eyes
Purpose To investigate the correlation between corneal densitometry, corneal topographic parameters, and corneal biomechanical properties in keratoconus. Methods A total of 76 eyes of 76 keratoconus patients were enrolled in this cross-sectional study. Corneal densitometry and topography were measured using Pentacam HR. Corneal biomechanical properties were measured using CorVis ST. Results The corneal densitometry values of the anterior 0 to 2 and 2 to 6 mm layers significantly correlated with the maximum keratometry values (R = 0.373, P = 0.001 and R = 0.276, P = 0.016, respectively), thinnest corneal thickness values (R = −0.331, P = 0.003 and R = −0.234, P = 0.042, respectively), anterior corneal elevation (R = 0.392, P < 0.001 and R = 0.323, P = 0.004, respectively), and posterior corneal elevation (R = 0.450, P < 0.001 and R = 0.367, P = 0.001, respectively). The stiffness parameter-applanation time 1 (SP-A1) significantly correlated with the corneal densitometry values for the anterior 0 to 2 mm (R = −0.397, P < 0.001), anterior 2 to 6 mm (R = −0.331, P = 0.004), central 0 to 2 mm (R = −0.306, P = 0.007), central 2 to 6 mm (R = −0.228, P = 0.048), posterior 2 to 6 mm (R = −0.243, P = 0.035), total 0 to 2 mm (R = −0.291, P = 0.011), and total 2 to 6 mm (R = −0.295, P = 0.010) layers. Conclusions The corneal densitometry values correlated with the severity of keratoconus and the SP-A1 values. Translational Relevance Corneal densitometry values may serve as markers to predict the severity of keratoconus.
Introduction
Diagnosis of keratoconus remains a challenge and a significant area of interest. Corneal tomography remains the diagnostic modality of choice, and the sensitivity and specificity of screening for keratoconus have improved significantly with the advances in corneal imaging. [1][2][3] Pentacam HR (Oculus, Wetzlar, Germany), an anterior segment analyzer, is based on the Scheimpflug principle. The attached Scheimpflug camera rotates 2.5 circuits to yield 25 tomographic images, from which it reconstructs the anterior segment of the target eye, thereby yielding measurements of the dimensions of the anterior segment. Pentacam HR also performs corneal densitometry by evaluating gray scale units, which reflect the corneal transparency (range, 0-100). 4 Corneal Visualization Scheimpflug Technology (CorVis ST) also uses a high-speed Scheimpflug camera and provides a series of deformation parameters, such as applanation time, applanation length, applanation velocity, deformation and deflection amplitude, peak distance, stiffness parameter-applanation time 1 (SP-A1), Corvis biomechanical index, and tomography and biomechanical index. 5,6 These parameters serve as markers for the corneal biomechanical properties. Studies have indicated significant differences in the corneal biomechanical parameters between keratoconic and normal eyes. [7][8][9] A Corvis biomechanical index of >0.50 is able to classify 98.8% of the cases of keratoconus correctly, with a sensitivity of 98.4% and a specificity of 100%. 10 A tomography and biomechanical index of >0.79 has 100% sensitivity and specificity for detecting clinical ectasia. 6,11 The extent of decrease in biomechanical strength correlates with the severity of keratoconus. 12 Likewise, corneal densitometry has been shown to be significantly increased in keratoconic eyes. 4 We investigated the correlations between corneal densitometry and biomechanics in patients with keratoconus.
Subjects and Methods
In this cross-sectional study, keratoconus was diagnosed using an anterior segment analyzer (Pentacam HR; Oculus) based on the Amsler-Krumeich grading system. Patients with keratoconus of stages 1 to 3 or forme fruste keratoconus (a cornea with no abnormal findings on slit-lamp examination and corneal topography, with keratoconus of the fellow eye) were enrolled from the Eye and ENT Hospital of Fudan University. In total, 76 eyes of 76 participants (50 men, 26 women; mean age, 23.93 ± 6.81 years) were included. This study adhered to the tenets of the Declaration of Helsinki and was approved by the ethics committee of the hospital. Informed consent was obtained from all the participants.
Ophthalmologic Examination
Each patient underwent corneal tomography examination using the anterior segment analyzer Pentacam HR. The corneal biomechanical parameters were assessed using CorVis ST (Oculus). All measurements were obtained by a single examiner (YS). The corneal tomography images were acquired in the sitting position. Participants were required to keep their eyes wide open and to place their chins on the chin rest during the examination. The examiner maneuvered the joystick based on the image on the monitor. When the camera was aimed at the corneal apex, the images were captured automatically.
Statistical Analyses
Statistical analyses were performed using SPSS Version 20 (IBM, Armonk, NY). All data were tested for normality using the Kolmogorov-Smirnov test. A mixed linear model with Bonferroni-adjusted post hoc comparisons was used to analyze the differences in the corneal densitometry values at different locations. Pearson's correlation tests were performed to examine the correlations between scale values that fit a normal distribution. Spearman's correlation tests were used to determine the correlations between data with a skewed distribution or ranked ordinal data. P < 0.05 was considered statistically significant.
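The rule described above (test each variable for normality, then choose Pearson or Spearman accordingly) can be sketched as follows. This is a schematic in Python/scipy with simulated placeholder data, not the patients' measurements or the SPSS procedure, and the z-scored KS test is a simplification of a formal normality test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
densitometry = rng.normal(20, 3, 76)                    # placeholder, n = 76 eyes
kmax = 48 + 0.5 * densitometry + rng.normal(0, 2, 76)   # placeholder Kmax values

def correlate(x, y, alpha=0.05):
    # Normality check for both variables, then pick the correlation test.
    normal = all(stats.kstest(stats.zscore(v), "norm").pvalue > alpha for v in (x, y))
    r, p = stats.pearsonr(x, y) if normal else stats.spearmanr(x, y)
    return ("Pearson" if normal else "Spearman"), r, p

print(correlate(densitometry, kmax))
```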
Results
The corneal densitometry values over the annulus of 2 to 6 mm followed a skewed distribution. The mean corneal densitometry values of each layer over the 0 to 2 and 2 to 6 mm annuli are listed in Tables 1 and 2. The main corneal tomographic data and corneal deformation parameters are listed in Tables 3 and 4, respectively.
Discussion
Corneal densitometry, also known as corneal backscatter, relates to corneal transparency and is influenced by changes in corneal histology. 13 It was first measured using a slit-lamp photometer with a pin-light attachment. 14 Scheimpflug cameras allow for objective evaluation of the densitometry. 15 It is noteworthy that, for normal eyes, the corneal densitometry decreases from the anterior to the posterior layers of the cornea but shows no relationship with the corneal keratometry. 16 We observed that the distribution of the corneal densitometry values was similar to that of normal eyes. However, unlike in normal eyes, the densitometry values of the anterior 0 to 2 and 2 to 6 mm layers significantly correlated with the Kmax values. 16 In addition, we noticed that the densitometry values of the anterior 0 to 2 mm, anterior 2 to 6 mm, and total 0 to 2 mm layers correlated with the thinnest corneal thickness, anterior corneal elevation, and posterior corneal elevation. This indicates that the severity of keratoconus may be correlated with the elevation of the corneal densitometry values, especially in the anterior layer. Elevated corneal densitometry has also been reported in various ocular surface disorders that may compromise corneal transparency, including keratitis, 17 endothelial abnormality, 18 and pseudoexfoliation syndrome. 19 Misalignment of the corneal collagen has been noted in keratoconus. 4 Further, periodic acid-Schiff-positive nodules, Z-shaped cracks caused by ruptures in Bowman's layer, 20 and wound healing reactions that trigger fibronectin degeneration in the extracellular matrix 21 may be the key factors compromising corneal transparency and leading to an increase in the densitometry values.
SP-A1 is a parameter related to corneal rigidity. It is defined as the ratio of the pressure loading (imposed by the air pulse) on the cornea to the displacement of the corneal apex (from the undeformed state to the first applanation). The SP-A1 value has been reported to be lower in thin corneas than in normal corneas. 10 In our study, the SP-A1 values were negatively correlated with the corneal densitometry values. This implies that, among patients with keratoconus, increased corneal densitometry values may indicate compromised corneal stiffness. Molecular biology studies have reported that enzyme activation plays a key role in the degradation of the corneal stroma and in corneal thinning, thus affecting corneal stiffness. 22 An increased anterior and posterior surface elevation at the thinnest point of the cornea leads to the formation of a cone as well as an increase in the corneal keratometry. 23,24 The progressive increase in corneal irregularity, the decrease in corneal thickness, and the steepening of the corneal curvature might underlie the correlations between corneal densitometry, SP-A1, and Kmax.
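The ratio definition of SP-A1 above can be written out directly. In the CorVis ST literature it is commonly computed as the adjusted pressure at first applanation minus the biomechanically corrected IOP, divided by the apex deflection amplitude at A1; the sketch below uses that formulation with illustrative numbers and is not the device's implementation:

```python
# SP-A1 as a pressure-to-displacement ratio (illustrative values only).
def sp_a1(pressure_at_a1_mmHg, biomech_iop_mmHg, deflection_a1_mm):
    return (pressure_at_a1_mmHg - biomech_iop_mmHg) / deflection_a1_mm

# Stiffer corneas deflect less for the same load, giving a larger SP-A1.
print(sp_a1(pressure_at_a1_mmHg=31.0, biomech_iop_mmHg=15.0, deflection_a1_mm=0.22))
```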
The limitations of our study are as follows. First, the sample size was small. Second, a comparative group with normal eyes was absent; however, our main purpose was to investigate the potential correlations between the densitometric and biomechanical parameters in keratoconic eyes rather than to compare the densitometry values between patients with keratoconus and the normal population. Third, the randomly enrolled patients with keratoconus were not classified based on the location of the cone apex, which might have affected the distribution of the corneal densitometry over the entire cornea. Further studies are needed to determine the differences in the distribution of the corneal densitometry among patients with keratoconus with different types of cones.

Figure 3. The correlations between the stiffness parameter-applanation time 1 values and the corneal densitometry values obtained in the anterior layer over the annuli of 0 to 2 and 2 to 6 mm, the central layer over the annuli of 0 to 2 and 2 to 6 mm, the posterior layer over the annulus of 2 to 6 mm, and the total layer over the annuli of 0 to 2 and 2 to 6 mm (anterior 0-2 mm, R = −0.397, P < 0.001; anterior layer 2-6 mm, R = −0.331, P = 0.004; central layer 0-2 mm, R = −0.306, P = 0.007; central layer 2-6 mm, R = −0.228, P = 0.048; posterior layer 2-6 mm, R = −0.243, P = 0.035; total layer 0-2 mm, R = −0.291, P = 0.011; and total layer 2-6 mm, R = −0.295, P = 0.010, respectively).
In conclusion, we showed that the corneal densitometry values may correlate with the severity of keratoconus and the SP-A1 values in keratoconus eyes. The increased corneal densitometry values may allow the compromised corneal biomechanics in keratoconus to be predicted.
Autoimmune Hemolytic Anemia After Cord Blood Transplantation: A Retrospective Single-Center Experience
Objective: To describe the incidence, possible risk factors, and treatment options of autoimmune hemolytic anemia (AIHA) occurring after cord blood transplantation (CBT). Methods: We retrospectively analyzed the patients who underwent CBT at Peking University First Hospital between January 2004 and July 2022. Results: We identified thirty-six patients who received CBT. The median age was 27.5 years (range, 1.6–52). With a median survivor follow-up of 6 (range, 0.6–10.0) years, six patients developed AIHA (including 2 with Evans syndrome) at a median of 168 (range, 122–264) days post-CBT, for a 3-year cumulative incidence density of 8%. The mortality of AIHA was 50% and was mainly associated with concomitant infections (CMV reactivation rate nearly 100%). The possible risk factors for developing AIHA are CMV reactivation, GvHD, and HLA mismatch. Conclusion: AIHA is a clinically significant and common complication in recipients post-CBT. Corticosteroids combined with intravenous immunoglobulin (IvIg) are recommended for the treatment of warm antibody AIHA after CBT.
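Incidence density scales event counts by person-time at risk rather than by patient numbers, which matters when follow-up lengths differ. A minimal sketch of the calculation (the person-time denominator below is invented for illustration; the paper's own denominator is not reproduced here):

```python
# Incidence density = events per unit of person-time at risk.
def incidence_density(events, person_years):
    return events / person_years

# e.g. 6 AIHA cases over a hypothetical 120 person-years of follow-up
print(f"{100 * incidence_density(6, 120.0):.1f} cases per 100 person-years")  # 5.0
```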
Patients and Methods

Patients
A total of 36 patients with hematologic disorders and non-hematologic diseases undergoing CBT from unrelated donors or matched related donors at Peking University First Hospital between January 2004 and July 2022 were included in this study. This study was conducted in accordance with the Declaration of Helsinki. The institutional review board approved the protocol, and written informed consent was obtained from all patients or their guardians. The treatment plans, including graft selection, conditioning regimen, immune suppression, and supportive care, have been reported in detail previously. [11][12][13]

Conditioning Regimen

The pre-transplantation conditioning regimens varied according to the patient's diagnosis, previous treatment, and disease status. Twenty-two patients were treated with a modified busulfan/cyclophosphamide (Bu/CY) regimen. Eleven of them received antithymocyte globulin (ATG) at a total dose of 10 mg/kg. Two patients received a CY/total body irradiation (TBI) regimen. Twelve patients received a non-myeloablative regimen. These conditioning protocols were described in detail previously. [11][12][13]

Prophylaxis and Treatment of GvHD

Twenty-nine patients received the combination of mycophenolate mofetil (MMF), cyclosporine A (CsA), and a short course of methotrexate (MTX) as prophylaxis of GvHD. Only seven patients received CsA and MMF as GvHD prophylaxis. [11][12][13]

Serologic Tests

ABO group typing and antibody screening tests were performed on donor and recipient samples before transplantation and whenever patients required blood component transfusion. The direct antiglobulin test (DAT) was performed as part of the routine pre-transfusion compatibility testing. If positive, further testing with specific anti-IgG and anti-C3d reagents was carried out.
Definitions
AIHA was diagnosed in patients fulfilling all of the following criteria: a positive DAT, a positive indirect antiglobulin test with broad reactivity to RBCs in serum and eluate, clinical and laboratory evidence of hemolysis (increased lactate dehydrogenase and bilirubin levels, decreased Hb and haptoglobin levels, and increased transfusion requirements), and exclusion of other causes of hemolysis. 14 ITP was a diagnosis of exclusion, defined as isolated thrombocytopenia in the absence of other causes that may be associated with a low platelet count. 15 The response to therapy was assessed according to previous criteria. 14,16

Hematopoietic Recovery and Engraftment

Hematopoietic recovery was defined as the time to ANC ≥ 0.5×10⁹/L (first of 3 consecutive days) and platelet count ≥ 20×10⁹/L (first of 7 days without transfusion).
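The engraftment definitions above translate directly into a search for the first day that opens a qualifying run of counts. A sketch for the neutrophil criterion (the daily ANC values are invented; the platelet criterion is analogous, with a 7-day transfusion-free window at ≥20×10⁹/L):

```python
# First day of a 3-day run with ANC >= 0.5e9/L, per the definition above.
def neutrophil_engraftment_day(anc_by_day, threshold=0.5e9, run=3):
    for d in sorted(anc_by_day):
        if all(anc_by_day.get(d + k, 0) >= threshold for k in range(run)):
            return d
    return None  # no qualifying run observed

anc = {16: 0.2e9, 17: 0.6e9, 18: 0.4e9, 19: 0.7e9, 20: 0.8e9, 21: 0.9e9}
print(neutrophil_engraftment_day(anc))  # -> 19
```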
Hematopoiesis by donor cells was ascertained by testing for cells with the donor's ABO type, HLA antigen, sex chromosome, or a combination, in the recipient's PB or BM. Donor chimerism was determined serially on BM and/or PB at days 30, 60, 100, 180, and 360 after transplantation, with additional time points as needed.
Patient and Graft Characteristics
Thirty-six patients with leukemia, malignant lymphoma, aplastic anemia, metachromatic leukodystrophy (MLD), pyruvate kinase deficiency (PKD), and inflammatory bowel disease (IBD) underwent CBT. Patient and graft characteristics are summarized in Table 1.
Hematopoietic Recovery
Fifteen patients had died by the end of follow-up (3/6 of the patients with AIHA, and 12/30 of the other patients). Ten patients failed to achieve hematopoietic recovery after CBT. The other twenty-six evaluable patients had neutrophil engraftment at a median of 19 (range, 11-34) days and platelet engraftment at a median of 36 (range, 16-209) days.

The characteristics of the patients who developed AIHA are summarized in Table 2. Five patients were in complete remission (CR) and had complete donor chimerism at the time of diagnosis, while the other one had relapsed. Three patients with early AIHA had acute GvHD (grade II in two patients and grade IV in the other). Only one of these patients had chronic GvHD. Concomitant infections were present in all patients at the time of AIHA. Six patients had a cytomegalovirus (CMV) reactivation, including three cases of pulmonary polymicrobial infection (Pseudomonas aeruginosa, Klebsiella pneumoniae) and probable invasive fungal disease (IFD). One patient had pulmonary tuberculosis.
Serological Data
Major and bidirectional ABO mismatches between donor and recipient were present in five patients, whereas a minor mismatch was present in one patient. All six patients in this study developed AIHA caused by warm antibodies (IgG/C3d). Three out of four patients had concomitant antinuclear antibody (ANA) and platelet-associated immunoglobulin (PAIg), while one patient had multiple anti-rheumatoid (Rh) antibodies (including ANA, anti-nucleosome antibody, and anti-histone antibody). No antibodies against the ABO system were found in these patients.
Treatment and Outcome
All patients with AIHA or Evans syndrome were treated, except for one patient who relapsed and died of acute severe hemolysis within two days. Intravenous immunoglobulin (IvIg, 0.4 g/kg/day for 5 days) was the first treatment administered in the remaining three patients. At the same time, prednisone or methylprednisolone (at doses ranging from 1-2 mg/kg/day) was administered. Only three of them achieved partial remission (PR), and their AIHA had not relapsed by the end of follow-up.
Discussion
AIHA is a relatively common complication that may occur after any type of allogeneic HSCT, especially after CBT. Few cases of AIHA after CBT have been reported in China. The present research indicates that its incidence was as high as 16.7%, while the 3-year cumulative incidence density was 7.1%. AIHA after CBT occurred in approximately 5% of patients according to previous reports. 7,8 The reasons for the higher incidence may be as follows. Our study included 25% pediatric patients and predominantly HLA-mismatched donors, with a CMV reactivation rate of up to 70%. González-Vicent et al found that patients younger than 15 years, and patients using CB or an HLA-mismatched donor, were more likely to develop AIHA. 17 The development of autoimmune cytopenia (AC) was strongly associated with the presence of chronic GvHD, 8 and chronic GvHD is more frequently extensive after CBT, which may contribute to the higher incidence. Our research mainly focused on adults who were transplanted for hematologic malignancies. In this regard, a retrospective single-center study may more reliably reflect the true incidence of this complication in China. AIHA is also closely related to the presence of various infections: concomitant infections were frequently observed at the time of diagnosis of AIHA, and CMV reactivation was extremely common in this study. An intriguing possibility is that some of these infections could have triggered an abnormal immune response. In this respect, the danger model of autoimmunity suggests that signals from damaged cells after exposure to infectious agents can bind to antigen-presenting cells (APCs) and activate a systemic immune response.
Most of the patients with AIHA had an ABO mismatch between donor and recipient, which indicates that blood group incompatibility is associated with hemolysis. Regarding the serological data, all patients had IgG-mediated warm antibodies directed against antigens of the rhesus system, similar to what was previously reported. 18,19 No association was noticed between AIHA diagnosis and autoantibody positivity.
AIHA is a complication of allogeneic HSCT associated with a poor prognosis; however, an optimal therapeutic approach is lacking. This study included a small group of pediatric and adult patients with hematological and non-hematological disorders who received CBT at a single institution using a relatively homogeneous conditioning strategy, GvHD prophylaxis, and supportive care, as well as monitoring for autoimmune complications. The response to therapy was disappointing and the overall mortality was high. One patient died of concomitant infection, and massive uncontrolled hemolysis was the cause of death in two patients who did not respond to the first line of treatment. Despite aggressive therapy, a similar clinical course was recently reported. 20 Earlier identification and diagnosis of AIHA is key to improving efficacy and survival: in cases of a sudden drop in Hb or an increase in transfusion requirements, the diagnosis of AIHA should be considered early. All cases here were warm-antibody AIHA, which may impact the choice of therapy; in particular, it is advisable to define the best choice, sequence, and combination of drugs during the different phases of the disease. Corticosteroids combined with IvIg are preferred for early treatment. Our findings could help to increase awareness of AIHA after CBT and guide therapy for these autoimmune complications. Most patients with AIHA failed to respond to corticosteroids or IvIg and needed further treatment. Rituximab and sirolimus are effective options, especially for patients with cold agglutinin disease. 21 In conclusion, AIHA is a clinically significant and common complication in recipients post-CBT. CMV reactivation, GvHD, and HLA mismatch seem to increase the risk of developing AIHA. Earlier identification and diagnosis of AIHA is critical to improving efficacy and survival. Its prognosis was poor and mainly associated with concomitant infections. Corticosteroids combined with IvIg are recommended for the treatment of warm antibody AIHA after CBT. If ineffective, adjustment of immunosuppressant therapy should be initiated early.
Disclosure
The authors report no conflicts of interest in this work.
Microscopic investigation of subsurface initiated damage of wind turbine gearbox bearings
Wind turbine gearbox bearings experience premature failures by White Structure Flaking (WSF), which often occur much earlier than their designed life of 20 to 25 years. This results in increased operational and maintenance costs due to unplanned maintenance and early replacement. The main causes and the damage initiation mechanism of this premature failure are not fully understood, despite extensive research and investigation in recent years. In this paper, two planetary bearings from a failed gearbox of a multi-megawatt wind turbine are destructively investigated to characterize the subsurface microstructural damage and to understand the damage initiation mechanism leading to surface WSF. The results show that non-metallic inclusions are not the only initiators of subsurface damage. Microcracks also initiate in the subsurface and form macrocracks, which then propagate or connect to other macrocracks to reach the rolling contact surface, causing WSF. The characterization of different forms of subsurface microstructural damage shows a close correlation between the maximum shear stress and damage initiation. Butterfly wings are found to initiate from the compound type of non-metallic inclusions with low aspect ratio and to be associated with inclusion internal cracking in a direction approximately parallel to the axis of the maximum wing length.
Introduction
The designed life of wind turbines (WTs) is 20 to 25 years; however, the premature failure of wind turbine gearbox (WTG) bearings by White Structure Flaking (WSF) has often been reported [1]. This failure is associated with microstructural alterations beneath the rolling contact surfaces of bearing raceways and rolling elements [1][2][3]. Despite extensive research effort investigating bearing premature failure by WSF, the main causes and the damage initiation and propagation mechanisms remain a debated scientific subject [4][5]. Microscopic investigation of samples obtained from failed bearings of field-operating WTGs provides an insight into the microstructural alterations and the various forms of damage that occur. The Energy Dispersive X-ray analysis (EDX) technique is commonly used to determine the chemical compositions of the bearing materials and of microstructural defects such as non-metallic inclusions [6].
Different forms of microstructural damage, especially butterfly wings, have been investigated by a considerable number of studies [7][8][9]. The results found that the formation of microstructural alterations occurred at a specific depth range from the rolling contact surface. The butterfly wings approximately followed the shear stress distributions induced by the contact pressure, which led to suggestions that shear stress has an important effect on the initiation of this damage [10][11][12]. Non-metallic inclusions and material cleanliness have an important effect on damage initiation [13][14][15][16]. The inclusion type, size, and distribution are the main parameters affecting damage initiation [18]. Destructive investigation by sectioning the failed region of WTG bearings provides a two-dimensional view of the investigated plane, while the serial sectioning technique allows an approximately three-dimensional observation of the cracking network [19].
In this study, a destructive investigation of two failed WTG planetary bearings is conducted to examine the severely damaged regions of the bearing raceways microscopically and to characterize different forms of damage, such as butterfly wings, microcracks, and damaged inclusions. The results show that subsurface microcracks are another initiator of WSF, in addition to the WSF resulting from non-metallic inclusions. The subsurface maximum shear stress has an important effect on the initiation of different forms of damage, such as butterfly wings, inclusions with internal cracking, and inclusion separations at the inclusion-steel matrix boundaries.
Microscopic investigation
In this study, the severely damaged regions of two planetary bearings are destructively investigated. The samples for microscopic investigation are cut and examined in both the axial and circumferential directions of the bearing raceway, as illustrated in figure 1. Samples are mounted using conductive resins to expose the investigated surfaces. With this cutting procedure, the entire axial plane located on the centreline of the severely damaged zone, as well as six equally spaced circumferential sections of the raceway, can be investigated microscopically. Samples are ground, polished, and then etched with 2% Nital (2% nitric acid and 98% ethanol). An optical microscope and a scanning electron microscope (SEM) are used in parallel with the EDX technique to investigate the microstructural alterations and different forms of damage. Damage forms such as inclusion-initiated cracks and butterfly wings are characterised according to their damage features, defined by dimensions, depth beneath the rolling contact surface, inclination angle relative to the rolling surface, and inclusion aspect ratio (AR). Subsurface damage initiation is analysed by correlating the damage features with the possible loading conditions experienced by the bearings during their operation. Subsurface stress distributions beneath the contact surface, and the stress variation due to various loading levels, are determined using Hertzian contact theory. Stress distributions under the effect of surface traction are also calculated to analyse its expected role in damage initiation.
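For the line contact of a roller on a raceway, Hertzian theory gives the contact half-width and peak pressure in closed form, together with the classical result that the maximum shear stress is about 0.30 p0 at a depth of roughly 0.78 times the half-width (Johnson, Contact Mechanics). A sketch of those estimates (the load per unit length and effective radius below are illustrative, not the bearing's actual geometry or loads):

```python
import math

def hertz_line_contact(w, R, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3):
    """w: load per unit length [N/m]; R: effective contact radius [m]."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    b = math.sqrt(4 * w * R / (math.pi * E_star))           # contact half-width
    p0 = 2 * w / (math.pi * b)                              # peak contact pressure
    return p0, 0.30 * p0, 0.78 * b                          # p0, tau_max, its depth

p0, tau_max, z = hertz_line_contact(w=1.5e6, R=0.02)        # illustrative values
print(f"p0 = {p0/1e9:.2f} GPa, tau_max = {tau_max/1e9:.2f} GPa at z = {z*1e6:.0f} um")
```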
Results and discussion
In this study, 149 damaged inclusions are found, of which 55 inclusions (37%) are in the two axially sectioned samples and 94 inclusions (63%) are in the two circumferentially sectioned samples. These samples are chosen from the middle of the axially and circumferentially sectioned samples, i.e., samples 3 and 4 as shown in figure 1. The depths investigated in these samples are within 1 mm beneath the rolling contact surface, since the effect of the maximum contact stresses is not expected to extend beyond this depth [1][12]. Damaged inclusions have either separation damage, i.e., inclusions debonded at their boundaries from the steel matrix, cracking damage, or mixed damage of separation and cracking. Table 1 illustrates these damage forms. Four different types of inclusion damage by separation are identified: upper separation, lower separation, upper and lower separation, and side separations.
The most distinctive microstructural alterations found are butterfly wings; therefore, the investigation and analysis in this paper focus on this damage form. Butterfly wings are classified into single and double winged butterflies, and single winged butterflies are further classified into upper and lower single winged butterflies, respectively. The upper winged butterflies have the butterfly wing above the damage-initiating inclusion, i.e., extending from the inclusion towards the rolling contact surface, while the lower winged butterflies crack away from the rolling contact surface. The characterization parameters, including wing length, inclination angle, and depth beneath the contact surface, of 49 butterflies are analysed. The maximum shear stress zone will be compared in the following sections with the locations where the various forms of damage are found. [Table caption fragment: numbers and percentages of cracked inclusions.]
Butterfly wings
The depth distribution of the 49 observed butterfly wings is shown in figure 5(a). All butterflies are in the circumferentially sectioned samples, located approximately at the depth of the maximum shear stress. The number of butterflies observed increases with increasing depth; however, no butterfly is found at depths greater than 700 µm. Only one small butterfly wing is found parallel to the contact surface. A considerable number of butterfly wings are inclined to the rolling contact surface at 25° or 40°, accounting for 29% and 21% of all butterflies respectively, as shown in figure 5(b). Figure 5(c) presents the distribution of butterfly wings according to their wing lengths. Because the investigated samples are cut from the severely damaged region of the bearing raceways, where some parts of the contact surface have been removed by severe spalling, no butterflies are found at shallow depths beneath the rolling contact surface. Around 71% of the butterflies found have double wings, while 57% of the remaining butterflies (29% of all butterflies found) have a single wing on the upper side of the initiating inclusion and the rest have a single wing on the lower side. Single winged butterflies have shorter lengths compared with double winged butterflies. This leads to the hypothesis that butterfly wings may initiate as a single wing first, with the other wing appearing later, and the two then propagating together into longer wings. There is no evidence to indicate whether the upper or the lower wing initiates first; however, the point of wing initiation may depend on the location of the damage-initiating inclusion relative to the location of the maximum shear stress. These observations support the suggestion that the maximum shear stress is an influencing factor in the initiation of butterfly microstructural damage [22]. It is observed that cracks associated with butterfly wings are probably not part of the macrocrack network linked to the rolling contact surfaces, because they disappear after regrinding the sample surfaces on which the butterfly wings are observed. Hence, the butterfly cracks may not play an important role in subsurface damage propagation. All forms of damage are probably produced because the shear stress levels have exceeded a critical limit of the bearing material.
There is no clear correlation between the depths of the damaged inclusions and the occurrence of butterflies with upper or lower wings. Figure 6(a) shows two inclusions located at approximately the same depth (~320 µm); however, one inclusion has an upper single wing while the other has a lower single wing. Figure 6(b) shows two inclusions with butterfly wings: the inclusion with an upper wing is located at a depth of ~250 µm, whereas the inclusion with a lower wing is located at ~148 µm.
Damaged inclusions, microcracks, and butterfly wings are located deeper than the calculated region of the maximum shear stress, which indicates that the contact surfaces may have been subjected to much higher loading levels than the design stress level specified by the international standards, possibly exceeding the yield strength of the bearing material. To confirm this, the hardness of the bearing raceway contact surface inside and outside the loading zone is measured at 25 points and then averaged. The average surface hardness outside and inside the loading zone is 746 HV and 788 HV respectively, which indicates that the surface inside the loading zone has been hardened due to overloading. One of the largest butterflies found in this study is shown in figure 7(a). It is located at the end region of the severe spalling area, away from the rim of the downwind bearing (sample No. 6 in figure 1). This butterfly lies at a depth of ~470 µm beneath the rolling contact surface, within the maximum shear stress zone. The initiating inclusion has a darker colour compared to that of MnS inclusions, which have a light grey colour [13]. Energy Dispersive X-ray (EDX) analysis is used to confirm the inclusion's chemical composition, as shown in figure 7(c); the analysis shows that it is a compound inclusion of MnS, aluminium oxide, and silica, in addition to other chemical constituents. Thus, it is an inclusion of type D Dup according to the International Standard ISO 4967:2013 [13]. The aspect ratio (AR) is defined as the ratio of the inclusion lengths along the major and minor axes and is used to evaluate the inclusion shape. It is found that the majority of the butterflies initiate at inclusions having a low aspect ratio of around 2:1. It is also observed that the butterflies are likely to be associated with inclusions having internal cracks in a direction approximately parallel to the direction of the maximum length of the butterfly wing.

Microcracks are found near the sides and ends of the macrocracks, and the number of microcracks is much higher than the number of damaged inclusions. This leads to the postulation that both inclusions and microcracks are initiators of the subsurface microstructural damage leading to WSF, although the weak boundaries and the residual stress around inclusions also play an important role in subsurface damage initiation. Evidence of the role of subsurface microcracks in damage initiation is also shown in an axially sectioned sample in figure 9. A relatively large inclusion located close to the contact surface does not connect to the macrocrack network around it; although the inclusion is connected to another small inclusion by a crack (marked with a red ring), the two connected inclusions are not connected to the macrocrack network. A considerable number of microcracks around these two inclusions can be seen (marked by white arrows). This leads to the postulation that subsurface microcracks and/or the cracks initiated from the separation of inclusion boundaries propagate towards each other, depending on the direction of the maximum shear stress in highly stressed locations, and then propagate towards the contact surface, causing WSF.
Conclusions
This destructive investigation of two failed WTG planetary bearings by microscopic examination ha s found different forms of damage including butterfly wings, microcracks and damaged inclusions. The following conclusion may be drawn: Subsurface microcracks are another damage initiator in addition to non-metallic inclusions to produce subsurface microstructural damage leading to WSF. Butterfly wings are associated with the compound type of non-metallic inclusions with low aspect ratios. They are associated with inclusions with internal cracking in a direction approximately parallel to the axis of the maximum wing length. Butterfly wings and associated cracks may not be a part of the macrocrack network; the butterflies may firstly have a single wing and the other wing may appear later. Characterization of different microstructural damage forms confirms that the maximum shear stress is closely associated with the location of the subsurface microstructural damage. | 2019-04-27T13:12:14.144Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "85d7e2b60dc834dc982465a842c4a7423c203669",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1106/1/012029/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bb838746bd0c288dc812929688efe0ea9e9225ce",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
225693033 | pes2o/s2orc | v3-fos-license | Descartes on Mathematical Reasoning and the Truth Principle
The main purpose of this paper is to examine and assess the plausibility of Descartes’ thesis that it is by mathematical reasoning that we ultimately justify the Truth Principle, which is a metaphysical claim. Drawing upon contemporary logic and philosophy of mathematics, the paper argues that Descartes’ understanding of mathematical reasoning, especially concerning infinity, enables him to justifiably conclude that the Truth Principle is indubitable.
Introduction
During 1648, in his conversation with Burman 1, Descartes concludes: "So proofs in Metaphysics are more certain than those in Mathematics" (Burman, 1648: p. 47f.). Although proofs, and hence truths, of metaphysics are more "certain" than those of mathematics, the discipline of mathematical reasoning defines correct reasoning everywhere, even in metaphysics. I argue that valid mathematical reasoning for Descartes essentially follows the model established by Euclid, which Descartes amplifies in the algebra of exponential functions (i.e., what we call analytic geometry). According to Descartes' theory of knowledge, every clear and distinct idea perceived by the light of nature is true. Knowledge is the result of clearly and distinctly perceiving by the light of nature, and mathematical reasoning consists in clearly perceiving that each step in a sound argument is either intuited directly, by clearly and distinctly perceiving it by the light of nature, or else follows from previous steps, by clearly and distinctly perceiving by the light of nature that it so follows. According to the present interpretation, Descartes relies upon mathematical reasoning to explicate the concept of infinity, which is essentially mathematical. He relies upon the concept of infinity to define the nature of God and goes on to claim that without the notion of infinity, God cannot be conceived, because God's nature includes infinite goodness (as well as infinite wisdom and power). Now, the only possible explanation for the fact that we have an idea of God is that God exists; that is because only God has attributes that are sufficiently rich to give us the idea of God. As Descartes explains, the cause of an idea must have as much reality as the content of the idea, and only God could have sufficient reality (i.e., infinite perfections) to give us the idea of an infinite, metaphysical being. Because God exists, and is infinitely good, powerful, and wise, we know that it is impossible for us to be systematically deceived by a "malicious, evil demon". That implies the so-called "Truth Principle", which is that whatever we clearly and distinctly perceive by the light of nature is true. Now, according to the received semantics of the time, the content of a proposition is not in the least altered if it is prefaced by "it is true that", "it is affirmed that", or "it is denied that". It follows that to assert a proposition p is the same as to assert that the proposition p is true. Following this (admittedly objectionable) principle of Port-Royal logic, Descartes concludes from the Truth Principle that the Truth Principle itself is true.
It is at this point that Descartes' reasoning is questioned on grounds of circularity. Circularity cannot be avoided because the reasoning that justifies the Truth Principle must itself be clear and distinct reasoning by the light of nature, yet we know that this reasoning will yield the truth only if the Truth Principle is itself true; and we certainly cannot reasonably rely upon the Port-Royalist view that we are justified in asserting that the Truth Principle is true just because we are justified in asserting the Truth Principle. Admittedly, this is a flaw in Descartes' argument; however, three important qualifications must be kept in mind. First, we know that no theory stated in an ordinary, natural language may coherently state its own truth conditions, although conditions on formal languages developed in the 20th century have been more successful, if not entirely uncontroversial 2. Yet Descartes did not have the benefit of contemporary logic and semantics; he relied upon the semantics of his own time. Secondly, the mistake that Descartes made is not unique to his theory; it is rather a mistake that is inevitably made by any theory set forth in a natural language that states its own truth conditions. Thirdly, the fact that Descartes erred in a way that introduces circularity into his account does not mean that he actually failed to carry out his program. Descartes begins his seminal works Discourse on Method, Meditations on First Philosophy, and Principles of Philosophy by resolving not to accept any proposition that can be doubted and to accept propositions that cannot be doubted. Although Descartes correctly believes that the Truth Principle is indubitable, he nevertheless can doubt the proposition that the Truth Principle is true. What he cannot conceive is that the Truth Principle is true and that its truth can be doubted. In relying upon the Truth Principle, Descartes is remaining faithful to his project, which is to accept only that which is beyond doubt. The crucial caveat is that the fact that the Truth Principle is beyond doubt does not prove that it is true.

2 The problem of stating truth conditions within a formal language is manageable because there are clear restrictions on legitimate objects of reference. This result is established by formal structures that distinguish "the object language" from "the meta-language". The semantics of formal languages were initially developed by Tarski. Detailed discussion of truth in formal languages is beyond the scope of this paper, and therefore whether or not Descartes' arguments could be faithfully and fully represented in a formal structure is also beyond its scope. For a very helpful introductory discussion of this issue see (Beth, 1965: pp. 510-513).
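By way of illustration (a standard Tarskian example, not one drawn from this paper): the truth predicate belongs to the meta-language, which certifies, for each object-language sentence p, an instance of the schema

$$\mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p,$$

for example, True("snow is white") if and only if snow is white. Because no object-language sentence can apply the truth predicate to itself, a sentence like "this sentence is false" cannot even be formed, which is the restriction on legitimate objects of reference that the footnote mentions.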
Clear and Distinct Ideas Perceived by the Natural Light
Descartes begins Principles of Philosophy with the observation that "there are many preconceived opinions that keep us from knowledge of the truth". He ventures the thought that the only way to "free ourselves" from those "preconceived opinions" is to doubt "everything which we find to contain even the smallest suspicion of uncertainty" (Descartes, 1644: p. 193). His strategy is to try to reconstruct his knowledge by accepting only that which is absolutely certain and beyond doubt. Famously, the starting point of his grand project is the proposition "Cogito ergo sum". The 1644 program of the Principles of Philosophy is a restatement of the strategy that Descartes defined in Discourse on Method.
There Descartes reflects upon his strategy, in effect acknowledging that someone might ask him whether or not his strategy is itself beyond doubt. Descartes' response is the concession that the method that he takes "for gold and diamonds" may be "nothing but a bit of copper and glass". He invites his readers to judge his method for themselves (Descartes, 1637: p. 112).
In Proposition 30 of the Principles of Philosophy, Descartes reveals that his basic argument for the Truth Principle is comparatively straightforward: the "light of nature" or "faculty of knowledge" is given to us by God and can never include "any object" that is not true insofar as it "is encompassed by the faculty of knowledge", which is to say, insofar as it is "clearly and distinctly perceived" (Descartes, 1644: p. 203). This paper begins by attempting to analyze just what Descartes means by "clearly and distinctly perceives". This forces us to take a step back and deal directly with the possibility that clear and distinct perception, however powerful it supposedly is, just might not be sufficient to remove all doubt: not about everyday perceptions, not about conflating dreams with reality, not about doubting our powers of mind, not even about doubting the paradigm of all knowledge, mathematics itself.
There are two reasons for all those doubts. The first is that we have "seen people make mistakes in such matters and accept as most certain and self-evident things which seemed false to us". Beyond that, Descartes acknowledges that we cannot simply rule out the possibility that the supposedly omnipotent God who created us "may have wished to make us beings of the sort who are always deceived even in those matters which seem to us supremely evident" (Descartes, 1644: p. 194). Descartes believes that he can dispose of the first doubt because there are explanations for the fact that others incorrectly believe that they have made discoveries by reason. For example, Descartes argues that the lack of success hitherto enjoyed in subjects like physics is not due to a "defect in (the) power of reasoning", but rather to our reliance upon imagination rather than upon reason 3.
Presumably, Descartes is here referring to some of the ancient and embarrassing false beliefs that were corrected by the physics of his own time. For example, it is easier to imagine that Earth is stationary, and that the Sun and planets move around it, than it is to imagine that Earth and the planets move around the Sun; it is easier to envisage the Sun as setting than Earth as rising; it is easier to imagine that a heavier cannonball will fall to the ground faster than a lighter cannonball; and it is easier to think that objects moving across a flat surface would inevitably stop even if there were no force to stop them than it is to believe that those objects would continue to move forever unless a force stopped them. All these ancient beliefs that arose from uncritical imagination were dismantled by the physics that developed in the light of reason during the seventeenth century, the century that ushered in the "Enlightenment".

On the other hand, Descartes explicitly denies that the Cogito is a syllogism with a suppressed major premise like "Everything that thinks is or exists". On the contrary, we know "Cogito ergo sum" by a "simple act of introspection", by which the proposition becomes self-evident (Descartes, 1641a: p. 100). Even so, the standard of introspection raises yet another question: precisely what is revealed to be self-evident by introspection? It would appear to be impossible to define, but on the other hand it seems obvious, as it is exemplified by the connection that becomes apparent when we carefully consider the proposition "Cogito ergo sum". There is a vast literature on the logical analysis of the Cogito, and some of it implausibly rejects the idea that anything at all is self-evident when we affirm the Cogito. Some, in fact, deny that the Cogito even is a claim, much less an argument. In his famous essay "Cogito, Ergo Sum: Inference or Performance?", Jaakko Hintikka claims that the "function of the word cogito in Descartes' dictum is to refer to the thought-act through which the existential self-verifiability of 'I exist' manifests itself" (Hintikka, 1962: p. 129). The fact of one's existence is supposedly exhibited by the fact of one's thinking; in other words, one's existence is revealed in the act of thinking that one exists. As Hintikka claims, "the indubitability of this sentence is not strictly speaking perceived by means of thinking (in the way the indubitability of a demonstrable truth may be said to be); it is indubitable because and in so far as it is actively thought of". In Descartes's argument, the relation of cogito to sum is not that of premise to conclusion (Hintikka, 1962: p. 129f.). Alfred Ayer's account is definitely in sympathy with the gist of Hintikka's analysis. For Ayer, "the Cogito" is "degenerate" in the way in which every statement that is expressed by the sentence "this exists" is degenerate. Here the demonstrative "points to" the very object whose existence is affirmed. So, one might just as well point to the object affirmed as affirm its existence. The two acts, one of pointing, the other of affirming (or asserting), would convey exactly the same information. The upshot of all this is that "the Cogito" really does nothing more than reveal the information that is conveyed by affirmation of its truth (Ayer, 1956: p. 85f.). Somewhat later, in 1978, Bernard Williams offers a completely different analysis that is supported by several texts from Descartes' collected works.
According to Williams, "the Cogito" should be understood as a "bare statement of necessity" which can, on Descartes' view, be intuitively grasped. It derives from the general statement that it is impossible to think without existing, or, as Williams understands it: "In order to think it is necessary to exist". According to Williams, the Cogito is the affirmation of one's own existence that is validated by the fact of one's thinking (Williams, 1978: pp. 90-107).
A possible objection to Williams' view derives from Descartes' own denial that "the Cogito" should be affirmed as a syllogism that relies upon a general premise like "Everything that thinks, exists". This would appear to introduce an element of circularity into the argument, if only because the major premise of the syllogism would be an existential claim, and the very point of the "argument" is to establish the irrefragable existential claim that he (Descartes) exists. Williams emphasizes that the intuition that it is impossible to think without existing does not entail or even involve an affirmation of the existence of anything (Williams, 1978: pp. 103-105). By the Cogito, Descartes means only to affirm a necessary relation between thinking and existence (Descartes, 1641a: p. 100). I believe it is right to say that subsequent analyses of the Cogito have more or less taken the side of Williams in the grand debate. It is true that one possible difficulty with Williams' approach is that it appears to invoke the assumption that the necessary connection between thinking and existence is simply intuited, which may appear to some to introduce an unwelcome subjective element into Descartes' reconstruction of knowledge. Descartes himself resolved not to affirm anything that can be doubted, even in the slightest degree, but perhaps the intuition that it is necessary to exist in order to think is so obvious that it is beyond even the slightest doubt?
In a subsequent essay, E. M. Curley approaches this issue via Descartes' conception of analysis. Curley suggests that Cartesian analysis begins with the affirmation of the simple and progresses from it to the more complicated. This suggests that Descartes may have begun his analysis by seizing upon the intuition that in order for me to think, I must exist (Curley, 1986: pp. 153-176). Assuming that my thinking necessarily presupposes my existence leads (naturally? reasonably?) to the generalization that in order to think it is necessary to exist. This is a nice result because it takes a natural reading of the Cogito as the starting point, rather than the result of previous analysis. Moreover, the recognition that Descartes' analysis begins with an intuition raises the most important question: What could make an intuition beyond doubt? Descartes' answer is that an intuition is beyond doubt if and only if it is clearly and distinctly perceived by the natural light.
Clear and Distinct Reasoning Perceived by the Natural Light
Descartes asks: "… where they think they have the most perfect knowledge, may I not similarly go wrong every time I add two and three or count the sides of a square, or in some even simpler matter" (Descartes, 1641: p. 14). At the very same point, Descartes wonders whether his doubt might be removed by the consideration that God, who is supremely good, would not allow us to be deceived in a simple calculation. Yet Descartes also worries that we might well wonder why it is that God allows us to go wrong in any calculation (Descartes, 1641: p. 14). It should be emphasized here that Descartes does not claim to have a reason for doubting that the sum of two and three is five, but only for doubting that what appears to us to be forever indubitable might not actually be true. That reason is that we are not yet quite certain that God exists and hence cannot be sure that there is a guarantor who will ensure that what we clearly and distinctly perceive by the light of nature actually is true. The distinction between having a reason for doubting a particular belief and having a reason to doubt our capacity to form true beliefs is crucial. The most important text concerning this issue occurs in Frans Burman's account of his conversation with Descartes, which took place between April 16 and April 20, 1648 at Egmondae (Cottingham, 1976: p. ixf.). The conversation took up issues from the Discourse on Method, the Meditations on First Philosophy, the Objections and Replies, and finally from the Principles of Philosophy. In the conversation that pertains to the Discourse, Burman refers to a famous paragraph in which Descartes decides upon the method that he should choose in his attempt to determine exactly what it is that we can clearly and distinctly perceive by the light of nature. He writes that "mathematicians alone have been able to find any demonstrations-that is to say, certain and evident reasonings" (Descartes, 1637: p. 120). Descartes goes on to explain to Burman that mathematical "intelligence … is not to be gleaned from books, but rather from practice and skill". As we become more accomplished in mathematical reasoning, we become better equipped to investigate other studies (like physics), "since reasoning is the same in every subject" (Burman, 1648: p. 47f.). In this remarkable passage, Descartes is claiming that mathematical reasoning is the paradigm of all clear and distinct reasoning. He further attributes his own success in metaphysics to relentless practice in algebra, or what is now called "analytic geometry".
The above passage invites us to distinguish demonstrations or proofs from clear and distinct reasonings. Indeed, in the first part of the passage above, Descartes appears to be claiming that our knowledge of mathematical truths depends upon clear and distinct "reasonings". Here we must carefully distinguish the acts of demonstration, the "clear and distinct reasonings", from the propositions that are demonstrated, which is to say, produced by clear and distinct reasoning.
The Natural Light: Representation and Truth
We began the previous section, §3, by asking the question: What is clear and distinct reasoning? Descartes immediately turns our attention to the Cogito, which appears to be, or to embody, clear and distinct reasoning of some sort. Yet even a cursory review of the massive literature on Descartes reveals that Descartes' idea of clear and distinct reasoning has been viewed by many distinguished readers as anything but clear and distinct. In fact, Hintikka and Ayer deny that the Cogito is a piece of reasoning at all. Williams affirms that the Cogito is clear and distinct reasoning, but he bases his affirmation on the theory that a general proposition, that it is impossible to think without existing, is presupposed by the Cogito. I believe that resorting to a general principle can be a convincing reconstruction of the Cogito, but it hardly seems to remove the Cogito from all doubt.
Williams' suggested reading is broadly general, and therefore claims that more information is contained within it than is contained in the Cogito itself. At the beginning of the third meditation, Descartes explicitly considers the nature of thought and the classification of thoughts. He announces that his main concern will be thoughts of the kind "that can be properly said to be the bearers of truth and falsity". He distinguishes those thoughts (that can be the bearers of truth) from those that are "as it were, the images of things". Thoughts that are the images of things need to be distinguished from thoughts that "induce something more than the likeness of that thing", which are therefore called "emotions or volitions". Having an image of an ice cream cone becomes something more than a mere thought when we crave the ice cream or, contrariwise, when we are repulsed by it because it is spoiled and sour. Exactly how emotions are to be distinguished from volitions is not taken up at this point, but it is obvious that there is a difference between, say, loving something and intending to do something about the object that is loved, for example, by pursuing or possessing it. The important point is that neither mere images, nor emotions and volitions, are bearers of truth or falsity. The bearers of truth and falsity are thoughts that are called "judgements".
According to Descartes, "ideas, considered in and of themselves", are neither true nor false. If we imagine a "goat or chimera", we imagine each regardless of its actual existence. Mere ideas, which we conceive, are not unlike emotions and volitions, for what we merely conceive need not represent anything. Only judgements are subject to error by misrepresentation; that is thinking that Open Journal of Philosophy something which is false actually is true, or else thinking that something which is true actually is false. Some ideas are ideas of our own invention, but others are forced upon us For example, when sitting by a fire "I feel the heat whether or I want to or not, and this is why I think that this sensation or idea of heat comes to me from something other than myself" (Descartes, 1641: p. 13).
We now have arrived at the central point. When we think that an idea has come to us whether or not we want it, we may rightly say that "nature has taught me to think this". In this case, a "spontaneous impulse" has led me to the belief in the existence of something other than myself that has "transmitted its own likeness to me". This is not to say that its truth has yet been revealed to me by some natural light. In the case of my own existence, what is revealed to me by the natural light is merely that "from the fact that I am doubting, I am certain that I exist". Descartes proclaims that there cannot be an epiphany more certain than one that arises from a "faculty as trustworthy as the natural light".
Having "established" and celebrated the natural light, Descartes goes on to explain how it is that we come to know that there actually are things apart from us that cause ideas of them within us. This undertaking is immensely difficult in as much as we constantly find ourselves befuddled by errors deriving from contrary beliefs about what lies outside us. Contradictory inputs imply that blind impulses that result in ideas cannot be the basis of reliable judgment. Reliable judgment must depend upon a guarantor of the accuracy of representation, that is of truth, and that guarantor can only be God! Having concluded that actual knowledge depends upon the beneficence of God, Descartes proceeds to advance his famous argument for the existence of God, which boils down to the claim that the mere fact that he has an idea of God implies that God exists.
This bit of reasoning depends upon nothing but "the natural light", because it is "manifest by the natural light that there must be as much <reality> in the efficient and total cause as in the effect of that cause". Since the reality of the object of the idea of God, which is its "objective reality", actually is God, and since the only entity with reality so great that it is capable of causing the idea of God is God, it inexorably follows that God exists (Descartes, 1641: pp. 24-28).
I believe that this argument is much stronger than many philosophers have thought. It is plausible to think that a cause must be sufficiently strong to account for both the existence and the identity of its effect. Thus, the cause of the idea of God must be sufficiently strong to account for the object of the idea, which is God. This, however, does not mean that the argument is beyond criticism, and great philosophers like Gassendi (Gassendi, 1641: pp. 199ff., 251-257) and Hobbes (Hobbes, 1641: p. 127) were quick to focus on its weakest point, which is the claim that we really do have an idea of God. Indeed, many people have thought that "God" is actually a name for something that cannot be conceived, but which is nevertheless sufficiently powerful to account for all that is beyond the pale of human cognition. This of course looks like nonsense, and many contemporary philosophers have followed Wittgenstein in his claim that anything that cannot be designated in a straightforward way really cannot have been designated at all; to indulge a neologism, the prime example of an "undesignatable" is God (Wittgenstein, 1929: p. 85).
The Natural Light: The Idea of the Infinite
In any case, we may fairly (though admittedly "creatively") try to capture the essence of Descartes' famous argument for the existence of God without directly referring to God. For Descartes, as for Pascal (Pascal, 1670: p. 44), God is "infinite", which raises the obvious question: what could there be within the scope of our finite minds that could possibly give us an idea of "the infinite"? Relying upon the principle that the cause must contain as much "reality" as the effect, Descartes would undoubtedly reply that only the infinite can give us the idea of the infinite. Now, since for Descartes all reasoning is essentially mathematical, it is right at this point to ask how we might come to have the idea of mathematical infinities. According to mathematicians and logicians, there are many levels of infinity: those that are "denumerable" (like the integers and rational numbers), those that are a level up, like the real numbers, and higher-order infinities that are constructed from the reals. We shall begin at the beginning with a denumerable infinity, the positive integers, and then move on to the more complicated case of the real numbers.
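Why the reals sit "a level up" can be made vivid by Cantor's diagonal construction; the following sketch is a standard illustration, not anything in Descartes.

```python
# Cantor's diagonal argument (standard sketch): given any putative complete
# list of reals in (0, 1), build a new real whose n-th digit differs from
# the n-th digit of the n-th listed real, so the new real is not on the list.
def diagonal_digits(listed_digit, n_digits):
    # listed_digit(n, k) returns the k-th decimal digit of the n-th listed real.
    return [5 if listed_digit(n, n) != 5 else 6 for n in range(n_digits)]

# Example "list": the n-th real has every digit equal to n mod 10.
print(diagonal_digits(lambda n, k: n % 10, 10))
# [5, 5, 5, 5, 5, 6, 5, 5, 5, 5] -> differs from real n at digit n, for every n
```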
Consider the positive integers: the series 1, 2, 3, 4, 5, 6, …, and on to "infinity". Now, some philosophers, notably empiricists, will say that it is easy to account for the idea of the denumerable infinity of the positive integers. The trick is just to continue the series above and proceed onward; but onward from 6 to what? Well, obviously to 7 and 8 and so on. Of course, some might not "get it", and perhaps for good reason. The "and so on" may be a snare and a delusion.
Suppose the actual series under consideration is 1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17, 18, 19, and onward, without end. Cheating! someone might proclaim: that is not how the series of positive integers goes; you left some out, specifically those from 8 through 13. Ah! That objection presupposes that you already know the series of positive integers, and how, it will be demanded, can you know what is left out of the series without knowing the entire infinite series of the positive integers?
But don't be silly, you do not need to know the whole infinite series; you only need to know how to proceed onward from 19. Yet that cannot be sufficient, because the pattern above that ends with 19 might not be repeated, and might not indicate in any way what is to follow. In other words, the very same problem might re-emerge. Perhaps the next fragment of the series unpredictably begins with 48, skipping the numbers from 20 through 47? The point is that we cannot construct an infinite series from any finite sub-series, because however far along we get, there still would be infinitely many unpredictable sub-series that are consistent with the initial finite series. Descartes would surely say at just this point that knowing the complete series of integers must be to know a complete denumerable infinity, the infinity of positive integers, and that such knowledge cannot be derived from a finite source. Hence the idea of infinity cannot be constructed by the operations of a finite mind; it must be innate, and furthermore, only something that is infinite, in some sense or other, could possibly have an idea of the infinite to give us.
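The point can be made concrete in a few lines of code (an illustration of the argument, not anything in Descartes): both rules below are perfectly definite, and both agree on the fragment 1 through 7, yet they diverge immediately afterwards, so the fragment alone cannot decide between them.

```python
def successor_rule(n):
    # The "obvious" continuation: the n-th term is n itself.
    return n

def deviant_rule(n):
    # An equally definite rule that generates the deviant series in the
    # text: 1..7, then 14, 15, 16, ... (the values 8 through 13 are skipped).
    return n if n <= 7 else n + 6

# The observed fragment 1..7 is identical under both rules...
print(all(successor_rule(n) == deviant_rule(n) for n in range(1, 8)))  # True
# ...but the rules part company from the eighth term onward.
print([deviant_rule(n) for n in range(1, 11)])  # [1, 2, 3, 4, 5, 6, 7, 14, 15, 16]
```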
Someone might argue that all we really need to construct the infinite from the finite is the concept of "going on forever", or, more modestly, going on "without limit". From the Cartesian perspective, this move just raises the same old problem. How do we get the idea of "going on forever", or "going on forever in the same manner", or "going on without limit"? All these operations involve infinitely many steps; so the mental calculation cannot be properly defined without referring to an infinity, which is the very idea to be explained.
Even so, perhaps it will be insisted that after all, we all somehow "get" the idea of the positive integers, and therefore the arguments about constructing an idea of the infinite from a finite series by "going on forever" cannot be so far wrong.
Yet even if this desperate argument is countenanced, it immediately falls apart in the far more difficult cases involving the real numbers, which include the irrational numbers. These are numbers like √2 and π (the latter is not merely irrational but transcendental). Both of these numbers have infinite decimal expansions that never terminate and never repeat, exhibiting no periodic pattern. In set theories that are designed to axiomatize arithmetic, such reals are accommodated by further existence axioms, which in effect posit the completed expansions of numbers like √2 and π 4.
The existence of such numbers poses a significant epistemological problem from the Cartesian point of view. How can it be that we have knowledge by the "light of nature" of a number that is designated by an infinitely long, non-repeating sequence? Surely such numbers are not revealed by anything that could plausibly be described as the natural light by which we understand the Cogito; in fact, they appear to be utterly incomprehensible. However, I believe that Descartes would argue that we do perceive both π and √2 by the natural light. First, it is by the natural light of mathematical reasoning that we perceive that it follows from the Pythagorean Theorem (which we also perceive by the natural light) that a right isosceles triangle with sides of one unit has a diagonal of √2 units. It is therefore by the natural light that we draw the conclusion that the diagonal is exactly √2 units long. An empiricist might reply that we need only carry the calculation on correctly as far as we need to go. This will give us a good enough idea to pursue our legitimate scientific interests. So, π is approximately 3.14. If that is not close enough for certain purposes, we can continue the calculation until our estimation falls within the margin of acceptable error. We can do the same for √2.

4 In Prior Analytics, Aristotle refers to a proof of the incommensurability of the diagonal of a right isosceles triangle and its side. The proof depends upon a reductio ad absurdum that purports to show that the opposite supposition, that the diagonal is commensurable with its side, entails the contradiction that an odd number is even (Jenkinson, 1966: I-23, p. 80).
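A minimal sketch of the empiricist's approximation strategy just described, carrying the calculation of √2 "on correctly as far as we need to go" (my illustration, not the paper's):

```python
# Bisection: narrow an interval around sqrt(2) until it is shorter than a
# chosen margin of acceptable error. sqrt(2) lies between 1 and 2.
def sqrt2_to_within(tolerance):
    lo, hi = 1.0, 2.0
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(sqrt2_to_within(0.01))    # good enough for carpentry
print(sqrt2_to_within(1e-12))   # good enough for astronomy
# No finite run of this loop ever yields the completed, infinite expansion.
```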
Obviously, however, that will not do for Descartes, because Descartes wants to know how it is possible for any idea to represent the infinite accurately. It would seem that the infinite must be grasped all at once, as a completed whole, and that only an infinite mind could have a conception of the infinite; that infinite mind, Descartes thinks, can only be God 5.

5 Russell himself argues that the idea that there are infinite collections in this world is a mere assumption.
There is no logical reason to think this axiom, which asserts the existence of infinite collections, is true, but neither is there a logical reason to think that it is false (Russell, 1919: p. 77). Note here the phrase "infinite collections in this world". Descartes might well agree that we cannot, for example, prove that there are infinitely many particles in this world. For Descartes, however, numbers are not in this world. There are collections of things in this world that are numbered, but that does not prove or presuppose that numbers are things in this world. It is true that there is some textual evidence for a different reading. In the fifth meditation, Descartes asserts that "it is possible for me to achieve full and certain knowledge of countless matters, both concerning God himself and other things, whose nature is intellectual, and also concerning the whole of that corporeal nature which is the subject matter of pure mathematics". The last part of the quotation seems to imply that pure mathematics includes the study of the nature of corporeal objects. It is certainly true that Descartes thinks that pure mathematics can be applied to the material world, because the essence of matter is extension, and pure mathematics describes extension, which is an attribute of matter but which is not itself corporeal. As Cottingham et al. note, the French version of the Meditations clearly states that the objects of "geometrical demonstration have no concern" with the "existence of corporeal objects" (Cottingham et al., 1984: p. 49).
Clear and Distinct Ideas and the Fruit of Mathematical Reasoning
As we have concluded, for Descartes the "natural light" or "light of nature" guides us through the process of mathematical reasoning. The results are clear and distinct ideas; they are perceptions that are clearly and distinctly perceived by the natural light. In other words, by the natural light we clearly and distinctly perceive clear and distinct ideas. Descartes' formula ultimately invites a comparison of the natural light by which we reason with the visible light that enables sight. As Plato reminds us, when we emerge from the cave of conventional, blind ignorance, we are able to perceive objects accurately in the brilliant sunlight, which is to say, perceive them as they actually are. Descartes explains the point as follows: I call a perception "clear" when it is present and accessible to the attentive mind, just as we say that we see something clearly when it is present to the eye's gaze and stimulates it with a sufficient degree of strength and accessibility. I call a perception "distinct" if, as well as being clear, it is so sharply separated from all other perceptions that it contains within itself only what is clear (Descartes, 1644: p. 207f.).
To be sure, not every "perception" is clear and distinct. We might glimpse a splash of yellow in a far-off bush. It might not be clear to us whether the yellow is the color of a bit of foliage, or perhaps of a bird's breast, or the tail of a small mammal. Sometimes we need to move closer to see precisely what is present. Even so, as Descartes reminds us, even if we are sitting by a fire (rather than in the bright sunlight), there are beliefs that are validated by perception 6.
To be sure, Descartes goes on to raise doubts about our capacity to distinguish sleeping from waking states, about the effects of diseases that disorder the mind, and, crucially, about the possibility of systematic deception. The point is, however, that apart from hyperbolic doubt, there is scarce reason to doubt the testimony of the senses, provided that observation occurs under suitable conditions, for example in the sunlight or by a fire. It is hardly surprising, then, that Descartes thinks that our knowledge of mathematics is beyond doubt. Consider, for example, the following straightforward demonstration that yields mathematical knowledge.
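For concreteness, the demonstration may be taken to be a simple stepwise derivation of the following form (an illustrative stand-in, with the steps numbered as the next paragraph assumes):

(1) x + 3 = 7 (hypothesis)
(2) (x + 3) - 3 = 7 - 3 (subtracting 3 from both sides of (1))
(3) x + (3 - 3) = 4 (regrouping the left side of (2); arithmetic on the right)
(4) x + 0 = 4 (since 3 - 3 = 0)
(5) x = 4 (since x + 0 = x)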
How shall this inference be described? For Descartes, we clearly and distinctly perceive each step from (2) to (5) by the natural light, by clearly and distinctly perceiving by the natural light that each step follows from the preceding step or steps. This is Descartes' model of clear and distinct mathematical reasoning that leads to clear and distinct perceptions by the natural light.
The Port Royalist Theory of the Redundancy of Truth
It is tempting just to attribute our contemporary understanding of mathematical logic and semantics to Descartes. After all, if Descartes were a contemporary, he surely would embrace those formal disciplines and be one of their leading lights 7.

7 It seems to me that the above proof is exactly the sort of "algebraic" proof that Descartes believes is perceived clearly and distinctly by the natural light, and therefore a good example of the product of clear and distinct reasoning. However, there are other arguments in mathematics that may be sound but are not so easily validated by the natural light. Descartes acknowledges that many arguments require detailed critical analysis and are open to doubt; only after extensive criticism and revision are they finally perceived clearly and distinctly by the natural light. It is not part of Descartes' theory that every mathematical question can be easily or definitively settled; he repeatedly acknowledges the need for "practice". Perhaps the following is an example of what would appear to be a simple problem, but which actually is puzzling: the problem of accounting for the exponent 0 and, in particular, of finding the value of positive integers raised to the exponent 0. Suppose that we reason as follows. A number that is raised to an integral exponent x is multiplied by itself x times; for example, N × N × N = N^3. Therefore, it is generally supposed that N^1 = N, which raises a very interesting question: What is the value of N^0? Indeed, is the expression "N^0" even coherent? Generally, any positive integer raised to the power of zero is deemed to be 1; but why? Perhaps we might argue along the following lines, which emphasize the role of fractional exponents. Observe that for all integers N > 1 and positive integers x: N^(1/x) > N^(1/(x+1)). Let us illustrate this as follows. Let N be 27. So, 27^(1/1) = 27 (as previously implied); 27^(1/2) = 5.1961; 27^(1/3) = 3.0000; 27^(1/4) = 2.2795; 27^(1/5) = 1.9332. Notice that as the denominator of the exponent becomes greater and greater, the exponent itself becomes smaller and smaller. Therefore, it appears that as x becomes larger and larger, 27^(1/x) will come closer and closer to 1, but also that it can never go below 1. It will not go below 1 because any number less than 1 that is multiplied by itself will be less than 1, and therefore the original positive integer would not be recoverable by the multiplication of its putative roots. Furthermore, the larger x becomes, the closer 1/x is to 0; more precisely, 1/x approaches 0 as a limit, and 27^(1/x) correspondingly approaches 1. Finally, every root of 1 is 1. For example, the fifth root of 1 is 1 because 1 × 1 × 1 × 1 × 1 = 1. Hence, for all x, 1^(1/x) = 1. Thus, as the exponent 1/x approaches 0, the only possible limiting value of N^(1/x) is 1. So, we might conclude from all this that any positive integer N raised to the power zero is 1; in other words, for all positive integers N, N^0 = 1. Although this "proof" may be convincing and derive from plausible intuitions, I suggest that it hardly comes up to Descartes' standard of clearly and distinctly perceiving by the natural light. Even if the proof is correct, it relies upon mere intuition rather than solid argument. This is especially true of the extrapolation that is covertly assumed as the value of x in the exponent 1/x becomes greater and greater. Intuited extrapolations from examples are not clear and distinct perceptions; they are more like guesses.

Furthermore, the above argument totally ignores the fact that positive integers also have roots that are negative. For example, −2 is a square root of 4, since −2 × −2 = 4. On the other hand, −2 is not a square root of −4, since −2 × −2 = 4; nor is 2 a square root of −4, since 2 × 2 = 4. Nevertheless, −2 is a cube root of −8, since (−2 × −2) × −2 = −8, but −2 is not a fourth root of −16, because (((−2 × −2) × −2) × −2) = ((4 × −2) × −2) = (−8 × −2) = 16. Even so, when it comes to positive integers, roots are relatively well behaved, meaning, for example, that 2 is a root of 4, 8 and 16. The point of these examples is that even if the conjecture above concerning positive integers is correct, it does not account for the behavior of other integers. Clear and distinct perception by the natural light requires that we have a clear and distinct perception of all objects of a given type (like the integers). Unless we have a fully integrated theory, even plausible first steps cannot be counted as clear and distinct perceptions.
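As a quick numerical check of the footnote's extrapolation (illustrative only, not Descartes'), a few lines of Python reproduce the cited values and the approach to 1:

```python
# For N = 27, N**(1/x) decreases toward 1 as x grows, and N**0 is 1 by the
# very convention the extrapolation is meant to motivate.
N = 27
for x in (1, 2, 3, 4, 5, 100, 1000):
    print(x, round(N ** (1 / x), 5))
# 1 -> 27.0, 2 -> 5.19615, 3 -> 3.0, 4 -> 2.27951, 5 -> 1.93318,
# 100 -> 1.03351, 1000 -> 1.0033
print(N ** 0)  # 1
```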
However, we must remember that at the time Descartes wrote, the model of formal logic was the Aristotelian syllogism. Syllogistic logic is virtually useless in any serious proof: it does not include reasoning that depends upon truth-functional operators and connectives, and it cannot deal with relations or even identity. In fact, the only model of mathematical reasoning that Descartes had was Euclid. Perhaps the paradigm of clear and distinct mathematical reasoning would have been the Pythagorean Theorem, but Descartes was concerned with even more difficult issues, especially the algebraic representation of conic sections by exponential functions. As we shall see, an attempt at something like formal logic and semantics was made by Arnauld and Nicole, but their work, Logic or the Art of Thinking, appeared only after Descartes' death (the edition cited here dates from 1683, more than thirty years later). On the other hand, Logic or the Art of Thinking was inspired by Descartes, and much of it is a straightforward attempt to explicate Descartes' own intuitions about logic and semantics.
It is clear that Descartes meant to hold himself to the highest standards in mathematical reasoning and in reasoning about metaphysics, especially about our relation to nature, to each other and, above all, to God. Because Descartes did not have a robust system of what we call mathematical logic and semantics at his disposal, it is not surprising that he ran into objections about truth and the circular reasoning his account appears to involve, particularly as it stumbles over ancient semantic paradoxes 8. Even Descartes' most faithful and ardent admirer, Arnauld, is troubled, and complains that Descartes has become trapped in a circle of his own making 9.
Many philosophers have complained that Descartes' response to Arnauld simply ignores the force of his objection. Descartes' response is that if we clearly and distinctly perceive an idea by the natural light, then we can no longer doubt it. There is, however, one possible exception to this principle. Descartes refers Arnauld to his response to a similar objection found in the second set of replies.
There Descartes explains that those who have relied solely on the intellect in their quest for clarity of their "perceptions" are "incapable of doubting them" as long as they "attend to the arguments on which our knowledge of them depends" (Descartes, 1641a: p. 104). This response obviously raises the issue of the certainty of reflections when one is no longer attending to the arguments on which knowledge of them is based. Descartes concedes that until we are certain that God exists, we cannot be certain of our memory, which is surely a reasonable claim, but it does not appear to address Arnauld's criticism, which is that we cannot be certain that our conviction that God exists is true until we are certain that whatever we clearly and distinctly perceive by the natural light is true, and we cannot be sure of that until we are sure that God actually does exist. In other words, Arnauld is demanding that Descartes show that whatever we clearly and distinctly perceive is true, but Descartes responds by saying that whatever we clearly and distinctly perceive is indubitable.

8 These are paradoxes that arise when we concoct sentences that try to state their own truth conditions, like "This sentence is false", which if true is false, and if false, then true, because what it asserts is that it is false.

9 Indeed, Arnauld was not one to mince words. He comes directly to the point when he writes: "I have one further worry, namely how the author avoids reasoning in a circle when he says that we are sure that what we clearly and distinctly perceive is true only because God exists. But we can be sure that God exists only because we clearly and distinctly perceive this. Hence, before we can be sure that God exists, we ought to be able to be sure that whatever we clearly and distinctly perceive is true" (Arnauld, 1641: p. 150).
Despite appearances to the contrary, I shall argue that Descartes' response to Arnauld is adequate. It is Arnauld who has misconstrued the issue at stake, and it is very significant that after learning Descartes' response, Arnauld appears to have dropped the circularity objection. What is Descartes' defense? The objects of clear and distinct perception by the natural light are clear and distinct ideas.
Here, however, we need to be especially careful to distinguish ideas of things and their qualities from the semantically higher-order judgment that certain of those ideas are true. Unfortunately, seventeenth-century logic does not always draw a clear distinction between the affirmation of an idea and the affirmation of the truth of an idea. The logic of the Cartesians of the seventeenth century is attributed mainly to Descartes by Arnauld and Nicole and is laid out in detail by them in their Logic or the Art of Thinking 10. In the following crucial lines they carefully explain: Besides propositions whose subject or attribute is a complex or abstract term, others are complex because they contain terms or subordinate propositions that affect only the form of the proposition, that is, the affirmation or negation that is expressed by the verb… (Arnauld and Nicole, 1683: p. 94f.).
The point, as Arnauld and Nicole illustrate, is that if we say that it is true that the earth is round, the comprehensive part of the proposition, "It is true that", changes nothing in the meaning of the subordinate part that is expressed by the verb occurring in "the earth is round". The same is true if I deny that the earth is round: the work of the subordinate clause, which is expressed by "the earth is round", remains the same. It follows that if I say that I clearly and distinctly perceive by the natural light that the earth is round, the meaning of "the earth is round" remains precisely the same (Arnauld and Nicole, 1683: p. 95). I shall argue that Descartes is the one who clearly distinguishes between the indubitability of a proposition and its truth, and who acknowledges that what he can demonstrate is not that everything clearly and distinctly perceived by the natural light is true, but rather that nothing that is clearly and distinctly perceived by the natural light can be reasonably doubted, because God's existence cannot be reasonably doubted. Of course, that will not be sufficient to satisfy those who doubt God's existence, but it will be enough to satisfy those who are convinced of God's existence, and Descartes insists that the existence of God cannot be reasonably doubted.

10 The influence of the logic of the Cartesians, the so-called Port-Royal logic, extended even to British philosophers, but an exploration of this issue is far beyond the scope of this paper. I mention it here only because the troubles caused by equating the affirmation of an idea with the affirmation of its truth plague all seventeenth-century philosophy.
The Truth Principle
Descartes thinks that the ideal of correct mathematical reasoning is unassailable.
A judgment that is a product of clear and distinct reasoning by the natural light is beyond doubt. That, however, raises a "meta-question": Can we demonstrate by clear and distinct reasoning by the natural light that whatever is "proved" by such reasoning is true? Toward the beginning of the Discourse on Method, Descartes contemplates this very problem, and he in effect concedes that he may not be able to answer it to everyone's satisfaction, which has indeed proved to be an understatement. In short, Descartes concedes that he does not have a proof that his method guarantees that every idea clearly and distinctly perceived by the natural light is true. But just what is it that stands in his way? What doubt can there be about the matter?
The answer, of course, is that we might be systematically deceived. That is why Descartes concludes: "Yet I may be wrong: Perhaps what I take for gold and diamonds is nothing but copper and glass" (Descartes, 1637: p. 112).
What Descartes takes for gold and diamonds is mathematical reasoning. In the Meditations on First Philosophy, Descartes famously produces his argument for the existence of God which, if correct, disarms all worries about a malicious demon who deceives us at every step. Descartes offers an argument for the existence of God, but how can that argument be conclusive and rescue the Truth Principle unless we already know that the deliverances that are clearly and distinctly perceived by the natural light really are true, which of course is just what is in question? Even so, as I have argued elsewhere, Descartes should not be accused of circularity. Descartes acknowledges that some may reject his reasoning and/or the standards by which he judges reasoning. So his argument should be charitably viewed as hypothetical: if we can assume that we are not systematically misled, then we can be sure by mathematical reasoning that there is a God who is responsible for what otherwise would appear to be mere good fortune (Dreher, 2017: pp. 202-216).
I believe, just as Descartes concedes, that there is not any way to prove that clear and distinct mathematical reasoning will yield truth. But that concession does not end the argument. That is because, for Descartes, what makes mathematical reasoning correct is not that it yields the truth, but rather that it yields what cannot be doubted. This fact is often lost in discussions of the Truth Principle, in part because of a crucial passage from the Meditations: "… since I sometimes believe that others go astray in cases where they think that they have the most perfect knowledge, may I not similarly go wrong every time I add two and three or count the sides of a square, in some simpler matter if that is imaginable" (Descartes, 1641: p. 14). Now, if I doubt that 2 + 3 = 5, the proposition that I doubt is just that 2 + 3 = 5, which is the very same proposition that I doubt if I doubt that 2 + 3 = 5 is true. But in doubting that 2 + 3 = 5, I do not thereby consider some other proposition to replace it, for example, that 2 + 3 really equals 6. The only reason, according to Descartes, for doubting that 2 + 3 = 5 is that we are systematically misled by an evil demon, or perhaps by a freakish tendency woven deeply into the nature of things that reinforces false beliefs. As Descartes emphasizes, doubts about clear and distinct perceptions are different in type from ordinary doubts about visual or tactile perception. If I hear a rustling noise at night in my garden, but doubt that it is due merely to the wind, I immediately think of alternatives, for example, that it is an animal, or a thief. Now, Descartes does concede that sometimes it is possible to doubt the product of mathematical reasoning in this sense; that is, it might be that there is an alternative judgment that is more plausible than the initial judgment. But that does not mean that it is possible to doubt just any mathematical proposition. I cannot doubt that 2 + 3 = 5, if only because I firmly believe it and cannot conceive an alternative. Of course, that does not mean that there isn't an alternative; it only means that I cannot conceive it, no matter how hard I try.
Now, let us return to the Truth Principle. It cannot be that I both represent myself as clearly and distinctly perceiving a proposition by the natural light and yet doubt that it is true. That is because to doubt that something is true does not in any way change what is doubted. Similarly, the Truth Principle cannot be doubted, but that, of course, does not prove that it is true-it only proves that I cannot doubt its truth. Descartes insists that it will do no good to object to the indubitability of the Truth Principle on the grounds that we have sometimes been mistaken in thinking that we clearly and distinctly perceive. Descartes clearly states that when we come to recognize that we have erred in forming a belief, we also come to see that we did not clearly and distinctly perceive the belief by the natural light in the first place. Indeed, if I come to think that a perception that I once deemed to be clear and distinct may be false, I must also conclude that I did not clearly and distinctly perceive it to be true. What we cannot do is to represent ourselves as having clearly and distinctly perceived a false proposition. Nor can we deem another to have clearly and distinctly perceived a false proposition. In order to do that we would have to represent that proposition to ourselves as both clearly and distinctly perceived and nonetheless false.
The mistake I would make in that case would have been to judge knowledge of the truth to be something weaker than clear and distinct perception. The idea that we can doubt what we clearly and distinctly perceive is a delusion.
Descartes' very stringent standard by which truth is judged suggests that we ought to reconsider the question whether there really are any propositions that we clearly and distinctly perceive to be true by the natural light. Descartes' unequivocal answer is that we clearly and distinctly perceive the cogency of mathematical reasoning by the natural light, and for Descartes mathematical reasoning is ultimately the form of all reasoning, including what he calls metaphysical reasoning. That brings us to the metaphysical proof of the existence of God in the third meditation. The essence of Descartes' proof is that only God could give us the idea of God, because only God could be the source of the idea of the infinite, and, as we have already seen, we grasp the idea of infinity by mathematical reasoning. According to Descartes, once we know that God exists, we know that there cannot be systematic doubt, because it cannot be both that what we clearly and distinctly perceive is indubitable and that we doubt its truth. It is only then that we know that what we clearly and distinctly perceive by the natural light is not only indubitable but also that the claim that it is true is itself indubitable, which emphatically is not to say that the claim that it is true is itself true.
Summary of the Main Argument
We know that if God exists, then the Truth Principle is true. And we cannot doubt the existence of God, because we cannot doubt that we have an idea of infinity which is derived from clear and distinct mathematical reasoning: for example, that there is an infinity of positive integers and that there also are "infinitely expansible" numbers like √2, whose expansions never terminate or repeat 11. It is true that some empiricists will complain that we really do not even have an idea of infinity; all that we really mean by an infinite series, they say, is a finite series "that continues forever". Yet, as we have discovered, there are important objections to this empiricist line of thought. In the first place, we really cannot tell from any finite series of integers just how to continue it, which is to say that an initial finite series does not determine a unique successor series. Moreover, even if a finite series did determine a unique successor series, we could not carry it out. The best we could do is to say that we would need to carry the series on forever, meaning without end.
But the concept of carrying on indefinitely, without end, obviously requires the concept of infinity itself. So, ultimately, the empiricist view must be that although we do not have a "positive" idea of infinity, we do have a "negative" idea of infinity. Yet, at least according to Descartes, infinity is not a negative concept. Indeed, in the case of numbers like π and √2 we grasp the concept of infinity directly, for example, as the ratio of the circumference of a circle to its diameter, or as the ratio of the diagonal of a right isosceles triangle to its side.
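For reference, the classical reductio behind the incommensurability claim mentioned in footnote 4 (a standard reconstruction, not the paper's own) runs as follows:

$$\sqrt{2} = \frac{p}{q} \ (p, q \text{ with no common factor}) \Rightarrow p^2 = 2q^2 \Rightarrow p \text{ is even} \Rightarrow p = 2m \Rightarrow q^2 = 2m^2 \Rightarrow q \text{ is even},$$

so p and q share the factor 2 after all; on the supposition of commensurability, a number assumed odd turns out to be even, which is precisely the contradiction Aristotle reports.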
None of this actually overcomes the circularity objection to the Truth Principle, but it does defang it, and therefore it no longer undermines Descartes' project. While it is true that Descartes' putative argument for the Truth Principle is circular, it is also true that no Cartesian can reasonably doubt the Truth Principle, and that, remember, is Descartes' self-imposed standard for dealing with the construction of knowledge. Recall how the Meditations on First Philosophy begins. Descartes resolves to "devote himself sincerely" to the demolition of his previous opinions, but he immediately cautions that it is not "necessary for me to show that all my opinions are false". What reason demands, he continues, "is to hold back my assent from opinions which are not completely certain and indubitable". That means that assent is to be withheld when there is "reason for doubt" (Descartes, 1641: p. 12).

¹¹ In mathematics the existence of denumerable and higher-order infinities is asserted by two axioms: the Axiom of Infinity, which asserts the existence of the rational numbers, and the Axiom of Choice, which asserts the existence of the real numbers (including the transcendentals). Higher-order infinities are generated from the infinity of the reals by the power set operation (Russell, 1919: pp. 63-88). One may claim that these axioms are not clearly and distinctly perceived by the natural light, but I do not think Descartes would agree. That is because these axioms are presupposed by the truth of familiar mathematical principles (for example by the algebra of the conic sections, without which we could not even state the principles that describe the motion of the planets).
The key point is that Descartes does not find a reason to doubt mathematical reasoning, which presupposes the concept of infinity.¹² He does not doubt that in order to have the concept of infinity (from mathematical reasoning) it must be that God exists. Further, he finds it to be impossible to doubt that whatever he clearly and distinctly perceives is true, provided that God exists. So, he finds it impossible to doubt the Truth Principle, which therefore is justified by his own standard, which is to affirm all and only what he clearly and distinctly perceives by the natural light. The key phrase here is "by his own standard": not by the standard of Hobbes or Gassendi, or even by the standard of Arnauld, but rather by his own standard, which, Descartes concedes, may be deemed by others to be mere copper and glass, but which for him surely is gold and diamonds.
Conclusion, Limitations and Suggestions for Further Research
The conclusion of this paper is that Descartes did successfully finish his project for the reconstruction of his knowledge and that he therefore was in a position to sort out just what is commended by reason and what he had been taught by unreliable authorities. His project depends upon his view that mathematical reasoning is indubitable, and that mathematical reasoning presupposes the idea of infinity, which could only be derived from an infinite mind, which is to say the divine mind. This does not mean that Descartes has proved his thesis and forced agreement from those who disagree with him about what is dubitable. But then again, Descartes does not claim to satisfy anyone else; he seeks only to satisfy himself and to share his good epistemological fortune with those who care to take notice of it. The centrality of mathematical reasoning and its indubitability drives Descartes' arguments through his Meditations on First Philosophy and his replies to his critics. However, it is only in his final conversation with Burman that he flatly and unequivocally insists that mathematical reasoning is the foundation of all reasoning, including metaphysical reasoning.¹³

¹² It might be objected here that Descartes should have distinguished a "qualitative" from a "quantitative" conception of infinity. Indeed, if we think of something as infinitely good or powerful or wise, we do not seem to be making quantitative judgments. I believe, however, that Descartes would insist that at bottom, all references to infinity must be reduced to quantitative judgments. That is essentially what it means to say that metaphysical reasoning is subordinate to mathematical reasoning. Perhaps it will be readily granted that this view is plausible when it comes to space and time. Space and time are measured quantitatively; so, if we say that Euclidean space is infinite, we must mean that its measuring stick must contain infinitely many marks. But what are the marks by which we measure infinite goodness or wisdom or power? Aren't those "infinities" qualitative? Even so, good deeds can be counted, both with respect to frequency and comparative value, as can the number and comparative significance of truths that are known, and finally the comparative potency and frequency of acts of will. Without some form of measurement, goodness, knowledge, and power are essentially incomparable. It is right to attribute this type of view to scientifically-minded early moderns like Descartes.
There are at least two ways in which this contribution is limited. First, it is limited because we ourselves find the concept of infinity difficult to grasp and consequently ever slipping away from our conceptual grip. On the other hand, it was the clear-headed, tough-minded Bertrand Russell who demonstrated that arithmetic as we know it (basically what is central to Newtonian physics) can be axiomatized within ordinary set theory and logic with two additional principles: the Axiom of Infinity and the Axiom of Choice (Russell, 1919: pp. 63-88). The second way in which this discussion is limited concerns Descartes' text. In the replies to his critics, he often seems to become impatient with criticism. There are far too many examples of this to detail here, but in this connection it will perhaps be helpful to remember his voluntarism. Descartes' final position is that what is true is true because God wills its truth, and that God even could have willed contradictions to be true (Descartes, 1641: pp. 290-292). Indeed, although Descartes does not doubt that what is clearly and distinctly perceived by the natural light is true, he is completely open to the thought that not everything that is true can be clearly and distinctly perceived by the natural light, at least not by us humans. This thought, I believe, is an expression of Descartes' exasperation with his critics, even to the point of deeming their complaints to be insincere. After all, did those critics really mean to claim that what they clearly and distinctly perceive might be false after all, and if so, is that because they clearly and distinctly perceive that what they clearly and distinctly perceive might be false? Did they really mean that they doubted reasoning itself, even the reasoning that led them to think that what they clearly and distinctly perceive may nonetheless be false?
Finally, it is my hope that this paper will stimulate further scholarly work on how philosophers of the early modern period thought about the nature of truth and the importance of "mathematical reasoning". Of course, further research includes not only the Cartesians but also empiricists like Locke and Hume, and idealists like Kant. It also includes Newton, as well as the rationalists who followed Descartes, especially Leibniz, who prepared the way for Gauss, Lobachevsky, and the other great mathematicians of the nineteenth century.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.

¹³ In this respect Descartes clears the way for Pascal, who also recognizes the importance of the concept of infinity for understanding the divine but, contrary to Descartes, insists that God and the infinite are really matters of faith, if only because the mind cannot bring itself to accept or to reject the conception of the infinite on a rational basis (Pascal, 1670: §XVI, 226-253; pp. 64-73). | 2020-08-27T09:08:53.231Z | 2020-06-23T00:00:00.000 | {
"year": 2020,
"sha1": "c8b9b0339428a96c0fa4bb350373425d039cd463",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=102496",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5cdb20c3d484bebcd493a80ab65f0eaba980d3e6",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
16764800 | pes2o/s2orc | v3-fos-license | Directed Kinetic Self-Assembly of Mounds on Patterned GaAs (001): Tunable Arrangement, Pattern Amplification and Self-Limiting Growth
We present results demonstrating directed self-assembly of nanometer-scale mounds during molecular beam epitaxial growth on patterned GaAs (001) surfaces. The mound arrangement is tunable via the growth temperature, with an inverse spacing or spatial frequency which can exceed that of the features of the template. We find that the range of film thickness over which particular mound arrangements persist is finite, due to an evolution of the shape of the mounds which causes their growth to self-limit. A difference in the film thickness at which mounds at different sites self-limit provides a means by which different arrangements can be produced.
Introduction
One of the most daunting challenges posed by nanotechnology is achieving the fabrication of extremely high densities of nanometer-scale clusters of atoms, with positional control, and on a practical time scale. A seemingly attractive approach toward meeting this challenge involves the use of a template to direct the self-assembly of nanostructures [1-8]. However, this offers an advantage over conventional, "top-down" fabrication only if the resulting assemblies contain individual structures that are more complex than, or arranged at higher spatial frequencies than, the features of the original template. Higher feature spatial frequencies, or "pattern amplification" [9-12], can result if the wavelength selection displayed in certain growth instabilities can be anchored by features of a template of lower spatial density. Understanding how an artificially defined template interacts with such instabilities could enable control of the positions and densities of spontaneously forming nanostructures. In this letter we examine the effect of an artificially imposed template in a simple system known to display a type of self-assembly: that of nanometer-scale multilayer islands, or "mounds", which form during homoepitaxial growth on GaAs (001). A key difference between this system and those in which template-directed assembly of multilayer islands ("quantum dots") has previously been studied, including Ge on patterned Si (001) [4,5] and InAs on patterned GaAs (001) [6,7], is the absence of strain as a driving force for self-assembly [8]. Instead, mound formation during homoepitaxy is generally thought to be entirely due to kinetics, in particular to an instability [13] resulting from an extra ("Ehrlich-Schwoebel") barrier to the diffusion of atoms across atomic steps from above [14-17]. We find that a predefined topographical pattern on a GaAs (001) substrate can direct the assembly of well defined arrangements of mounds during homoepitaxial growth, and that the sites at which mounds form on the template depend both on the growth temperature and on the deposited film thickness. We further find that a type of self-limiting behavior occurs in this system which can be exploited in selecting between different arrangements of mounds. Finally, and most interestingly from a technological standpoint, we find that an amplification of the spatial frequency of the mound arrangements, beyond that of the features of the template, is realizable in this system.
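To make the Ehrlich-Schwoebel (ES) mechanism invoked above tangible, the deliberately oversimplified one-dimensional solid-on-solid toy below shows how suppressing step-down hops produces mounding. It is not the authors' simulation code; the lattice size, temperature, barrier value and attempt counts are all assumptions chosen only to make the mechanism visible.

import math, random

L = 200                      # number of lattice columns (assumed)
T = 460.0 + 273.15           # growth temperature, K
E_ES = 0.15                  # extra ES barrier for descending hops, eV (assumed)
kB = 8.617e-5                # Boltzmann constant, eV/K
p_down = math.exp(-E_ES / (kB * T))   # Boltzmann suppression of step-down hops
h = [0] * L                  # column heights, in monolayers

def try_hop(i):
    """Attempt one nearest-neighbor hop out of column i (toy dynamics)."""
    if h[i] == 0:
        return                           # nothing to move from an empty column
    j = (i + random.choice((-1, 1))) % L
    if h[j] < h[i] and random.random() > p_down:
        return                           # descending hop rejected by the ES barrier
    h[i] -= 1                            # level/ascending hops always accepted here,
    h[j] += 1                            # a deliberate oversimplification

for _ in range(50 * L):                  # deposit roughly 50 monolayers
    h[random.randrange(L)] += 1          # random deposition event
    for _ in range(10):                  # a few diffusion attempts per deposition
        try_hop(random.randrange(L))

Because p_down < 1, atoms landing on top of existing features tend to stay there, producing the uphill mass current that drives the mounding instability described in the text.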
Directed Self-Assembly and Pattern Amplification

Figure 1 shows the results of homoepitaxial growth on a patterned GaAs (001) substrate on which the initial width of each nanopit (measured between points of half maximum depth) was 140 nm. The starting topography is shown in Figure 1a, while subsequent panels show the topography which results from the growth of films of thickness (b) 60 nm; (c) 100 nm; and (d) 150 nm at a temperature of 460 °C. Figure 1b,c show the directed assembly of mounds at particular sites on the template. We refer to these as "2-fold bridge" sites, i.e., bridges between near-neighbor pits. This observation is in qualitative agreement with the predictions of our earlier kinetic Monte Carlo simulations [18] of growth at such sites on a patterned substrate within a particular temperature range, in the presence of an Ehrlich-Schwoebel barrier.
How does the template direct the mounds to assemble at particular sites? There can be multiple effects at play, depending on the energetics and kinetics of the mound formation. One likely contribution is topographical in origin. Mounds are unlikely to overhang the edge of a pit; this reduces the number of sites at which they can form, and leads to an entropic mound-pit edge interaction [18]. A second contributing effect comes from the competition between a "natural" mound spacing, which, as we showed elsewhere [18], is determined by the growth temperature at a given set of fluxes, and the spatial period artificially imposed by the template. (The temperature dependence could stem from the smaller mound capture zone for adatoms at lower temperatures, due to the shorter distances the adatoms can diffuse; this would impede mounds from coalescing to form larger ones.) The underlying physics behind this competition is reminiscent of the Frenkel-Kontorova model [19,20], which can lead to a series of structures with spatial frequencies whose magnitudes relative to those of the template can be changed [18,20], and possibly amplified [9-12,18]; a toy version is sketched below. Indeed our results are consistent with such a model. Figure 2 shows AFM images of the topography resulting from growth at a series of temperatures and template spatial periods. Strikingly, the mound spatial frequency is amplified relative to that of the pattern, resulting in an increased number of mounds per pattern unit cell as the growth temperature is lowered (Figure 2b-d) [21].
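As a rough illustration of the competition just described, the following sketch evaluates a Frenkel-Kontorova-style energy for a chain of mound positions. The functional form and every parameter value (natural spacing a, template period lam, stiffness k, pinning amplitude V0) are assumptions for illustration, not quantities taken from this paper.

import numpy as np

def fk_energy(x, a=90.0, lam=140.0, k=1.0, V0=0.5):
    """Toy Frenkel-Kontorova energy: springs of natural length a between
    neighboring mounds, sitting in a periodic template potential of period lam."""
    spring = 0.5 * k * np.sum((np.diff(x) - a) ** 2)             # mound-mound term
    template = V0 * np.sum(1.0 - np.cos(2.0 * np.pi * x / lam))  # pinning term
    return spring + template

# When a != lam, minimizing fk_energy trades the preferred spacing a against
# registry with the template, which is how arrangements with spatial
# frequencies differing from (and possibly exceeding) that of the template
# can be selected.
x = np.arange(10) * 120.0   # a trial arrangement of 10 mound positions, nm
print(fk_energy(x))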
Further evidence for a competition between the natural mound spacing and the template period can be seen by comparing Figure 2c,d, where at a fixed growth temperature of 300 °C the number of mounds across a template unit cell decreases as the spatial period of the template is lowered. In these images the mound arrangements do not show a simple (n × m) periodicity. These observations are consistent with the simulations in [18], which predict an increase in the "natural" mound size with temperature for growth on unpatterned surfaces. An additional effect relevant in determining the arrangement of mounds can come from a tendency to nucleate at pit edges of certain orientations, if the edges of the pits provide heterogeneous nucleation sites [18] by locally reducing the perimeter energy, e.g., through a multistep reconstruction. We note that GaAs (001) is a 2-fold, rather than 4-fold symmetric surface, and the inequivalence of [110] and [1-10] step edges [22] seemingly explains the preference for one type of 2-fold bridge site observed in Figure 1b,c.
Self-Limiting Behavior
We now consider one last effect, which leads to changes in the spatial arrangement of the mounds as growth continues. Figure 1e illustrates the evolution of the mounds during growth at 460 °C. It consists of a series of height profiles along the path corresponding to the dashed line in Figure 1b. After deposition of an overall film thickness of somewhere between 100 nm and 150 nm, a noticeable change in the evolution of the surface morphology occurs. While initially the mounds at 2-fold bridge sites grow and sharpen, after this point they evidently stop growing: their heights self-limit. Growing beyond the corresponding film thicknesses causes the mound heights to decrease [23]. In addition, beyond this point their heights are surpassed by those of mounds at "4-fold bridge" sites, i.e., at positions centered between quartets of neighboring nanopits. The observation that the film thickness at which the height of a mound self-limits depends on the type of site indicates that the mound arrangement can be tuned via controlling the amount of growth. We also find that the onset of self-limiting behavior depends on the spatial period (λ) of the pattern. Figure 3 illustrates the dependence of the mound heights on the lateral template length scale after 100 nm of growth. For ease of comparison, the lateral dimensions of the height profiles shown in Figure 3e are normalized to the period of each nanopit template. For the larger template spatial periods the surface evolves more slowly, and mounds at 2-fold bridge sites show a maximum height for the largest period studied, i.e., λ ≈ 400 nm. For the array with a spatial period of 280 nm, mounds at these sites are clearly shorter, while at λ = 200 nm mounds at the 2-fold bridge sites have completely disappeared, and been replaced by downward cusps between the newly dominant mounds at 4-fold bridge sites. (Growth conditions as in Figure 1, i.e., growth temperature = 460 °C; growth rate = 0.28 nm/s.) We next show that the self-limiting behavior of mound heights is relevant to understanding the transient amplification of the pattern corrugation (the height difference between peaks and valleys) during growth that we reported on earlier [17,24-26]. In Figure 4a we plot the growth rates of the heights of three different features, measured relative to the height of the surrounding unpatterned surface, and normalized to the average growth rate. The individual curves are for the mounds at 2-fold bridge sites (dashed curve), mounds at 4-fold bridge sites (dashed-dotted curve) and the pit bottoms (dotted curve). Using the unpatterned region as a reference level reduces the apparent difference in the range of growth over which mounds at the two types of sites dominate the topography, as shown in the supporting documents [23]. Most significantly, early on (i.e., for the smallest film thicknesses studied) the local growth rate at mound sites is greater than the average growth rate, while that at pit bottoms is below the average. This difference leads to an initial amplification of the pattern corrugation during the early stage of growth. By a film thickness of 60 nm the self-limiting behavior of the mounds initiates, with the local mound growth rate falling behind both the average and that at the pit bottoms. Indeed, coincident with this, the growth rate at the pit bottoms reaches a maximum, which well exceeds the average rate of growth. The pattern corrugation amplitude in this regime decays, consistent with our earlier reports [17,24-26].
The different rates of growth within and outside the pattern are consistent with an island formation (rather than step flow) mode, a large Ga adatom diffusion length at these temperatures [27], and, as we discuss below, an island nucleation probability that depends on the local terrace width [17,28]. These observations also strongly suggest that the self-limiting growth of mounds is at least in part responsible for the transient nature of the amplification of the pattern corrugation.
A remaining question is: what physical mechanism lies behind the self-limiting growth behavior? Previously it has been suggested [16] that mound sidewall orientations should reach a steady-state value; indeed we find that, coincident with the initiation of self-limiting behavior, mound sidewalls along the [110] azimuth evolve to orientations approximately 26° from (001), corresponding to {012}-type facets [29]. While perhaps significant, this faceting alone does not obviously explain the cessation of the growth of the mounds. A plausible explanation, based on observations that the mounds sharpen before self-limiting, is the existence of a minimum top terrace width, beneath which further islands do not nucleate atop the mounds. In Figure 4b, we plot the apparent top terrace width as a function of growth thickness, based on height profiles across 2-fold bridge sites measured from Figure 1. The analysis shows a minimum size after growth of 60 nm, i.e., coincident with the initiation of self-limiting behavior. Figure 4c shows a histogram of apparent apex terrace widths at the minimum shown in Figure 4b. It exhibits a distribution of widths, with an apparent peak value of 45-50 nm for the critical terrace size. This sets an upper limit for the critical width, as this measured value includes the convolution with a fairly blunt AFM probe. Deconvolution of the point spread function, using the manufacturer's range of tip radii of 20 ± 10 nm, would yield a value of 23 ± 23 nm, a range which includes a width as small as a single unit cell.
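For orientation, the sketch below applies the crudest geometric tip correction, subtracting the tip diameter from the apparent width. This is only one possible point-spread model; it does not reproduce the 23 ± 23 nm quoted above, which underscores how strongly the deconvolved width depends on the assumed tip shape, and every number here other than the measured range is an assumption.

# Hedged back-of-envelope tip correction for the apparent terrace width.
w_meas = 47.5                     # apparent apex terrace width, nm (histogram peak)
r_tip, dr = 20.0, 10.0            # nominal tip radius and its tolerance, nm

w_est = w_meas - 2.0 * r_tip                    # central estimate
w_min = max(0.0, w_meas - 2.0 * (r_tip + dr))   # bluntest assumed tip
w_max = w_meas - 2.0 * (r_tip - dr)             # sharpest assumed tip
print(w_est, (w_min, w_max))                    # -> 7.5 (0.0, 27.5)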
Intuitively one might expect the critical terrace width to be small, perhaps on the order of a unit cell of the GaAs (001)-c(4 × 4) reconstruction. A plausible hypothesis is that an effect related to the "reaction-limited island nucleation" of compound semiconductors during MBE growth, proposed by Kratzer and Scheffler, is responsible [30,31]. Specifically, incorporation of a new layer of GaAs into the solid would be prevented once the top terrace width is too small to have a finite probability for island nucleation to occur. Island nucleation involves multiple species (Ga adatoms, As2 molecules) adsorbed in sequence, along with selection of sufficiently strong adsorption sites and surface geometry. A second possible mechanism, based on that proposed by Giesen et al. [32], is that the ES barrier vanishes due to quantum confinement effects of electronic states on the surface if the top terrace width drops below a certain critical size. The vanishing of the ES barrier at the apexes of the mounds would increase the probability of interlayer mass transport from the tops of the mounds to the pit bottoms, reducing the probability of island nucleation and growth at the apexes, and initiating self-limiting behavior. Distinguishing between these and other possibilities is beyond the scope of this article.
Experimental Section
To create the templates used in this study we patterned GaAs (001) wafers using electron beam lithography followed by inductively-coupled plasma etching, creating several sets of square nanopit arrays in which the initial widths were varied systematically from 60 nm to 400 nm. The center-to-center spacings were held fixed at twice the initial nanopit widths, and the initial depths were held at approximately 50 nm. The patterned samples were cycled between a molecular beam epitaxy (MBE) growth chamber (VG V80H, Oxford Instruments, Abingdon, Oxford, UK) (base pressure 2 × 10⁻¹¹ mbar) for homoepitaxial growth and an atomic force microscope (AFM) (DI 3100, Veeco/Bruker, Billerica, MA, USA) for surface topography characterization in atmosphere. The latter was operated in tapping mode with carbon nanotube-terminated probes, whose terminal radii were nominally between 10 nm and 30 nm. Before each growth experiment the surface oxide was desorbed by heating to 400 °C in the presence of atomic hydrogen, resulting in negligible desorption-induced roughness. To track the evolution of individual features after various stages of growth, we used AFM in atmosphere to measure the topography of the surface. We then reintroduced the sample into the MBE chamber, repeated the deoxidation, and grew additional GaAs. An advantage of the pattern is that it allowed us to navigate back to the same features with the AFM after each growth step. The growth rate was held fixed at 0.28 nm/s with the As2 and Ga fluxes set for a beam equivalent pressure ratio of 10:1. The substrate temperature was determined by optical pyrometry with an emissivity correction that is calibrated using the thermal desorption temperature of the native GaAs oxide at 582 °C.
Conclusions
In conclusion, we have observed that it is possible to direct the assembly of arrangements of multilayer growth mounds on nanopatterned GaAs (001). Most significantly from a technological point of view, we find that growth at low temperatures, near 300 °C, leads to mound spatial frequencies exceeding those of the features of an artificially defined template, i.e., pattern amplification. We find that the spatial period of the arrangements can be changed by varying the growth temperature or pattern period, consistent with a competition between the temperature-dependent natural mound spacing and the spatial period of an artificially defined substrate topographical template. We also find that the film thickness over which the self-assembly of a particular mound arrangement persists is finite. Once a mound reaches a self-limiting shape, it can only grow further via the apparently slow incorporation of atoms at the steps which form its sidewalls. This is consistent with a critical, minimum terrace width for island nucleation. The self-limitation of the mound heights casts new light on the origin of a transient amplification of an artificially imposed corrugation during homoepitaxy on GaAs (001). Finally, we note that the kinetic and entropic effects which dominate directed self-assembly in this simple system must be taken into account in systems such as Ge on patterned Si (001) and InAs on patterned GaAs (001), along with the strain energy effects and interface tension effects which have been previously considered.

experiments. Christopher Richardson provided expertise in MBE growth and commented on the article. Raymond Phaneuf designed the project, participated in the analysis and largely wrote and corrected the article. | 2018-04-03T02:11:16.913Z | 2014-05-12T00:00:00.000 | {
"year": 2014,
"sha1": "a722ef4663d936841c417f3c60121508cef13800",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/2079-4991/4/2/344/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a722ef4663d936841c417f3c60121508cef13800",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
21866555 | pes2o/s2orc | v3-fos-license | A Case of Phenytoin-induced Rhabdomyolysis in Status Epilepticus
Phenytoin is a commonly used antiepileptic drug, especially when treating status epilepticus. Here, we present a patient who suffered from status epilepticus and developed rhabdomyolysis after being treated with phenytoin. As multiple seizures themselves can induce rhabdomyolysis, it is difficult to recognize that phenytoin can be the cause of rhabdomyolysis in status epilepticus patients. Even though phenytoin is a rare cause of rhabdomyolysis, we should recognize that it can be a causative drug of rhabdomyolysis.
Introduction
Phenytoin is one of the most commonly used anti-epileptic drugs for treating seizure disorders. The well-known adverse effects of phenytoin are nystagmus, ataxia and drowsiness, as well as blood dyscrasias, nephrotoxicity, hepatotoxicity and hypersensitivity syndrome.1 Rhabdomyolysis caused by phenytoin was first reported in 1976 and has rarely been reported since then.2 Here we present a patient with status epilepticus who suffered from rhabdomyolysis after being treated with intravenous (IV) phenytoin.
Case
A 37-year-old man visited the emergency center because of three generalized tonic-clonic seizures, without recovery of consciousness between the seizures, beginning 30 minutes earlier. Two years ago, the patient had a left frontal intracranial hemorrhage (ICH) due to a ruptured aneurysm of the anterior communicating artery (Acom). The patient also had diabetes mellitus and liver cirrhosis due to chronic hepatitis B. He was receiving metformin 500 mg/day and linagliptin 5 mg/day for diabetes and tenofovir 300 mg/day for hepatitis B.
On presentation to the emergency center, the blood pressure was 125/55 mmHg, the heart rate was 112/min, and the body temperature was 37.2℃. The patient was in a coma with intact brainstem reflexes. There was no lateralizing sign or other focal neurological deficit, except for continuous myoclonic jerks observed over the chest and abdominal wall. The patient was intubated and IV lorazepam 4 mg was injected twice. After that, phenytoin 20 mg/kg was loaded and 24-hour electroencephalography (EEG) monitoring was started. The myoclonic jerks subsided after phenytoin loading. Brain computed tomography (CT) demonstrated encephalomalacia at the left frontal lobe due to the prior ICH (Fig. 1). In initial laboratory tests, the creatine kinase (CK) was elevated to 727 IU/L, but the estimated glomerular filtration rate (eGFR), blood urea nitrogen (BUN) and creatinine levels were in the normal range (eGFR: 70 mL/min/1.73 m², BUN: 13 mg/dL, creatinine: 1.31 mg/dL). The patient became alert and no more clinical seizures were observed after phenytoin treatment. The 24-hour EEG monitoring showed continuous medium-amplitude theta-to-delta slowing over the left hemisphere due to the encephalomalacia, without any epileptiform discharges.
Since the levels of CK and creatinine increased to 1,823 IU/L and 2.93 mg/dL on the second day of hospitalization, massive hydration with bicarbonate therapy was initiated to treat acute kidney injury. Oral phenytoin 150 mg twice a day was maintained to control seizures. Serum CK decreased to 824 IU/L transiently after starting hydration, but it increased again on the 6th hospital day. Even though aggressive hydration was performed and no more seizures were observed, the serum CK level peaked at 3,825 IU/L on the 7th hospital day. Considering that phenytoin might be the cause of rhabdomyolysis, phenytoin was substituted with levetiracetam on the 7th hospital day. Subsequently, the serum CK level promptly began to decrease and normalized by day 13 (Fig. 2). No more seizures were observed, and the patient was discharged home on day 13.
Discussion
In the present case, the level of CK increased to 1,908 IU/L on the 3rd day after the last seizure, decreased to 824 IU/L on the 5th day after the last seizure, and then increased again, up to 3,825 IU/L over two consecutive days, although there were no additional seizures, immobilization or trauma. During hospitalization, the patient received baclofen for hiccups, tenofovir for hepatitis B and linagliptin for diabetes mellitus. However, these drugs are not known to cause rhabdomyolysis. The level of CK immediately decreased after discontinuing phenytoin, and consistently decreased to 298 IU/L on the 6th day after stopping phenytoin. In a previous study of postictal CK elevation, the peak level was observed 2-4 days after the last seizure.3 Therefore, in the present case the second rise of the CK level on the 7th day after the last seizure could not be explained by the seizures themselves, and the CK level normalized after phenytoin discontinuation. We diagnosed the cause of the second rise of the CK level as phenytoin-induced rhabdomyolysis. Phenytoin was not re-administered to confirm our hypothesis, for ethical reasons.
In the literature review, classic phenytoin-induced rhabdomyolysis was associated with phenytoin hypersensitivity syndrome, a reaction that typically develops within three weeks to three months after initiation of phenytoin medication.4-8 Phenytoin hypersensitivity syndrome is characterized by fever, rash, lymphadenopathy, and eosinophilia. In the present case, the absolute eosinophil count was in the normal range, and other presentations of phenytoin hypersensitivity syndrome were not observed. Recently, cases of phenytoin-induced rhabdomyolysis without any distinct symptoms of hypersensitivity have been reported,1,9 and those cases were very similar to ours. Those patients suffered from generalized tonic-clonic seizures and were treated with IV phenytoin. The serum CK level peaked on the 5th day after the last seizure and promptly decreased after stopping phenytoin. This temporal correlation provides significant evidence of phenytoin-induced rhabdomyolysis. Our patient accords with the latter type of rhabdomyolysis.
Considering the wide use of phenytoin, reports of phenytoin-induced rhabdomyolysis are very rare. Several factors may explain this. First, since rhabdomyolysis can be caused by status epilepticus itself, it may be hard to identify the exact cause of rhabdomyolysis in certain cases, especially when the CK level fluctuates after multiple seizures. Second, the rise in the CK level caused by phenytoin is milder and more transient than the rhabdomyolysis caused by multiple seizures.
Rhabdomyolysis is a serious complication of phenytoin and can lead to acute kidney injury. Therefore, we should consider phenytoin as a possible cause of rhabdomyolysis, especially when the CK level increases even though seizures are well controlled by phenytoin therapy. | 2016-08-09T08:50:54.084Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "81a991a4367007c02e1e6faa94d74c3e7e7a24fe",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14581/jer.16007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81a991a4367007c02e1e6faa94d74c3e7e7a24fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240886380 | pes2o/s2orc | v3-fos-license | Upper respiratory tract infections and the immune system response. A review.
Purpose: the spreading of the COVID-19 epidemic raised the question of why very well trained, healthy, young athletes have been infected. In this review, this emerging topic in the field of sport immunology has been studied with the aim of providing advice on how to strengthen the immune system (IS), how to help recovery after heavy effort, and how to prevent upper respiratory tract infection (URTI) in athletes. Methods: a literature search was performed on available public scientific databases. Results: URTI, a common illness among heavily trained athletes, occurs in the time frame of temporary depression of the IS following heavy training or competition. T cells have been identified as the main factor in the immune response counteracting the cascade of inflammation mediators. Life habits and environmental and psycho-social factors such as sleep loss and life stressors are the major causes of IS depression, and it emerges that there is an optimal training load exposure which reinforces the IS, with too little or too much training being detrimental. Conclusions: immunodepression in heavily trained athletes can be counteracted with a proper distribution of training loads, nutritional interventions, correction of lifestyle habits such as sleep hygiene, thermotherapy, and recovery techniques. Psycho-social interventions also seem to have a positive effect in reducing post-exercise inflammation and in boosting the IS response. Novel bioinformatic approaches can help in understanding the IS response in athletes and the management of critical situations.
Introduction
A 29-year-old male 3000 m runner (personal best 8:17.00), who recovered from COVID-19 after hospitalization for severe respiratory symptoms, has triggered a discussion in the Italian media on heavy exercise and the immune system in competitive athletes [1].
The immune system response is a complex concept which refers to thousands of different factors, some orchestrating together and some working independently or in local networks, mediated by many mechanisms at the transcriptional, molecular and systemic levels.
An Immune Exposure has been defined as "the process by which components of the immune system first encounter a potential trigger" [2]. To understand what really happens in the big picture of the immune response, an information theory approach has recently been tried, modeling the process from the existing body of knowledge about the different phenomena which characterize this response, producing a complex bioinformatics modeling system fed by data originating in the existing knowledge of the immune response [2]. This approach produced a software environment in which the immune response of the human organism can be mathematically simulated [2]. The immune response in athletes is, to some extent, different from that occurring in non-heavily trained organisms, due to the high demands of training. Further, different sports differ in their exposure to pathogens (e.g., water sports vs land sports) and in the environment where the sports take place (e.g., winter and mountain sports vs sports in hot environments). Also, the organization and the content of training in different sports determine different responses of the athlete's immune system, as do sex, age, genetic factors and level of qualification (e.g., amateur vs elite athletes). The interactions between these factors are manifold, complex and mostly unknown. For example, a young athlete who competes in a sport where physical appearance is important (e.g., artistic gymnastics) undergoes severe diet restrictions associated with heavy training regimes, while weight lifters have a high caloric intake associated with intense loads of relatively short duration, and so on. The literature on the immune system and sport is broad, but there are few studies dealing specifically with respiratory tract infections and training, and the existing theories are somewhat controversial.
Methods
An online literature search was performed using PubMed from the inception of the database to July 2020 with the following keywords used in different combinations: "upper respiratory tract infections and sport", "training and immune system", "endurance and immunity", "sleep and immunity", "exercise and inflammation", "upper respiratory tract infections", "stress and training", "training overload", "nutrition and recovery", "performance", "recovery", "fatigue", "stress". All titles and abstracts were carefully read, and relevant articles were retrieved for review. In addition, the reference lists from both original and review articles retrieved were also reviewed. Inclusion criteria were: dealing with respiratory (upper and lower) tract infections in well-trained and elite athletes; being performed in humans (with the exception of one relevant study); being in the English language; and being either experimental or theoretical studies. A total of 120 relevant papers were found, from which 41 were selected for review.
Sport and upper respiratory tract infections.
The existing literature about heavy exercise and the respiratory tract reports conflicting results about heavy training and its association with suppressed mucosal and cellular immunity and increased symptoms of upper respiratory tract infections (URTI). During competition in the cold, the incidence of upper respiratory tract infection is obviously very high: 20 out of 44 (45%) athletes and 22 out of 68 (32%) staff members of the Finnish team experienced symptoms of the common cold during a median stay of 21 days at the Winter Olympic Games [3]. These results are of course influenced by the environmental conditions, but it has also been shown that, even in the absence of cold weather, athletes participating in marathons in normal or hot environments [4] showed a 2- to 6-fold increased URTI risk during the 1-2 weeks post-race. In a large group of 2311 endurance runners, nearly 13.0% reported illness in the week after the Los Angeles Marathon race, compared with 2.2% of control runners [3]. This was confirmed by other epidemiological studies in triathletes and in marathon and ultramarathon race events and/or during and after very heavy training [5,6,7].
The decrease in exercise performance after a URTI can last 2-4 days, and runners who unwisely start an endurance race with systemic RTI symptoms are 2-3 times less likely to complete the race [8].
Nieman et al. observed that runners training >96 km/week doubled their odds of sickness (URTI) compared with those training <32 km/week [9]. Nieman also concluded that, following acute bouts of prolonged high-intensity endurance exercise, several components of the immune system are suppressed for several hours [10]. This has led to the concept of the 'open window' theory, which describes the 1- to 9-hour period following prolonged endurance exercise when the host's defenses are decreased and the risk of URTI is increased. During this 'open window' period, athletes should be advised to remain isolated from all possible sources of infection. The hypothesis of a J-shaped relationship between exercise dose and susceptibility to URTI was proposed by Shephard, who identified a long-term depressing effect on the immune system at both too-low and too-high exercise loads, with the heavier loads being a major factor predisposing to illness [11]. Another theoretical study [12] instead proposed a sinusoidal (S-shaped) relationship between infection odds ratio and training load: infections are frequent at low loads, lower with moderate exercise (the protective role of moderate exercise), high in heavily trained but non-elite athletes, and low again in elite athletes, who are assumed to have an innate resistance to infections [12]. The two shapes are contrasted in the toy sketch below.
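As a purely illustrative aid, the toy functions below caricature the two competing dose-response shapes; the functional forms and coefficients are invented for the sketch and are not fitted to any of the cited data.

import math

def j_shape(load):
    """J-shaped risk: moderate loads protective, heavy loads harmful."""
    return 1.0 - 1.2 * load + 0.5 * load ** 2   # minimum near load = 1.2

def s_shape(load):
    """S/sinusoidal risk: high when sedentary, low at moderate loads, high
    again in heavily trained non-elite athletes, low again at elite loads."""
    return 1.0 + 0.6 * math.cos(math.pi * load) * math.exp(-0.2 * load)

for load in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):     # arbitrary training-load units
    print(load, round(j_shape(load), 2), round(s_shape(load), 2))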
Psychological stress has been shown to influence the immune system, increasing the susceptibility to respiratory infections. Too much life stress has been experimentally proven to be a major factor in decreasing the body's defenses [11]. Elite athletes prone to recurrent URTI have altered/adverse cytokine responses to exercise in comparison with healthy athletes [13]. The consensus for studies of elite athletes is that low levels of salivary IgA and/or secretion rates, low pre-season salivary IgA levels, declining levels over a training period, and failure to recover to pre-training resting levels are associated with an increased risk of URTI [14]. Prolonged and strenuous aerobic exercise induces a marked decrease in the plasma concentrations of glucose and amino acids, which can lead to immunodepression. In this case, the maintenance of nutrient availability during and after vigorous exercise is essential for proper immune system control, which is coordinated by nutrient sensors (i.e., AMPK and mTOR) and metabolic pathways (i.e., glycolytic or oxidative phosphorylation) in immune cells [13]. The thymus, the site of production of T cells, is one of the main organs under stress in the immune response. Thymic activity is reduced by strenuous exercise [14]. Since thymic production of T cells naturally declines with age, experimental results have raised the concern that prolonging high-intensity exercise into the 4th decade of life may have deleterious consequences for athletes' health [14,15,16]. Other available data suggest that the effects of conditioning could be mediated by a preferential effect on T cells [12,16]. Prolonged and exhaustive exercise typically reduces peripheral blood type-1 T-cell numbers and their capacity to produce the proinflammatory cytokine interferon-γ [17]. Thymic stromal lymphopoietin (TSLP), an epithelial cell-derived cytokine, exhibits both pro-inflammatory and pro-homeostatic properties depending on the context and tissues in which it is expressed. It is well known that TSLP can trigger the production of Th2 cytokines, such as IL-13 and IL-4 [18].
Moreover, heavy training loads are associated with elevated numbers of resting peripheral blood type-2 and regulatory T-cells, which characteristically produce the anti-inflammatory cytokines interleukin-4 and interleukin-10, respectively. This appears to increase the risk of upper respiratory symptoms, potentially due to the cross-regulatory effect of interleukin-4 on interferon-γ production and the immunosuppressive action of IL-10 [17]. The time course of the inflammatory response is a key factor in determining the time windows that make the organism prone to infections. It is well known that the white blood cell (WBC) count increases acutely after strenuous exercise. We observed an acute (after 1 hour of cycling to exhaustion at >70% of VO2max, with 3 bursts of 10 minutes at >80% VO2max), but not chronic (after 1 month of training), increase in white blood cell count, from 6.27±2.34 ×10³/µL to 9.01±3.63 ×10³/µL, in trained cyclists [19].
The inflammatory response is a crucial biological process, which has been extensively studied in the context of skeletal muscle growth and repair, sarcopenia, and myopathies. Recruited and resident immune cells of injured muscle secrete pro-inflammatory cytokines such as IL-1, IL-8, IL-6, and TNFα, triggering a cascade of downstream inflammatory signaling pathways in which NFκB represents one of the most significant signaling molecules activated upon injury in skeletal muscle [20]. Environmental factors have a significant role in promoting respiratory tract infections. Air pollution has been shown to be a factor facilitating lung inflammation. Results from the literature suggest that acute PM2.5 exposure at different concentrations can cause different degrees of adverse effects on the lungs, especially at high (>500 μg/m³) concentrations [21]. Some benefit has been observed from low-intensity aerobic interval training in impeding the oxidative stress and inflammation caused by pollution exposure [21], helping to remove pollutants from the lungs. Some nutritional factors have been advocated to prevent RTIs in athletes. A protective role of vitamin D has been invoked for the general population [22], although no effect of vitamin D supplementation in preventing respiratory tract infections was found in athletes [23]. Vitamins C and E have also been advocated as protective substances [24], as has fruit consumption [25].
Recently, the intestinal microbiota has been indicated to have a potentiating role on the immune system [26]. The microbiota also stimulates T cells and neutrophils (Nf), inducing control of pathogen spreading, as well as B cells. Among nutritional countermeasures to exercise stress, probiotics (derived from dairy products, mainly lactobacilli) have recently been investigated and recommended to improve the gut microbiota [27]. However, the link between gut microbiota, mucosal immunity and exercise stimulation has not yet been fully explored, leaving several possibilities for further research in exercise immunology.
Strategies to boost the immune system in heavily trained athletes.
Recovery procedures and training load distribution are the most obvious strategies to boost the immune system in heavily trained athletes. Apart from the many nutritional suggestions, there are less explored and quite inexpensive ways to improve immunity in heavily training athletes. The first is the organization of the training. The tetradic system proposed in ancient times [28] warned athletes and trainers about the organization of the training week, to avoid overloads. As stated by Philostratus: "…after a first day of introductory mild-intensity training, the second day is dedicated to strenuous exercise, followed by a day of low intensity, and another day of mild intensity exercise". Proper sleep was also recommended, together with regular life habits and a proper diet. Thus, the risk of overtraining in lowering the body's defenses was well known even in ancient times. Proper sleep is necessary to boost the immune system, but a question arises as to what "proper" sleep is. Sleep can be characterized both by duration and by quality. For example, training placed in the late evening hours, due to the activation of the adrenergic system (hypothalamic-pituitary-adrenocortical axis), has been proven to be detrimental to good sleep and to decrease total sleep time, slow-wave sleep and REM sleep (the restoring sleep), while moderate training increases REM sleep [29]. The susceptibility window to infections after heavy training is a temporary phenomenon that can last from several hours to days [30], so another important preventive measure to protect the body during the susceptibility window is to adopt physical measures during the recovery phases; for example, hyperthermia (saunas) in cold environments has been shown to be an effective method. Repetitive mild hyperthermia has been proven to be effective in elevating CD56(+) NKT cells and B and T cells after 7 days of daily exposure at 40 °C [31,32]. After exercise, cryotherapy has been shown to have some effect on immune system recovery [33], lowering peripheral inflammation. Much evidence exists about the efficacy of massage in improving the immune system response. Recent studies in animals [34] and in men [35] have shown the efficacy of massage in boosting the T-cell repertoire and in decreasing noradrenergic innervation of lymphoid organs. Lifestyle interventions and psychological methods have also been proven to be effective in boosting the immune system, for example meditation and methods for coping with life stressors [36,37]. Quite simple activities, such as breathing control, can be helpful in reducing stress and boosting the immune system [38,39]. An emerging measure to boost the immune system is so-called nature-based therapy. Using a relaxing environment to decrease stress, with subjective (stress scales) and objective (lower catecholamines) improvements, has proven to be an effective way to restore and improve the immune system [40]. In this respect, social stress is known to be associated with a lowered immune system [41], and social situations of menace, panic or continuous pressure can seriously impair the immune system. High-level athletes must cope with high social pressure and thus stress, and this is also a possible concurrent cause of immunodepression. From this point on, the discourse about stress and thinking becomes philosophical or even religious, and is beyond the scope of this review.
Conclusions
Immunity is a complex interaction between organs and the environment, mediated by many genes, receptors, molecules, hormones, cytokines, antibodies, antigens and inflammatory substances, which in turn relate to psychological factors. The immune system response of heavily trained athletes has been theorized to follow a J- or S-shaped dynamic over time, with high training loads modifying the immune response, elevating the biological markers of immunity and the organism's susceptibility to infections. The cascade of inflammatory mediators has the T-cell system as its main player. Training in cold environments puts athletes at risk, and every manageable and affordable countermeasure must be considered by coaches and athletes. Athletes, who are considered healthier than the normal population, are in fact prone to infections of the respiratory tract, due to the lowering of the immune system in the time frames following heavy training sessions. Apart from behavioral interventions to minimize the "open window" effect on infection, some recommendations can be found in the scientific literature on how to cope with stressors and boost the immune system in athletes who are in heavy training or recovering after heavy competition. A key factor is a progression in training loads, a proper placement of training sessions, and allowing proper recovery between training sessions. Relatively simple measures can be adopted, such as sleep hygiene and increased personal hygiene, reducing exposure to possible sources of infection, and proper nutrition rich in vitamins; hot beverages in winter, sauna, and massage as recovery interventions can be helpful in preventing illness, together with vaccination. However, social factors, such as environmental pollution and cleanliness, are also emerging as important determinants of susceptibility to inflammation. The availability of food with a high content of high-quality nutrients, and even economic income, are determining factors of health. Further research is needed in the modeling of the immune response and on the interaction between physiological and social factors, bringing the sphere of the humanities into the model of the immune response.
Declarations
Competing Interests: The author declares no competing interests. | 2020-07-23T09:09:33.514Z | 2020-07-22T00:00:00.000 | {
"year": 2020,
"sha1": "0bb87e635ecc30cd7e04dac02b6767889576f9ec",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-46588/v1",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c1e22e102c4d256ae7a9bbc47c24dec0265cea64",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
107403721 | pes2o/s2orc | v3-fos-license | CONCENTRATIONS OF SODIUM 3α,7α-DIHYDROXY-12-OXO-5β-CHOLANATE IN BIOLOGICAL MATERIAL AFTER ITS INTRAVENOUS AND INTRANASAL APPLICATION
A newly synthesized derivative of a bile acid, the sodium salt of 3α,7α-dihydroxy-12-oxo-5β-cholanic acid (monoketocholanate), has shown good characteristics as an intranasal transport enhancer of xenobiotics. The aim of our study was to explore whether it influences bile metabolism and to measure its concentration in blood and bile after intravenous and intranasal administration. The experiment was performed in vivo on adult male Wistar rats. The determination of monoketocholanate (MKCh) in rat blood and bile was carried out by high-performance liquid chromatography (HPLC) on an HP ODS2 column, using methanol/acetonitrile/acetate buffer as the mobile phase. Absorbances were measured at 210 nm. Blood samples were taken from the prepared right axillary artery at 0, 1, 5, 10, 20, 30, 60, 90, 120 and 180 minutes from the beginning of the experiment. Bile was collected at half-hour intervals during the three-hour period. The results showed that MKCh changed the amount of excreted bile depending on the route of application. Intranasal application increased the bile volume and the MKCh concentration, both in blood and bile, compared to the intravenous application (p<0.05). The distribution of MKCh through the animal organism depends on the route of application of the substance, which probably determines its characterization as a transport promoter of applied xenobiotics. HPLC proved to be a relatively simple, fast and effective method for the determination of the synthetic bile acid MKCh in these biological materials. Key words: bile acids and salts, HPLC method, intranasal and intravenous administration, rats
INTRODUCTION
Bile acids and their salts have found many applications in medicine, agriculture and pharmacy [1,2,3]. They are very attractive for substance transport research, owing to their chemical properties and detergent-like action [4]. Some scientists have reported that bile acids can be used in monitoring the phylogenetic origin of vertebrates [5]. As far as chemical structure is concerned, keto (oxo) derivatives of natural bile acids have been detected as metabolites, named "tertiary bile acids". As is known, the hydroxy derivative of cholanoic acid is found only in human bile, and some amount of a cholanoic acid metabolite is found only in human feces [6]. The main intermediates in the process of reduction of cholic to chenodeoxycholic acid are 3α,7α-dihydroxy-12-keto-5β-cholanoic acid and its esters [7]. Monoketocholanate, its sodium salt, has been synthesized from 3α,7α-dihydroxy-12-keto-5β-cholanoic acid [8]. This salt appeared to be an effective promoter of the intranasal resorption of insulin [9,10,11], as well as of salicylate and morphine transport through brain endothelial cells [12,13].
STUDY OBJECTIVE
The aim of this work was to examine the possible influence of MKCh on bile secretion by measuring the amount of excreted bile. Another challenge was to determine, for the first time, the MKCh concentration in blood and bile after intravenous and intranasal applications of MKCh in experimental rats.
MATERIAL AND METHODS
Experiments were carried out in vivo on white male Wistar rats (body weight 200-300 g) over a three-hour time interval. The animals had free access to water and food (with a 12-hour succession of light and dark periods), and were then fasted for eight hours prior to the experiment. The experiments described in this study complied with ethical principles according to the standards of Good Laboratory Practice.
The doses of administered substances were calculated on the basis of the animal's body weight.
MKCh was used in the form of its sodium salt (cholanate), which was synthesized in the Department of Chemistry, Faculty of Sciences, Novi Sad. The substance was applied intravenously or intranasally to the experimental animals at a dose of 4 mg/kg b.w. Concentrations of MKCh were measured on a Hewlett-Packard 1090 Series II HPLC instrument, using an HP ODS2 (10 cm × 2.1 mm, 5 μm) column, with methanol/acetonitrile/buffer (pH 7.4) as the mobile phase, at a flow rate of 1.0 mL/min.
Experimental groups
Animals were divided into the following groups:
- Control groups: animals received physiological solution intravenously or intranasally, and blood was taken afterwards. Blood samples were taken from the prepared right axillary artery at 0, 1, 5, 10, 20, 30, 60, 90, 120 and 180 minutes from the beginning of the experiment;
- Test groups: animals received 4 mg/kg b.w. of MKCh intravenously (MKCh-iv) or intranasally (MKCh-in), and the MKCh concentration was measured in blood at ten time points;
- Control groups: animals received physiological solution intravenously or intranasally, and bile was collected afterwards in six time intervals (0-30 min, 31-60 min, 61-90 min, 91-120 min, 121-150 min and 151-180 min);
- Test groups: animals received 4 mg/kg b.w. of MKCh intravenously (MKCh-iv) or intranasally (MKCh-in), and bile was measured in six time intervals.

Methods

Rats were previously anesthetized by intraperitoneally injected urethane (0.75 mg/kg b.w.). The MKCh solution was applied to the rats through the prepared left jugular vein (2.0 mL/kg) or intranasally (0.2 mL/kg). A blood volume of 0.15 mL was taken with a micropipette, then centrifuged and prepared for the high-performance liquid chromatography (HPLC) experiment, following the procedure developed in the course of this work. The MKCh kinetics in rat blood after intranasal application through the left nostril was monitored by collecting blood at the determined time intervals.
Bile concentrations of MKCh over time were also measured after intravenous (iv) and intranasal (in) application. A cannula for bile collection was inserted into the immobilized ductus choledochus. The MKCh solution was then injected intravenously or intranasally over 10 s. The excreted bile was collected in 30-min intervals for 180 min. After measuring the total amount of collected bile, 20 μL of the liquid were taken and prepared with 40 μL of acetonitrile for further biochemical analysis.
MKCh absorbances were measured by HPLC at 210 nm. Separation of MKCh in the animal bile lasted 15 min. Quantification was carried out by a computerized procedure of measuring the area under the peak and comparing it with MKCh standards of different concentrations.
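To make the quantification step concrete, the following is a minimal sketch of an external-standard calibration, assuming a linear relationship between peak area and concentration; all concentration and area values are illustrative placeholders, not data from this study.

```python
import numpy as np

# Illustrative external-standard calibration for HPLC quantification:
# peak areas of MKCh standards at known concentrations (hypothetical values).
std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])    # ug/mL
std_area = np.array([12.1, 24.5, 60.8, 122.3, 243.9])  # arbitrary area units

# Fit a straight line: area = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_conc(peak_area: float) -> float:
    """Convert a measured peak area to an MKCh concentration (ug/mL)."""
    return (peak_area - intercept) / slope

sample_area = 85.0  # area under the MKCh peak in a bile sample (hypothetical)
print(f"Estimated MKCh concentration: {area_to_conc(sample_area):.1f} ug/mL")
```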
Statistical analysis
The data analysis included mean values with standard deviations, the area under the curve (AUC) of MKCh in blood and bile, and percentages (%) of excreted MKCh in the biological materials. All data were analyzed by Student's t-test and analysis of variance (ANOVA); probabilities of less than 5% were considered statistically significant.
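As an illustration of the AUC and significance calculations described above, here is a short sketch applying the trapezoidal rule over the blood-sampling time points listed in the Methods; the concentration profiles and per-animal AUC values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Blood sampling times from the Methods (minutes).
t = np.array([0, 1, 5, 10, 20, 30, 60, 90, 120, 180], dtype=float)

# Placeholder concentration-time profiles (ug/mL) for one iv and one in animal.
conc_iv = np.array([0, 42.0, 30.5, 22.1, 12.4, 8.0, 3.1, 1.2, 0.5, 0.1])
conc_in = np.array([0, 2.5, 4.8, 6.9, 8.2, 7.5, 5.9, 4.4, 3.6, 2.0])

def auc_trapezoid(time, conc):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum(np.diff(time) * (conc[:-1] + conc[1:]) / 2))

print(f"AUC iv: {auc_trapezoid(t, conc_iv):.1f} ug*min/mL")
print(f"AUC in: {auc_trapezoid(t, conc_in):.1f} ug*min/mL")

# Comparing per-animal AUCs between groups with a two-sample t-test
# (hypothetical AUC values for several animals per group).
auc_group_iv = [510, 478, 532, 495]
auc_group_in = [455, 470, 430, 488]
t_stat, p_value = stats.ttest_ind(auc_group_iv, auc_group_in)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (significant if p < 0.05)")
```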
RESULTS
The first chromatographic records of monoketocholanate in blood are presented in Figure 1, and its presence in bile is shown in Figure 2.
The results obtained by measuring the volumes of secreted bile in the control and treated groups over the course of three hours are presented in Graph 1.
In the case of intravenous application, the bile volume was already reduced by 50% in the 60th minute (p<0.01), and it was significantly lower in all time intervals relative to the initial time point (p<0.05). Compared to controls, the volume of excreted bile was also much lower, but significantly so only in the last measured period (p<0.05). On the other hand, the intranasal application of MKCh resulted in an increased bile volume relative to controls and to iv application. The increase was statistically significant from the 60th to the 120th minute and in the last measured period (p<0.05) compared to iv administration, but not compared to controls. Also, the AUC of bile volume for MKCh-in was higher than the AUC for MKCh-iv over the three-hour observation time (p=0.05).
The results of measuring time changes of MKCh concentration in blood samples are presented in Graph 2.
As can be seen, there was a statistically very significant difference in MKCh concentration after its intravenous and intranasal application. In the first 10 min, the MKCh-iv concentration was much higher (p<0.001), whereas in the 120th min the MKCh-in concentration exceeded the MKCh-iv value (p<0.01). The overall amount of excreted MKCh-iv was 248.61 μg, significantly higher (p<0.01) than the excreted amount of MKCh-in (70.58 μg). However, comparison of the areas under the curve (AUC) showed that the difference was not statistically significant (Table 1).
The maximal MKCh concentration in bile was measured in all treated animals in the first 30 min (Graph 3).
Graph 3. Time changes of MKCh concentration in rat bile after intravenous (MKCh-iv) and intranasal (MKCh-in) application
After 60 min, the MKCh-iv value dropped almost fourfold (p<0.05) and continued to fall until the end of the experiment. The presence of MKCh was not registered after the 120th min. Compared to the initial period (0-30 min), a strong decrease of MKCh-iv was determined (p<0.05), which was not the case for the intranasal application of MKCh. The MKCh-in concentrations in rat bile were higher than the MKCh-iv ones in all measuring intervals and, except in the first and the last half hour, the difference was statistically significant (p<0.05). Compared to the first collection period, the MKCh-in concentration was also significantly higher in all time intervals (p<0.05). Statistical significance between the AUC of MKCh-iv and MKCh-in was noted (p<0.05) and is presented in Table 1.
There was no statistically significant difference between the groups in the overall amount of applied substance (Table 2), nor were there differences in the animal body weights (not presented). There is an evident difference (p<0.05) in the amount of bile excreted after intravenous compared to intranasal application of MKCh. Thus, the percentage of MKCh excreted after the intranasal application (30.05%) was more than 3.5 times higher than the percentage excreted after the intravenous application (8.3%).
Bile acids may increase the solubility of slightly soluble drugs [13]. This might be of importance in view of the fact that bile acid derivatives have already been used for the treatment of various diseases and for enhanced transport through biological barriers [12,13,14,15,16,17,18,19]. Knowing the kinetics of these synthetic derivatives can help in finding simple methods for the application of drugs. This could provide an efficient mode for future bile acid research and, at the same time, for medical practice in prevention and therapy.
DISCUSSION
The nasal epithelium acts as a barrier for high-molecular-weight compounds such as desmopressin, insulin and human growth hormone [14,15]. On the other hand, the intranasal route is already known as a noninvasive way of administering drugs for systemic therapy [16]. Earlier studies confirmed the benefits associated with the ability of bile salts to promote drug transport through intranasal routes [17,18]. Monoketocholanate (MKCh) was synthesized by selective oxidation of cholic acid in several steps, with the aim of obtaining the sodium salt of 3α,7α-dihydroxy-12-oxo-5β-cholanic acid [8]. MKCh, investigated for its pharmacokinetic and pharmacodynamic properties, was applied through the intravenous and intranasal routes to experimental rats. The bile volume, expressed as the area under the curve of the total amount of excreted bile over the observed period of three hours, showed no statistical difference compared to controls for either route of application (p>0.05). Based on that result, we assumed that MKCh bioavailability is good for intranasal administration.
However, it is not yet clear what causes the enhancement of bile secretion by intranasal application of MKCh. Compared to the first collection period, it was significantly higher in all time intervals. It also remains unclear what the possible pathways of distribution and metabolism of MKCh after its intranasal application could be, which yielded such a high MKCh concentration in the bile. We did not measure the concentration of MKCh in other biological materials (feces, for example), which could eventually have explained the route of MKCh kinetics. One possible answer could be the interactions of MKCh with the physiological environment based on its chemical properties (MKCh is a hydrophilic bile salt). Evidently, the organism reacted to the additional amount of bile acid, and MKCh distribution took place via penetration into other compartments.
Bile acid derivatives have already been used for the treatment of various diseases and for enhanced transport through biological barriers due to their physicochemical properties. They can influence membrane fluidity, mucus viscosity or enzyme activity. They can inhibit enzymatic activity in the membrane and thus improve the bioavailability of drugs. One important mechanism for improving nasal bioavailability lies in their ability to open the tight junctions of the nasal epithelium [24,25,26,27].
There is evidence that MKCh can promote the absorption of many substances [9-12, 28, 29, 30]. MKCh can affect membrane fluidity and improve passive diffusion. Also, MKCh is less toxic than other bile salts [31]. The critical micelle concentration (CMC) is one of the factors responsible for cytotoxic effects. Essentially, more hydrophobic bile salts cause hemolysis below their CMC [32,33,34], whereas hydrophilic ones cause hemolysis above their CMC. MKCh is supposed to induce hemolysis at a concentration higher than its CMC, due to the position of its keto group [30,31]. This can be another benefit of using this bile acid derivative to promote drug penetration through the nasal route.
Chemical transformations of bile acids and their determination in biological materials have challenged researchers to pay attention to their biosynthesis and to develop various analytical methods [19,20,21,22,23]. The HPLC method is quite simple and precise. It does not require much time for sample preparation. HPLC is very convenient for qualitative identification and precise quantitative determination of this originally synthesized bile acid in biological materials. As a relatively inexpensive method, it might be commonly used for the determination of other novel synthetic bile acids and contribute to future bile acid research.
CONCLUSION
In summary, knowledge of the kinetics of this and other synthetic derivatives can help in finding simpler methods for the application of physiologically active derivatives, which may have medical applications. This could provide an effective way of investigating bile acids as well as their medical use in prevention and therapy. In addition, since intranasally given monoketocholate greatly influenced the volume of the bile, it could be the basis for exploring potentially new cholagogues.
Table 1. MKCh concentration after intravenous and intranasal administration of 4 mg/kg MKCh, expressed as AUC in blood and bile within the 3-hour interval (columns: area under the curve in rat serum; area under the curve in rat bile).
"year": 2018,
"sha1": "498634a936334b9593b3aad676e17f2b09ff0cf0",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-8773/2018/0350-87731802075S.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "498634a936334b9593b3aad676e17f2b09ff0cf0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Longitudinal changes in energy balance during pregnancy in South African women from the Tlokwe Municipal area
Background: Energy balance, in the era of obesity, contributes to challenges in healthy weight maintenance. The study aims to determine the changes in energy intake and expenditure from the first to the third trimester of pregnancy in women from the Tlokwe Municipal area.

Methods: We followed a longitudinal observational design to measure healthy pregnant women in the first (9-12 weeks), second (20-22 weeks) and third trimester (28-32 weeks). A validated, semi-quantitative food frequency questionnaire determined energy and macronutrient intakes. Energy expenditure (EE) was calculated from resting energy expenditure, as measured by indirect calorimetry (FitMate®), and activity energy expenditure, measured by combining heart rate and accelerometry (ActiHeart®). Energy balance was calculated as the difference between energy intake and energy expenditure. A mixed-model analysis was performed to determine significant differences between energy expenditure and intake during pregnancy.

Results: Energy intake increased from the first (8841 ± 3456 kJ/day) to the second trimester (9134 ± 3046 kJ/day) and declined in the third trimester of pregnancy (8171 ± 3017 kJ/day). A negative energy balance was found during the first (-1374 ± 4548 kJ/day) and third trimesters (-1331 ± 3734 kJ/day), whereas a minor positive energy balance was observed in the second trimester (380 ± 4212 kJ/day). Resting energy expenditure showed significant differences between the second and third, as well as the first and third trimesters. Changes in activity energy expenditure throughout pregnancy showed practical significance between the first and third trimesters.

Conclusions: Energy intake and expenditure during pregnancy did not differ. The additional energy expenditure in the third trimester could be attributed to resting energy expenditure and a decrease in activity energy expenditure.
Background
Energy requirements during pregnancy are defined as the energy intake from food that balances energy expenditure when the woman has a body composition and physical activity level consistent with good health [1,2].
The primary energy requirements of pregnancy provide the means for adequate maternal weight gain to ensure growth of the fetus, placenta and associated maternal tissues [2,3]. Secondary energy requirements must allow for increased metabolic demands in addition to the energy necessary to maintain adequate maternal weight, body composition and physical activity throughout gestation, as well as providing energy stores to assist in lactation after delivery [2,3]. The additional energy required for pregnancy is estimated to be negligible during the first trimester, 1646 kJ/day in the second trimester and 2092 kJ/day in the third trimester [4]. By balancing energy intake with the energy requirements for fetal growth and activity, energy balance will be reached, where energy intake equals energy expenditure [2].
Energy balance during pregnancy can be achieved by various methods, including decreasing resting energy expenditure, mobilizing maternal fat stores, decreasing physical activity or increasing energy intake by increasing food intake [3,5-8].
A systematic review found that energy intake during pregnancy in developing countries was 8971 ± 1034 kJ/day, with the authors reporting a significantly higher energy intake in the third trimester compared to the first trimester [3]. However, another study found that healthy pregnancies can be achieved without significant increases in energy intake [9]. Women eat significantly more than is required to meet the energy requirements for a healthy pregnancy and therefore gain, on average, excess weight of more than 12 kg [10]. Excessive energy intake that contributes to excessive weight gain may increase the risk of developing pregnancy-induced hypertension [11] and increase the risk of caesarean birth, macrosomia [12], postpartum weight retention and gestational diabetes [13]. Therefore, the belief that increasing dietary energy intake will lead to an improved pregnancy outcome has no evidence base [3,14]. Energy intake guidelines should, therefore, be individually adjusted to meet variations in basal metabolic rate, body composition before and during pregnancy, gestational weight gain and physical activity [15].
In conjunction with energy intake, maternal macronutrients (carbohydrates, lipids and protein) influence fetal growth [3,16,17]. The recommended percentage energy distributions of macronutrients during pregnancy are similar to those for healthy women, with the assumption that dietary energy intake is sufficient to maintain current body weight [3,18]. In addition, dietary intakes of pregnant women do not align with country-specific energy and macronutrient recommendations [3]. More specifically, total fat and saturated fat intakes were generally higher than the recommended guidelines, while carbohydrate and poly-unsaturated fatty acid intakes were lower than recommended [3].
During pregnancy, total energy expenditure increases due to the energy required for fetal growth, development of the placenta and various maternal tissues, as well as changes in maternal metabolism and the increase of energy expended during movement and activities of daily living resulting from weight gain [1,2,9,19]. Total energy expenditure consists of basal metabolic rate, diet-induced thermogenesis and energy expended during daily living activities and physical activity [2].
Basal metabolic rate, the primary component (60%) of total energy expenditure, refers to the lowest level of energy expended at rest [20]. Basal metabolic rate tends to increase during pregnancy due to increased tissue mass and thus increased energy cost for maintenance [1,9,21]. Increases in both total energy expenditure and resting energy expenditure are more pronounced in the second and third trimesters [1,21]. According to Butte & King [1], on average, basal metabolic rate increases by 4.5% in the first, 10.8% in the second and 24.0% in the third trimester, respectively.
Diet-induced thermogenesis is the energy required to digest and assimilate food and is considered to be small, about 5-10% of total energy expenditure [8]. Little scope exists for energy savings concerning diet-induced thermogenesis during pregnancy, but there is considerable scope for adaptations in basal metabolism [22].
Any changes that occur in physical activity levels during pregnancy will have important implications for maternal energy requirements [23]. Energy expended during physical activity, or activity energy expenditure, refers to any energy expended above the resting level due to bodily movement [24]. Activity energy expenditure contributes about 25-30% of total energy expenditure [25] in a developed-world context. Energy expended during physical activity tends to decline during pregnancy due to decreases in habitual physical activity [1,26-28], which are likely due to minor discomforts such as leg cramps, swelling, fatigue, shortness of breath, difficulties in movement due to weight gain and perceptions that physical activity might pose risks for the fetus [29,30]. Physical activity may be reduced during pregnancy by selecting less demanding activities or decreasing the pace of activities [19].
Broad variations in energy requirements during pregnancy exist between well-nourished women in developed countries and women from low-income, developing societies, where the availability of nutritious foods is limited [1,2,6,13]. Special consideration should be given to women with low and high Body Mass Index (BMI), as energy adaptations or responses to pregnancy may not reflect optimal nutritional conditions [4].
Underweight (BMI < 18.8 kg/m²) Gambian women living under the constraints of limited food supply and obligatory intense physical activity reduced their resting energy expenditure to allow the delivery of a viable infant, who may or may not be small for gestational age, depending on the severity of the energy imbalance [5]. Normal-weight women in developing countries with unlimited food availability tend to conserve energy by reducing physical activity [5,28]. However, this is not always the case, since hormonal changes facilitate fat deposition and, due to a non-restrictive energy supply, these women tend to gain additional fat stores [5]. With regard to overweight women (BMI > 25 kg/m²) in developed countries with free access to food, ample energy reserves are present at conception to protect fetal growth; therefore, there is no need to accumulate fat [5,19].
To offset the potential for further increases in energy storage, basal metabolic rate increases in overweight and obese women [5,31]. However, if excessive energy storage occurs despite increases in basal metabolic rate, the excessive weight gain can be detrimental for both mother and infant [5,17]. Promoting methods to increase energy expenditure in overweight pregnant women can be extremely valuable for promoting energy balance and a good pregnancy outcome by reducing excessive weight gain [17,32].
In South Africa, the prevalence of obesity, especially among women, has increased due to urbanization, increased wealth, increased dietary intake and decreased physical activity [33]. Furthermore, cultural factors shape South African (SA) women's eating habits, such as overeating at social gatherings where food is abundant, associating particular foods with social status, being more accepting of being overweight and relating thinness with illness and HIV/AIDS [34]. SA black women living in urban towns in South Africa were found to have a diverse eating pattern, which leads to the consumption of an energy-dense diet that is high in protein and fat [35]. Another study of non-pregnant SA women found total energy expenditure to be lower in black than in white women, due to the lower measured activity energy expenditure and smaller fat-free mass in black women [36]. Preventing excessive weight gain and treating obesity in young black women by propagating a healthy lifestyle [37] is essential during the reproductive period.
Energy requirements during pregnancy should be derived from healthy populations with favorable outcomes [1,4]. As stated by Löf [19], if the energy expended on physical activity is unknown, pregnant women may be encouraged to increase their energy intake above the required levels, potentially leading to an increased risk of excessive weight gain. Both sides of the energy balance equation, energy intake and expenditure, should be accounted for in relation to gestational weight gain and birth weight [3,38].
Therefore, this paper aims to determine the changes that occur in energy intake and expenditure from the first to the third trimester of pregnancy in women from the Tlokwe Municipal area of South Africa. It was hypothesized that both energy intake and energy expenditure would increase significantly from the first to the third trimester of pregnancy in these women.
Benefits of the study include objective measurements of energy expenditure in combination with energy intake, which lead to a more accurate determination of energy balance. If energy imbalances occur, corrective measures can be taken by means of nutritional and physical activity guidelines during pregnancy.
Research design
A longitudinal observational cohort study design was followed within the longitudinal Habitual Activity Patterns during PregnancY (HAPPY) study. The study aimed to determine the longitudinal changes in energy intake and energy expenditure from the first to the third trimester of pregnancy. Women were measured in their first (9-14 weeks), second (20-22 weeks) and third trimesters (28-32 weeks) of pregnancy. These measurements were purposefully aligned with the recommended sonar measurements that are routinely performed by gynecologists. The setting for the study was the Tlokwe Municipality of the North-West Province, South Africa.
Participants
The study recruited 41 pregnant women. Participants were recruited using advertisements placed in the local press and in the consulting rooms of local gynecologists and clinics in the Tlokwe Municipality of Potchefstroom, North West Province, South Africa. Participants were included in the study based on the following criteria: healthy pregnant women of any ethnic background, over 18 years of age and in their first trimester of pregnancy (9-14 weeks of gestation). Participants were excluded from the survey if they were mentally disabled or had physical limitations. A health screen for risk factors for physical activity during pregnancy and for cardiovascular disease was performed to determine whether participants were included in or excluded from the study, as indicated by the American College of Sports Medicine [39]. The participants who indicated interest in the study were asked to give their informed consent to participate by signing an informed consent form. Ethical approval, complying with the Declaration of Helsinki, was obtained from the Ethics Committee of the North-West University (NWU-00044-10-A1).
Demographic and pregnancy-related information

During the first measurement, a demographic questionnaire was used to obtain information about the participants' ethnicity and age. This questionnaire was compiled specifically for the current study.
Additional questions about pregnancy-related data were added to the questionnaire, which included the following: recalled pre-pregnancy weight (kg), weeks of pregnancy, type of pregnancy (single, twins or triplets), expected date of birth and the number of previous pregnancies.
Energy intake measurements
At every measurement point, each participant's dietary intake was measured using a semi-quantitative food frequency questionnaire, which determined the nutrient intake of the participant in every trimester [40]. The data were analyzed using the Food-Finder 4 program (Medical Research Council, Tygerberg, South Africa). Energy intake (kJ/day), as well as carbohydrate, lipid and protein intake (g/day), was calculated.
Energy expenditure measurements
Total energy expenditure was calculated as the sum of resting energy expenditure, diet-induced thermogenesis and activity energy expenditure.
Resting energy expenditure
Resting energy expenditure was determined by indirect calorimetry with the FitMate™ (Cosmed, Italy). The FitMate™ is a metabolic analyzer designed for the measurement of oxygen consumption and energy expenditure during rest and exercise [41]. The FitMate™ gives accurate and reproducible oxygen consumption and resting energy expenditure measurements for female adults (r = 0.97, p = 0.066) [41]. With the FitMate analyzer, ventilation is measured by a turbine flow meter, while the fraction of oxygen in expired gases is measured through a galvanic fuel cell oxygen sensor [41]. The FitMate™ uses standard metabolic formulas to calculate oxygen consumption (measured in mL/min), while energy expenditure (measured in kJ/day) is calculated using a fixed respiratory quotient of 0.85 [41].
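The instrument's internal formula is not reproduced in the text; a common way to convert oxygen consumption to energy expenditure under a fixed respiratory quotient is the abbreviated Weir equation, so the sketch below assumes that form and should be read as illustrative only.

```python
KCAL_TO_KJ = 4.184

def resting_ee_kj_per_day(vo2_ml_min: float, rq: float = 0.85) -> float:
    """Estimate resting energy expenditure from oxygen consumption.

    Uses the abbreviated Weir equation with a fixed respiratory
    quotient, so VCO2 is derived as rq * VO2 (an assumption; the
    FitMate's internal formula may differ).
    """
    vo2_l_min = vo2_ml_min / 1000.0
    vco2_l_min = rq * vo2_l_min
    kcal_per_min = 3.941 * vo2_l_min + 1.106 * vco2_l_min
    return kcal_per_min * 1440 * KCAL_TO_KJ  # minutes/day -> kJ/day

# Example: 250 mL/min of resting oxygen consumption (illustrative value).
print(f"REE ~ {resting_ee_kj_per_day(250):.0f} kJ/day")
```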
Participants were requested not to perform any exercise during the 24 hours preceding the resting energy expenditure measurement. They were also requested to fast for at least 10 hours before this measurement. The FitMate™ was calibrated before each participant was measured. During the test, participants were requested to remain awake for the entire testing period while they breathed through an anti-bacterial filter for 15 minutes after a 10-minute resting period. During that time, the fraction of oxygen in expired gases was quantified to determine resting energy expenditure. The first minute of the measurement was discarded, as it was considered the stabilization period for breathing. Resting energy expenditure was then divided by weight and presented in kJ/kg body weight.
Activity energy expenditure

Activity energy expenditure was determined using an objective assessment based on combined accelerometry (movement counts) and heart rate response (ActiHeart®, CamNtech Ltd., Cambridge, UK). The device is a waterproof, self-contained logging device that allows physical activity to be measured synchronously with the heart rate [42]. The ActiHeart® reports simulated heart rate to within a beat per minute above 30 beats per minute, which is comparable to heart rate monitors [42,43]. The device is worn on the chest and consists of two electrodes (connected by a short lead) that clip onto two standard electrocardiograph (ECG) pads. The reliability and validity of the device for measuring physical activity were scientifically validated (p = 0.9) in healthy pregnant women in Switzerland [44].
At every measurement interval in the study, activity energy expenditure was measured over a period of seven days. The following parameters were extracted from the data: activity energy expenditure, diet-induced thermogenesis and total energy expenditure. These measurements were all expressed in kJ/kg/day.
For the ActiHeart® to calculate the most accurate activity energy expenditure, resting energy expenditure was determined and an eight-minute calibration step test with a ramp protocol on a step box (21.5 cm in height) was conducted. The resting energy expenditure measurement and the step test were administered at every measurement interval. This individual calibration step test develops a heart rate and VO2 regression line specifically for pregnant women and takes into account the physiological changes that the women experience during pregnancy [44].
Before the measurements, the ECG pads were placed on the chest to form an arc across the heart. Fifteen minutes were allowed to establish a good signal, ensuring an accurate measurement of the heart rate over the following seven consecutive days. When a sound signal was established, the eight-minute calibration step test with the ActiHeart® for the activity energy expenditure calculation was done. Participants were able to stop at any time during the step test if they experienced any discomfort or fatigue. Two minutes of quiet sitting were required after the step test to determine the participants' recovery heart rates.
As soon as the step test information was downloaded using the accompanying software, the ActiHeart® was set to "Advanced Energy Expenditure" mode. Participants wore the device for seven consecutive days. The ActiHeart® was programmed to measure energy expenditure using 30-second epochs (counts per minute). The women were advised to take the monitor off when they were bathing or showering and to put it on again immediately afterward. The device was removed after seven days and the captured data were downloaded using the accompanying software (Version 2.132, Cambridge Neurotechnology Ltd., Cambridge, UK). Subjects were encouraged to wear the ActiHeart® for the full seven days. If a subject did not adhere to this recommendation, the data were trimmed and only the days on which measurements were taken were included. For accurate results, participants should have worn the ActiHeart® for at least four days, of which one should have been a weekend day [45].
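The wear-time rule described above (at least four measured days, one of them falling on a weekend) can be expressed directly; the sketch below is a minimal illustration, with hypothetical dates.

```python
from datetime import date

def wear_days_valid(days: list[date]) -> bool:
    """Check the wear-time rule used here: at least four measured
    days, at least one of which falls on a weekend."""
    if len(days) < 4:
        return False
    return any(d.weekday() >= 5 for d in days)  # 5 = Saturday, 6 = Sunday

# Example: five measured days including a Saturday (illustrative dates).
measured = [date(2019, 3, 4), date(2019, 3, 5), date(2019, 3, 6),
            date(2019, 3, 7), date(2019, 3, 9)]
print(wear_days_valid(measured))  # True
```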
Diet-induced thermogenesis
Diet-induced thermogenesis was estimated by the ActiHeart® device, which factors it in as a constant 10% of the total energy expenditure [45].
Energy balance
In the study, energy balance was determined by applying the following equation: energy balance (kJ/day) = energy intake - total energy expenditure.

Gestational weight gain

Gestational weight gain, measured with an electronic scale (Beurer, Germany), was computed for each trimester by subtracting the measured weight of the previous trimester from the weight measured in the specific trimester. Gestational weight gain (kg) in the first trimester was calculated by subtracting the self-reported pre-pregnancy weight from the weight measured in the first trimester.
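As a minimal illustration of the two calculations above, the sketch below applies the balance equation to the trimester means reported in the Results (the implied total energy expenditure values are back-calculated and shown only to demonstrate the arithmetic) and computes trimester-to-trimester weight gain from hypothetical weights.

```python
# Energy balance per trimester: EB = energy intake - total energy expenditure.
# Mean intakes and balances are taken from the Results; the implied TEE is
# back-calculated purely to illustrate the arithmetic.
energy_intake = {"T1": 8841, "T2": 9134, "T3": 8171}    # kJ/day
energy_balance = {"T1": -1374, "T2": 380, "T3": -1331}  # kJ/day

for tri, ei in energy_intake.items():
    implied_tee = ei - energy_balance[tri]
    print(f"{tri}: intake {ei}, implied TEE {implied_tee}, "
          f"balance {energy_balance[tri]:+d} kJ/day")

# Gestational weight gain: weight in a trimester minus weight in the
# previous one (pre-pregnancy weight is self-reported). Weights here
# are hypothetical.
weights = {"pre": 65.0, "T1": 68.2, "T2": 73.4, "T3": 78.3}  # kg
order = ["pre", "T1", "T2", "T3"]
for prev, curr in zip(order, order[1:]):
    print(f"GWG {curr}: {weights[curr] - weights[prev]:+.1f} kg")
```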
Statistical analysis of data
Statistical analyses were performed with the SPSS software package, version 25 (IBM Corp, NY). Descriptive statistics of the baseline characteristics were determined, reporting means and standard deviations. Descriptive statistics were also calculated for the energy intake and energy expenditure variables. Body Mass Index (BMI) was categorized into underweight, normal, overweight and obese classifications according to the ACSM's scale [24]. Changes in energy intake and expenditure throughout pregnancy were analyzed using a mixed-model analysis with a Bonferroni post-hoc test and an unstructured covariance structure. The dependent variables, energy intake (kJ) and energy expenditure (kJ) from the first to the third trimester of pregnancy, were included in the analysis. Statistical significance of the changes in energy intake and energy expenditure was set at a p-value lower than 0.05. For practical significance (Cohen's d), 0.2 can be considered a 'small' effect size, 0.5 a 'medium' effect size and 0.8 a 'large' effect size.
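For reference, Cohen's d with a pooled standard deviation can be computed as sketched below; the exact variant used in this analysis is not stated, so the pooled form is an assumption, and the sample values are hypothetical.

```python
import numpy as np

def cohens_d(group_a, group_b) -> float:
    """Cohen's d with a pooled standard deviation; 0.2/0.5/0.8
    correspond to small/medium/large effects as in the text."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1)
                   + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative resting energy expenditure samples (kJ/day), two trimesters.
t1 = [5900, 6100, 5800, 6250, 6000]
t3 = [6800, 7000, 6600, 7200, 6900]
print(f"Cohen's d = {cohens_d(t1, t3):.2f}")
```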
Results
The demographic information of the participants is presented in Table 1. The average age of the participants was approximately 28 (± 5) years. There was an equal distribution of participant ethnicity among the population groupings in South Africa. Gestational weight gain equated to 3.23 ± 3.61 kg in the first trimester, 5.18 ± 5.37 kg in the second trimester and 4.90 ± 3.10 kg in the third trimester. Energy intake and expenditure variables are presented in Table 2. Energy intake did not differ significantly throughout pregnancy. There was a small increase in energy intake from the first trimester (8841 ± 3456 kJ/day) to the second trimester (9134 ± 3046 kJ/day), whereas energy intake declined in the third trimester of pregnancy (8171 ± 3017 kJ/day). Macronutrient intake is also presented in Table 2.
Carbohydrate intake was calculated as 256.4 ± 99.9 g/day, 264.7 ± 86.2 g/day and 244.5 ± 137.7 g/day in the first, second and third trimesters of pregnancy, respectively. Similar to energy and carbohydrate intake, lipid intake increased from the first trimester (71.5 ± 33.0 g/day) to the second trimester (77.9 ± 34.6 g/day) and declined in the third trimester (67.6 ± 24.0 g/day). Protein intake decreased throughout pregnancy, from 81.7 ± 44.1 g/day in the first trimester to 79.2 ± 32.8 g/day in the second trimester and 67.7 ± 20.8 g/day in the third trimester. Results from the mixed-model analysis are presented in Table 2. Changes in energy expenditure according to BMI classifications per trimester of pregnancy are presented in Figure 1. Overweight women's total energy expenditure exceeded that of normal-weight women in all three trimesters. Resting energy expenditure increased from the first trimester (6116 kJ/day) to the third trimester (7669 kJ/day) in overweight women, while normal-weight women's resting energy expenditure decreased slightly from the first trimester (5019 kJ/day) to the second trimester (4991 kJ/day) before increasing in the third trimester (6779 kJ/day). Activity energy expenditure decreased considerably in overweight women from the first trimester (3929 kJ/day) to the third trimester (2056 kJ/day), while normal-weight women's activity energy expenditure increased from the first trimester (2965 kJ/day) to the second trimester (3362 kJ/day) and thereafter decreased in the third trimester (2056 kJ/day). The energy balance during the first (-1375 ± 4548 kJ/day) and the third trimester (-1331 ± 3734 kJ/day) was negative, while a minor positive energy balance was observed in the second trimester (380 ± 4212 kJ/day).
Discussion
The aim was to determine the longitudinal changes that occur in energy intake and expenditure throughout pregnancy. High variability in energy intake and energy expenditure, and thus in energy costs, is characteristic of pregnant women [4]. As such, recommendations in terms of energy intake and habitual physical activity patterns should be population-specific and determined by the socio-economic and cultural factors that are specific to the population [4]. The subjects in the current study were representative of the South African population in terms of ethnicity.
Energy intake
In this study, no significant changes in energy intake were observed throughout pregnancy.
Similarly, other studies reported non-significant increases in energy intake [1,3,46]. In this study, the participants presented with a negative energy balance in the first and third trimesters of their pregnancies, which corresponds with previous studies [3,46]. In spite of the negative energy balance, an increase in weight was still observed. These findings are similar to those of a systematic review, which found that women from both developed and developing countries appear to consume only a quarter of the theoretical requirement for additional energy, despite uniquely rapid weight gain [3].
Macronutrient intake
No statistically significant changes in macronutrient intake per trimester of pregnancy were observed in this study. However, a systematic review of developed countries found macronutrient intake to increase during each trimester of pregnancy [47]. In contrast, in this cohort carbohydrate intake increased from the first to the second trimester, while a decrease was found from the second to the third trimester of pregnancy. Similar to carbohydrate intake, this cohort's lipid intake increased from the first to the second trimester and decreased from the second to the third trimester of pregnancy. Protein intake decreased from the first to the third trimester. The difference in macronutrient intake could be attributed to the differences between developed and developing countries, where the availability of additional food outside the normal intake is hampered by poverty.
Resting energy expenditure
Nonetheless, dietary insufficiencies in undernourished women might be diminished by energy-sparing strategies to protect fetal growth [31]. One of the energy-sparing adaptations that occur is a decrease in resting energy expenditure, also known as metabolic "flexibility" of the resting energy expenditure during pregnancy [31]. In this cohort, resting energy expenditure declined from the first to the second trimester, yet increased in the third trimester. The increase in resting energy expenditure in the third trimester has been found to be significantly correlated with higher body mass during pregnancy [25]. Although no statistically significant changes were observed in resting energy expenditure across the trimesters, a medium practically significant difference was observed between the first and third trimesters, as well as between the second and third trimesters. This medium practically significant difference emphasizes the contribution of resting energy expenditure to total energy expenditure, whereas the contribution of activity energy expenditure tends to be relatively small and constant from one person to another [25].
Activity energy expenditure
Another energy-sparing adaptation is a decrease in energy expenditure through physical activity [23,25].
Despite non-statistically significant differences in activity energy expenditure throughout pregnancy, a practically significant decline in activity energy expenditure was observed from the first to the third trimester. A non-significant decrease in activity energy expenditure during pregnancy was also found, which, in accordance with other studies, shows that physical activity declines during pregnancy [25,26,48-53]. The decline in activity energy expenditure can be accounted for by women shifting toward less intense and more comfortable modes of activity, probably to avoid the risk of maternal and fetal injuries as well as to accommodate an increase in body weight.
Energy expenditure of normal versus overweight/obese women

Findings on energy expenditure variables in normal versus overweight or obese women vary [4]. In agreement with [4], in this study the overweight or obese women's total energy expenditure was higher than that of the normal-weight women. The increase in total energy expenditure can primarily be ascribed to the greater increase in resting energy expenditure in the overweight or obese women compared to the normal-weight women [4,25,54]. Furthermore, activity energy expenditure decreased more noticeably in overweight or obese women from the first to the third trimester than in normal-weight women. Finally, sedentary SA women have been found to tend to exceed the recommended weight-gain ranges; however, that study found physical activity to increase as the pregnancy progressed [55].
Strengths and limitations of the study
The strengths of the study include the measurement of both dietary energy and macronutrient intake in conjunction with energy expenditure data captured objectively by a combined heart rate and accelerometry monitor (ActiHeart®). When examining energy balance, data should include both energy intake and energy expenditure [3,38].
The findings of this study should, however, be interpreted with some limitations in mind, such as the reliance on self-reported data through the semi-quantitative food frequency questionnaire. Underreporting of energy intake among pregnant women should be acknowledged in dietary research [56].
Furthermore, due to the convenience sampling method applied, women who were more active might have been more interested in participating in the study. Activity energy expenditure data were captured during a five-day period, which could have led to changes in the participants' normal behavior during the study period. Another limitation was the use of estimates of body composition. Lastly, compliance with the longitudinal design of the study was weak, especially among women from low socio-economic areas, which limited the sample size of the study.
Conclusion
In conclusion, no statistically significant changes were observed in energy intake and expenditure during pregnancy in women from the Tlokwe Municipality area. The additional energy expenditure in the third trimester, mostly attributed to resting energy expenditure, was partly compensated for by the decrease in activity energy expenditure. Energy-sparing adaptations may be more important in balancing the energy budget of pregnant women in populations where food intake is restricted and the demands of physical activity are higher. Overweight and obese pregnant women had a higher energy expenditure compared to normal-weight pregnant women, primarily due to their higher resting energy expenditure.
However, their activity energy expenditure decreased more markedly when compared to normal-weight women. Variability in responses to the energy requirements of pregnancy is essential in providing a healthy pregnancy outcome for both mother and infant. Recommendations for future research include the study of how variations in energy balance influence gestational weight gain and fetal growth.

Ethics approval and consent to participate

The participants who indicated interest in the study were asked to give their informed consent to participate in the study by signing an informed consent form. Ethical approval, complying with the Declaration of Helsinki, was obtained from the Ethics Committee of the North-West University (NWU-00044-10-A1).
Consent for publication
Not applicable.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.
Funding
The research was funded by the National Research Foundation South African-Swiss Joint Programme, award number UID 78606, and the South African Sugar Association, project no. 224. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors, and the NRF therefore does not accept liability in regard to them.
Authors' contributions

SJ and YS conceived and designed the study. AF is a PhD candidate and, together with SJ, collected the data and performed the analysis. AF drafted the manuscript. SJ and YS critically reviewed the manuscript. All authors read and approved the final manuscript.
"year": 2019,
"sha1": "94635133546d8faf6597fcf52c782c605a64f89f",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-5120/v1.pdf?c=1585619472000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3d0ddb538c00499fdc7bc45bd54ee90f20f53a0f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Ototoxicity of cisplatinum.
LETTER TO THE EDITOR
Sir - Skinner et al., in a recent article (1990) 61: 927-931, concerning ototoxicity of cisplatinum in children and adolescents, make a number of points on which we would like to comment.
The authors take significant hearing loss to be a deterioration in hearing threshold of 20 decibels or greater at any frequency. We would consider this to be a significant change in hearing, but not to be equivalent to a significant hearing loss. We were surprised that a difference was made in the change in hearing threshold between younger (40 dB) and older children (20 dB). A change of hearing threshold of 15 dB or more should be considered significant in the clinical setting at any age over 7-8 months. Perhaps the authors have confused the 40 dB cut-off used in the grading system, which is of clinical importance and implies hearing loss and disability, with that of a significant change in hearing.
The statistical analysis made by Skinner et al. uses maximum hearing loss which they define as being the maximum loss in the right ear plus that in the left, divided by two. This method makes the results worse than they actually are in terms of hearing disability. The more standard method of assessment used by the British Society of Audiology is the standard weighted average hearing threshold which is: (4 x the loss in the better ear + 1 x the loss in the worse ear)/5.
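The contrast between the two averaging methods is easy to illustrate; the sketch below implements both formulas, with hypothetical thresholds, and shows that the method of Skinner et al. yields a higher (worse) value for the same ears.

```python
def weighted_average_threshold(better_ear_db: float, worse_ear_db: float) -> float:
    """Standard weighted average hearing threshold described in the
    letter: (4 x better ear + 1 x worse ear) / 5."""
    return (4 * better_ear_db + worse_ear_db) / 5

def maximum_loss(left_db: float, right_db: float) -> float:
    """The averaging used by Skinner et al.: (left + right) / 2."""
    return (left_db + right_db) / 2

# Example: 30 dB loss in the better ear, 70 dB in the worse ear.
print(weighted_average_threshold(30, 70))  # 38.0 dB
print(maximum_loss(30, 70))                # 50.0 dB
```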
The plateau effect that Skinner et al. found at 8000 Hz with a cumulative cisplatinum dose of 600 mg m-2 is interesting but may be misleading. We would disagree about it being of clinical importance and if we look at our group of 29 children (in press) in the same way there is no plateau. We would suggest that in a small group of patients, where wide individual susceptibility is apparent, the median bears little relevance to the clinical situation in any one child. As an example, 7 of the 29 patients in our group had received a cumulative dose of cisplatinum of 600 mg m-2 and their hearing loss at 8000 Hz (mean of the right and left ear) was 0.0, 7.5, 32.5, 62.5, 70, 72.5 and 87.5 with a median of 62.5.
The authors go on to discuss partial recovery of hearing loss. In seven patients, with high-frequency hearing loss of grade two or more, that we have followed up with multiple audiograms for at least 5 years, we have seen no recovery. The example of partial recovery shown by the authors uses results obtained at a frequency of 8000 Hz. This is the most difficult reading to obtain accurately, it needs to be calibrated more carefully and regularly than the other frequencies and children give more accurate results in the middle frequencies.
The authors then state that severe hearing loss can be asymptomatic and that there is no relation between our ototoxicity grading and the presence or absence of symptoms. To justify this statement, Skinner et al. would have to have applied a recognised hearing disability questionnaire or made an objective measurement of speech discrimination levels. From the article this does not appear to have been done. In our experience, severe high-frequency hearing loss is always symptomatic if the child is fully assessed. However, mild to moderate high-frequency hearing loss is not always immediately recognised by the patient, parent or schoolteacher. The child unconsciously learns to lip read, and it may be some time later before the child starts to fall behind at school and the degree of handicap and the need for hearing aids are appreciated.
"year": 1991,
"sha1": "abff930ba99c3dc73c0b07a0a0da22c11d9e9afe",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc1971632?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "abff930ba99c3dc73c0b07a0a0da22c11d9e9afe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Preliminary survey of the mayflies (Ephemeroptera) and caddisflies (Trichoptera) of Big Bend Ranch State Park and Big Bend National Park
The mayfly (Insecta: Ephemeroptera) and caddisfly (Insecta: Trichoptera) fauna of Big Bend National Park and Big Bend Ranch State Park are reported based upon numerous records. For mayflies, sixteen species representing four families and twelve genera are reported. By comparison, thirty-five species of caddisflies were collected during this study representing seventeen genera and nine families. Although the Rio Grande supports the greatest diversity of mayflies (n=9) and caddisflies (n=14), numerous spring-fed creeks throughout the park also support a wide variety of species. A general lack of data on the distribution and abundance of invertebrates in Big Bend National and State Park is discussed, along with the importance of continuing this type of research.
Introduction
The invertebrate faunas of national and state parks historically have been given a low priority by the United States National Park Service, and typically they are ignored unless they become pests (Ginsberg, 1994). Studies have noted that few national parks have initiated systematic inventories of invertebrates and essentially no research has been done towards this end in over half of the national parks (DeWalt et al., 2005; Flory et al., 2000; Jacobus et al., 2003; Jacobus et al., 2005; Kondratieff et al., 2002; Sharkey, 2001; Stohlgren et al., 1991a; Stohlgren et al., 1991b). Baseline data for aquatic environments in and adjacent to the Rio Grande in western Texas, including those in Big Bend National Park and Big Bend Ranch State Park, are relatively sparse. Indeed, the aquatic invertebrate fauna of these parks is poorly documented, having never been comprehensively surveyed (Bowles, 1997; Bane et al., 1978; Gloyd, 1958; Ross, 1944; Tinkham, 1934). Such a paucity of information on these systems must be considered in light of the U.S./Mexico Border XXI Program's identification of loss of species diversity in the Rio Grande corridor as an issue of primary concern (United States Environmental Protection Agency 1996). The broad diversity of aquatic habitats in the Big Bend area, including numerous permanent and temporary springs, spring-runs, water tanks, and the Rio Grande, suggests that a rich variety of aquatic invertebrates occur in the Park.
The lack of data on the distribution and abundance of the majority of invertebrate groups in Big Bend National and State Park seriously impedes ecological investigations that could be used to support management decisions in those parks. In particular, studies concerning fisheries ecology, monitoring for potential or actual introductions of exotic species (e.g., Asian clam, zebra mussel, various fishes), water quality and monitoring of pollution events, and other anthropogenic disturbances can be severely confounded by a paucity of aquatic invertebrate data. The potential for using invertebrates as indicators of ecosystem disturbance is seen, for example, in the fact that aquatic invertebrates often are used successfully as indicators of ecosystem health. Because of the importance of invertebrates to ecosystem function, detailed surveys of invertebrate faunas are essential for making decisions necessary to effectively manage conservation areas. Such information allows for construction of a comparative framework to evaluate future changes in species composition and distribution that may occur in these systems. The National Park Service has the goal of conducting baseline inventories to support long-term ecological monitoring, including biomonitoring to support assessments of non-point sources of pollution (Freet et al., 1999; Nimmo et al., 2002). Ginsberg (1994) recommended targeting inventories of selected invertebrate groups for ecological monitoring. The Ephemeroptera, Plecoptera and Trichoptera are the insect orders comprising the Ephemeroptera-Plecoptera-Trichoptera (EPT) index commonly used in assessments of stream water quality and integrity (Barbour et al., 1999). Therefore, a baseline inventory of the mayflies and caddisflies of Big Bend National and State Park is crucial to developing management criteria for protecting aquatic systems in the Parks. There are no known species of Plecoptera (stoneflies) from the Big Bend region. The only species of Plecoptera known from west Texas, Isoperla jewetti Szczytko and Stewart (Szczytko et al., 1977), is a relict population from near El Paso that is likely no longer present.
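As a minimal illustration of how an EPT metric can be derived from survey records like those reported here, the sketch below counts distinct EPT taxa in a hypothetical sample; the records shown are illustrative, not data from this study.

```python
EPT_ORDERS = {"Ephemeroptera", "Plecoptera", "Trichoptera"}

def ept_richness(taxa: list[tuple[str, str]]) -> int:
    """Count distinct EPT taxa in a sample, where each record is an
    (order, species) pair. Higher EPT richness generally indicates
    better water quality."""
    return len({species for order, species in taxa if order in EPT_ORDERS})

# Illustrative sample records (order, species).
sample = [
    ("Ephemeroptera", "Fallceon quilleri"),
    ("Ephemeroptera", "Callibaetis pictus"),
    ("Trichoptera", "Hydroptila sp."),
    ("Diptera", "Chironomus sp."),  # not counted toward EPT
]
print(ept_richness(sample))  # 3
```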
Big Bend National Park, Big Bend Ranch State Park, Black Gap Wildlife Management Area, and the Santa Elena Canyon Reserve in Mexico are collectively referred to as the "Big Bend Region" and they lie adjacent to each other in the Chihuahuan desert region of far West Texas (Figure 1). Water resources in this region are generally scarce and Black Gap Wildlife Management Area essentially lacks natural surface waters. However, numerous springs and small streams occur throughout the 1.1 million acres of land comprising the national and state parks, and they are bordered by the Rio Grande on the south.
Although there have been numerous surveys of the terrestrial insect fauna within these two parks, little attention has been focused on the aquatic insects. Basic survey work on mayflies and caddisflies in the Big Bend region has been largely neglected. Previously, only cursory attempts have been made to document this fauna. Previous species-level records of mayflies in the Big Bend region are scattered, and include records found in McCafferty and Davis (1992), Henry (1993), and Baumgardner (1997). Similarly, previous species-level records of caddisflies in this region include those of Bowles and Flint (1997), Moulton and Harris (1997), and Bowles et al. (1999). Bowles (1997) prepared a checklist of the caddisflies of Big Bend National Park, but this list was not published. Moulton and Stewart (1997) presented a checklist of the caddisflies known to occur in Texas at that time, including the Big Bend region, but they did not include specific or regional distributional information. The objectives of this study are to present a preliminary assessment of the diversity and distribution of mayflies (Ephemeroptera) and caddisflies (Trichoptera) in Big Bend National Park and Big Bend Ranch State Park, Texas, and to discuss relevant biological and ecological information related to selected species.
Materials and Methods
Samples were collected from a wide variety of aquatic habitats throughout the national and state parks (Table 1, Figure 1). The Black Gap Wildlife Management Area was not sampled because it is essentially devoid of surface water, and the Santa Elena Canyon Reserve was not sampled due to bureaucratic restrictions. Latitude and longitude were collected using hand-held global positioning system units (Garmin 45®, Garmin Etrex®, www.garmin.com), and elevation data were collected with the Garmin Etrex. However, the accuracy of the data was dependent on the conditions under which they were measured at a given time. Elevation was not collected at all sites.
Several collecting methods were used in this study, depending on the season and the type of habitat sampled. Sampling methods for adult specimens included ultraviolet (UV)-light traps, mercury vapor lights, Malaise traps, emergence traps, sweep netting of riparian vegetation, and rearing from immature stages. At some collection localities, more than one method was used. Immature stages were collected directly from the source habitat. Species-level identification of most aquatic invertebrates depends chiefly on the morphology of adults because taxonomic knowledge of the immature stages is insufficient. Specimens were preserved in the field with 70% isopropyl alcohol, except immature caddisflies, which were fixed in Kahle's solution (Wiggins, 1996).
Results
Sixteen species of mayflies were collected during this study, representing four families and twelve genera (Table 2). The greatest diversity was found within the families Baetidae and Leptophlebiidae, with seven species (44% of the total) and five species (31% of the total), respectively. The specific identity of one species, Brachycercus sp. (Caenidae), remains unresolved at this time.
Thirty-five species of caddisflies were collected during this study representing 17 genera and 9 families. The greatest diversity was recorded for the microcaddisfly family Hydroptilidae with 18 species, or just over 51% of the total diversity in our samples. The specific identities of some species of caddisflies in our collections remain unknown at this time.
Ephemeroptera species within Big Bend
Family Baetidae
Acentrella ampla Traver: This species was reported from the Rio Grande at Santa Elena Canyon by McCafferty and Davis (1992). Acentrella ampla is also known from the midwestern and eastern United States.
Baetis magnus McCafferty and Waltz: B. magnus is known from numerous localities throughout the southwestern United States, and occurs throughout much of Central America, south to Panama (McCafferty et al., 1997). Larvae of this species are often found in small spring runs, their apparent preferred habitat.
Callibaetis montanus Eaton: The species is known from Arizona, New Mexico and west Texas south to Nicaragua (Lugo-Ortiz et al., 1996). Larvae were found only in the Rio Grande at Santa Elena Canyon, clinging to snag material in an area of the river having low current velocity.
Callibaetis pictus (Eaton): C. pictus is a widespread species throughout western North America and is known from as far south as Costa Rica (Lugo-Ortiz et al., 1996). Larvae were common and abundant in pools of numerous spring-fed streams and in very small, stagnant pools. Larvae were collected at elevations ranging from 1,500 meters to over 2,200 meters. This was the only species of mayfly collected above 2,200 meters.
Many spring-fed streams in the Big Bend region are reduced to small, isolated, stagnant pools throughout much of the summer. Callibaetis pictus larvae commonly occur in these pools, where they can be abundant. The larvae varied from those with black wing pads (about to emerge) to early instars. These larvae apparently can survive severe drying conditions and appear well adapted to the highly variable and often temporary spring runs of the desert region of Big Bend. Mature larvae were collected in late April and early May.
Camelobaetidius kickapoo Randolph and McCafferty: This species was first reported (McCafferty et al., 1992) as Camelobaetidius sp. 1, and then formally described as C. kickapoo by Randolph and McCafferty (2000). Camelobaetidius kickapoo is also known from Colorado and Arizona (Randolph et al., 2000).
Camelobaetidius mexicanus (Traver and Edmunds): This species is known from throughout much of Texas and Mexico (Lugo-Ortiz et al., 1995), and Idaho (Lester et al., 2002).
Fallceon quilleri (Dodds): This is an extremely common and wide-ranging species known from throughout the United States and Central America, south to Costa Rica (Lugo-Ortiz et al., 1994). It was collected at numerous locations in both parks. Habitat of the larvae ranged from small spring-fed creeks to the Rio Grande.
Family Caenidae
Brachycercus sp.: This is the first record of this genus from the Big Bend region and west Texas. Only adults were collected, making species identification impossible at this point. Considering that no other species of Brachycercus are known from west Texas, and that the adults do not match any of the described species, this is likely an undescribed species. A formal description is not possible until larvae are associated and its uniqueness confirmed.
Caenis bajaensis Allen and Murvosh: C. bajaensis is known from throughout the southwestern United States and Mexico (Provonsha, 1990).
Family Leptohyphidae
Tricorythodes fictus Traver: The larval stage of T. fictus was only recently described by Baumgardner et al. (2003). Larvae are found commonly in many streams throughout Texas, and the species may also occur in Mexico. Big Bend represents the westernmost known limit of this species.
Tricorythodes explicatus (Eaton): Although this species is only known from the extreme southwestern United States and northwestern Mexico (Allen et al., 1987), Tricorythodes minutus Traver, which is known from throughout much of the western United States, is likely a synonym of T. explicatus.
Family Leptophlebiidae
Choroterpes inornata Eaton: This species is distributed throughout the southwestern United States and into northwestern Mexico (McCafferty, 1992). Larvae appear to be restricted to cool, isolated mountain streams and pools (Baumgardner et al., 1997). Choroterpes inornata was first reported from Cattail Falls, Big Bend National Park, by Baumgardner et al. (1997). These authors raised the possibility that this population might represent a new species or subspecies of C. inornata because of the larvae's extremely long antennae and caudal filaments, which were two to three times the body length. Examination of reared male imagoes and additional larvae indicates that, although this character does appear unusual for the species, it might perhaps be an adaptation to life in pools. No other characters have been found to support either species or subspecies status for the Big Bend populations. Larvae were common at Cattail Falls and Oak Creek, both very small, spring-fed creeks. Mature larvae were collected in late April and May.
Farrodes mexicanus Dominguez: Although F. mexicanus is known only from adults, the larvae collected from the Big Bend region were very mature, and their abdominal color pattern matched that of F. mexicanus, indicating they are probably the undescribed larval stage of this species (W.P. McCafferty, personal communication). Additional collections of larvae and reared adults will be necessary to confirm this suspicion. The apparent presence of F. mexicanus in Texas is a new United States record for this species, which was previously known only from southern Mexico. Farrodes mexicanus larvae were collected from Ojito Adentro in Big Bend Ranch State Park and from Oak Creek at "The Window" in Big Bend National Park. Both locations are small, permanently flowing, spring-fed creeks. Only a few larvae were found at each location, clinging to the underside of small stones and rocks in regions of the stream with little flow.
Neochoroterpes oklahoma (Traver): This is the most widely distributed species of Neochoroterpes, known from throughout much of Texas, Oklahoma, Colorado, New Mexico, and northern Mexico (Henry, 1993). Larvae live on the underside of rocks in moderate current in medium-sized streams and rivers (Henry, 1993). As first observed by Baumgardner et al. (1997), larvae collected from the Rio Grande had a very small, untracheated branch of abdominal gill 1, which could easily lead to confusion with Neochoroterpes nanita. However, the maxillary and labial palps of N. oklahoma have very long setae while those of N. nanita do not. In addition, no adults of N. nanita have been collected from Big Bend, whereas adults of N. oklahoma are common there in the late spring and summer.
Thraulodes gonzalesi Traver and Edmunds: This is a common species in many of the river drainages of central Texas and is also known from scattered localities in northeastern Mexico (Allen et al., 1978). Larvae were found commonly at numerous localities in the Rio Grande.
Traverella presidiana (Traver): This species is found commonly throughout rivers in Texas and northeastern Mexico. Larvae prefer moderately large to large rivers and can be found clinging to rocks and debris (Allen, 1973). In the Big Bend region, this species has only been found in the Rio Grande.
Family Calamoceratidae
Phylloicus aeneus (Hagen): This Neotropical species is widely distributed from throughout central Texas westward into the Big Bend region and southward throughout Central America (Prather, 2003). Larvae inhabit small to moderate volume springs and spring-runs.
In the Big Bend region, larvae were found living in the cool spring-runs of the Chisos Mountains and from Ojito Adentro on Big Bend Ranch State Park, but they typically are absent from small springs of the lowland desert where trees are few in number or absent, and ambient water temperatures are high.
Family Glossosomatidae
Protoptila alexanderi Ross: This was the only glossosomatid caddisfly collected during this study, and it was found only in the Rio Grande. This species is primarily distributed in eastern and central Texas, and this record represents a substantial western range extension.
Family Helicopsychidae
Helicopsyche borealis (Hagen): This species is common and widely distributed in the United States (Wiggins, 1996). The genitalia of specimens we have examined vary somewhat from examples taken elsewhere in Texas, suggesting that the Big Bend population may represent an undescribed species. However, considerable genetic variability may occur among the various populations of H. borealis (Jackson et al., 1992), suggesting the genitalic differences may be a phenotypic artifact. We have collected Helicopsyche mexicana Banks elsewhere in western Texas, but not from the Big Bend region.
Family Hydropsychidae
The diversity of hydropsychids in the Big Bend region is low, likely due to a paucity of flowing-water habitat. Only five species of hydropsychids were collected, most from the Rio Grande, from Terlingua Creek, a lowland tributary of the Rio Grande, and from flowing-water habitats on Big Bend Ranch State Park. Three species, Cheumatopsyche campyla Ross, Cheumatopsyche lasia Ross and Smicridea fasciatella McLachlan, are common and fairly widespread (Gordon, 1974; Flint, 1974). Examples of a western species, Cheumatopsyche arizonensis, were collected at Ojito Adentro in Big Bend Ranch State Park, which appears to mark the eastern limit of this species' distribution. Similarly, Smicridea signata (Banks), primarily distributed in the southwestern U.S., Mexico and Central America (Flint, 1974), also appears to have its eastern range limit in Big Bend.
Family Leptoceridae
Leptocerids were poorly represented in collections. Only one species, Nectopsyche gracilis (Banks), was collected from the Big Bend region, from Terlingua Creek in Big Bend National Park. No leptocerids were collected at Big Bend Ranch State Park. Other leptocerids are known from western Texas, including Oecetis avara (Banks), Oecetis inconspicua (Walker) and Oecetis cinerascens, but none of these species was collected during this study.
Family Hydroptilidae
Eighteen species of hydroptilids were collected from the Big Bend region. Several of these species are common and widely distributed in North America (Blickle, 1979). However, several others are much less common, or their collections in the Big Bend region represent extensions of their respective known eastern ranges.
Alisotrichia arizonica (Blickle and Denning): Bowles et al. (1999) described the larva of this unusual microcaddisfly (Hydroptilidae), which occupies madicolous habitats receiving most of their flow from springs. In the Big Bend region, A. arizonica is restricted to mountain springs in the national park and a spring-run, Ojito Adentro, in Big Bend Ranch State Park. This species also is known from Arizona and Utah, and from an unpublished record from a mountain spring-run in Chihuahua, Mexico (Bowles, personal observation).
Neotrichia spp.: Two species in this genus are known from the Big Bend region. The type locality for Neotrichia sonora Ross is Neville Spring in Big Bend National Park (Ross, 1944), and only two male paratypes of the type series remain; the male holotype was destroyed in a shipping accident several years ago. Neotrichia sonora was not collected from the Big Bend region during this study. However, N. sonora has been collected from a small spring-run in the mountains of Chihuahua State, Mexico (Bowles, unpublished data), not far from Big Bend National Park. The proximity of this collection to the Big Bend region suggests this species likely still occurs in the vicinity of the type locality. Neotrichia minutisimella (Chambers), the other species collected during this study, is widely distributed in the central and eastern U.S. (Blickle, 1979).
Hydroptila spp.: Four species in this genus were collected during this study. Hydroptila angusta Ross is widely distributed and common east of the Rocky Mountains in the U.S. (Blickle, 1979). The three remaining species, Hydroptila arctia Ross, Hydroptila icona Mosely, and Hydroptila protera Ross, are all widely distributed in the central and southwestern U.S.
Leucotrichia limpia Ross has been reported from the southwestern U.S. southward through Central America (Flint, 1970). The type locality for this species is Limpia Creek, located in the Big Bend region (Ross, 1944).
Mayatrichia spp.: Two species in this genus were collected, Mayatrichia acuna Ross and Mayatrichia ayama Mosely. The former species is widely distributed in the southwestern U.S. and northern Mexico, while the latter is widely distributed throughout much of North America (Blickle, 1979).
Oxyethira spp.: Two species of Oxyethira were collected from the Big Bend region during this study, Oxyethira aculea Ross and Oxyethira azteca (Mosely). Both species are widely distributed in the southwestern U.S. (Blickle, 1979).
Family Limnephilidae
Limnephilus sp.: Only larvae of this genus were collected during this study, so a specific determination could not be made. Five species of Limnephilus have been recorded for the western portion of Texas, including L. adapaus Ross, L. frijole Ross, L. lithus (Milne), L. tulatus Denning, and L. taloga Ross (Ruiter, 1995). Specimens were taken exclusively from lowland desert springs in Big Bend National Park.
Family Odontoceridae
Marilia nobsca Milne: This species was collected from several locations in the Big Bend Region. It also occurs elsewhere in the Southwestern U.S., Mexico and Guatemala (Bueno-Soria et al., 2004).
Marilia sp.: Larvae and adults of this unusual species were collected throughout the Big Bend region. The specimens appear closely related to Marilia flexuosa Ulmer, but they differ in several respects. The eyes of the male specimens from the Big Bend region are widely separated and are roughly 1.5 times as large as those of the female, whereas the eyes of male M. flexuosa nearly touch on the midline and are more than twice the size of the female eyes. Also, the scutellum of the Big Bend specimens is evenly colored and lacks any distinct marks, while the scutellum of M. flexuosa is distinctly marked with a light pigment bar along the meson. However, no differences in the genitalia of either sex were found between the Big Bend material and typical examples of M. flexuosa. The larvae of the two species also can be distinguished on the basis of markings found on the head and thorax. While the Big Bend material clearly appears not to be M. flexuosa, it may represent either an undescribed species or Marilia mexicana (Banks), which is known from northwestern Mexico. Marilia mexicana is known only from the female holotype (Bueno-Soria et al., 2004), and male and immature stages have not yet been associated. Although the Big Bend specimens may indeed be M. mexicana, a formal description of the larvae and adult cannot be accomplished until further research is completed and the female holotype has been examined. Marilia flexuosa appears to be absent from the Big Bend region although it commonly occurs elsewhere in Texas and the southwestern U.S., Mexico and southward to South America (Flint, 1967; Flint, 1991; Bueno-Soria et al., 2004).
Family Philopotamidae
Four species of Chimarra and a single species of Wormaldia are currently known from the Big Bend region, and all of them are relatively common. Chimarra larvae were collected from several locations, but they could not be identified to species. Chimarra ridleyi (Denning) and C. angustipennis (Banks) are widely distributed throughout the southwestern U.S. and southward through Central America (Armitage, 1991; Blahnik, 1998). Chimarra adella Denning and C. utahensis (Ross) are known primarily from the southwestern U.S. and northern Mexico (Armitage, 1991; Flint, 1967; Ross, 1951). Western Texas appears to be the eastern boundary of the respective ranges of these two species. Similarly, Wormaldia arizonensis (Ling), the only representative of this genus known to occur in western Texas, is primarily distributed in the southwestern U.S. and northern Mexico (Armitage, 1991; Flint, 1967).
Family Polycentropodidae
The only polycentropodid collected in the Big Bend region, Polycentropus halidus Milne, is primarily distributed in the southwestern U.S. and northern Mexico (Denning et al., 1966; Flint, 1967). Western Texas appears to represent the easternmost boundary of this species' distribution.
Discussion
Sixteen species of mayflies were collected during this study, but the identity of one species, Brachycercus sp. (Caenidae), remains unresolved at this time due to the lack of larval specimens. It possibly represents a new species, but correlation of larval and adult life stages and additional research will be necessary to make this determination. Among collection locations, the highest species diversity was observed in the Rio Grande, which accounted for nine of the sixteen species collected. Three species, C. inornata Eaton, F. mexicanus Dominguez, and C. pictus (Eaton), are apparently restricted to small, permanently flowing spring-fed creeks.
The number of mayfly species within Big Bend is perhaps lower than might be predicted based upon the numerous aquatic habitats. However, this low diversity could be explained by the fact that the highest mayfly diversity is often found in highly aerated, rocky, rapidly flowing permanent water bodies (Berner et al., 1988). The vast majority of aquatic habitats in Big Bend are small spring-fed creeks and streams, many of which dry out during much of the year. In addition, many permanent creeks in the Big Bend region are often reduced to stagnant, unconnected pools for much of the year. Even the Rio Grande, the largest aquatic habitat in the region, can become completely dry during the summer months and droughts. This lack of suitable habitat probably explains why mayfly diversity is so low.
The mayfly fauna of Big Bend has strong Neotropical affinities. The majority of species documented from Big Bend are either wide-ranging species in North and Central America, such as B. magnus and F. quilleri, or those known principally from the southwestern United States and Central America (C. bajaensis, C. montanus, C. pictus, C. inornata, and T. explicatus). A few species (C. mexicanus, T. presidiana, T. gonzalesi) are distributed chiefly in the south-central United States, south throughout Central America. Farrodes mexicanus was previously known only from southern Mexico.
The caddisfly fauna of Big Bend is quite diverse, and most of its components are from southwestern North America and the Neotropics. This was clearly shown in the diverse microcaddisfly family Hydroptilidae. Other families, including the Leptoceridae and Polycentropodidae, are generally common elsewhere in North America, but they are poorly represented in the Big Bend region, suggesting a relationship with the paucity of permanent water sources in this area. The identity of some species remains unresolved at this time due to either taxonomic uncertainty (Marilia sp.) or the absence of adult specimens in collections (Chimarra spp., Hydroptila sp., Limnephilus sp.).
Big Bend National Park appears to contain a much greater diversity of mayflies and caddisflies than Big Bend Ranch State Park. This is probably due to the greater abundance and diversity of aquatic habitats in Big Bend National Park, or it could simply reflect collection bias, as Big Bend National Park has been more thoroughly surveyed. The high diversity of mayflies (n=9) and caddisflies (n=14) in the Rio Grande is not surprising considering the size of the river and its normally permanent flow. However, undersampling of the more than 300 springs occurring on the desert floor also may have contributed to this difference, although many of the springs that were sampled contained no mayflies or caddisflies. Among spring-fed creek sites, Cattail Falls (Site 12, Table 3) and Oak Creek at The Window (Site 13, Table 3) support the greatest diversity of mayflies and caddisflies. Both creeks normally flow throughout the year, but due to extensive regional drought in recent years these systems have been reduced to intermittent pools. Numerous other small springs generally support a smaller diversity of mayflies and caddisflies. However, some springs contain insects that are not found anywhere else in the park, such as the caddisflies N. sonora Ross (family Hydroptilidae) and C. angustipennis (Banks) and C. utahensis (Ross) (family Philopotamidae).
The results of this study allow for a better understanding of the regional diversity and distribution of mayflies and caddisflies in the Rio Grande drainage basin. Such information will provide a solid basis for obtaining a better understanding of the structure and functioning of this complex ecosystem. Data collected on mayflies and caddisflies also can be used toward the development of regionally specific rapid bioassessment protocols and indices for estimating ecosystem health. Establishment of monitoring criteria for aquatic systems is an important tool for managing fish and wildlife populations, protecting human health, and maintaining quality of life in response to deterioration in water quality and quantity. The information provided here allows for a better understanding of the diversity and distribution of mayflies and caddisflies in the Big Bend region, but additional research is required to fully assess the threats to their existence, such as land development, impacts of groundwater extraction on the springs, and degraded water quality. Furthermore, research on other groups of aquatic invertebrates also is required to gain a better understanding of the overall structure and function of aquatic ecosystems in this unique region.
"year": 2005,
"sha1": "1eed93f0f490719112e96ff79fc0057fc58b2211",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/5/1/28/18148497/jis5-0028.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1eed93f0f490719112e96ff79fc0057fc58b2211",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
A facile doxorubicin-dichloroacetate conjugate nanomedicine with high drug loading for safe drug delivery
Background: Doxorubicin (DOX) is an effective chemotherapeutic agent, but severe side effects limit its clinical application. Nanoformulations can reduce the toxicity but still have various limitations, such as complexity, low drug-loading capability and excipient-related concerns. Methods: An amphiphilic conjugate, doxorubicin-dichloroacetate, was synthesized and the corresponding nanoparticles were prepared. In vitro cytotoxicity and intracellular uptake assays, in vivo imaging, and studies of antitumor effects and systemic toxicity were carried out to evaluate the therapeutic efficiency against tumors. Results: The doxorubicin-dichloroacetate conjugate can self-assemble into nanoparticles with a small amount of DSPE-PEG2000, leading to high drug loading (71.8%, w/w) and diminished excipient-associated concerns. The nanoparticles exhibited negligible systemic toxicity and a high maximum tolerated dose of 75 mg DOX equiv./kg, which was 15-fold higher than that of free DOX. They also showed good tumor targeting capability and enhanced antitumor efficacy in a murine melanoma model. Conclusion: This work provides a promising strategy to simplify the drug preparation process, increase drug loading content, reduce systemic toxicity and enhance antitumor efficiency.
Introduction
Doxorubicin (DOX) is an effective chemotherapeutic drug for many cancers, such as breast cancer, bladder cancer, Kaposi's sarcoma, lymphoma and acute lymphocytic leukemia. 1 However, its clinical application is limited by acute and chronic toxicities, such as cardiotoxicity, hepatotoxicity, vomiting and nausea, phlebosclerosis and myelosuppression. 2 The dose-dependent and cumulative cardiotoxicity, including cardiomyopathy and congestive heart failure, is grave because it is life threatening, and prevalent because of the organ's greater sensitivity to free radicals induced by DOX. 3,4 Another common form of damage is hepatotoxicity, which was found in 40% of patients during treatment. The serious side effects of DOX lead to a quite narrow therapeutic window and limited clinical application. To diminish or even abolish DOX-induced organ dysfunction, extensive work has been done to develop nanodrug delivery systems, such as micelles, nanoparticles (NPs), hydrogels, liposomes and so on. 5 Nanotechnology has brought profound benefits in reducing side effects and improving therapeutic efficacy for chemotherapeutics. 6 MacKay et al 7 developed polypeptide-DOX NPs with a 4-fold higher maximum tolerated dose (MTD) and impressive therapeutic efficacy against tumors. Some other DOX-loaded NPs were also constructed to achieve reduced adverse effects and enhanced therapeutic efficiency. [8][9][10][11][12] These nanomedicines demonstrated diverse advantages, such as enhanced drug stability and intracellular uptake, prolonged circulation time, decreased systemic toxicity and improved antitumor efficacy, by virtue of multifunctional materials as well as the enhanced permeability and retention (EPR) effect. However, the content of excipient in most nanomedicines can reach as high as 90%, resulting in low drug loading content (DLC) (usually less than 10%, w/w). 13 Such a large amount of excipient can bring extra burden to patients, including high cost, biodegradability concerns, systemic toxicity and metabolism/excretion problems. [14][15][16] Although thousands of nanomedicines have been developed, only a few of them have been translated into the clinic. This could be attributed to the complicated formulation design and construction, high cost and excipient content, and potential side effects of nanoformulations. [17][18][19] As the most successful nanoformulations, Doxil® and Abraxane® have prospered owing to their simplicity. 20 Therefore, it is highly desirable to develop a simple nanomedicine with facile operability, high DLC, low side effects and high therapeutic efficacy.
In the past few years, self-assembled nanomedicines exploiting unique π-π stacking and hydrophobic interactions have attracted attention from researchers. [21][22][23][24] They are prepared through a simple nanoprecipitation method using an amphiphilic drug, which is synthesized by conjugating a hydrophobic drug with a small molecule or hydrophilic drug. This kind of nanomedicine has exhibited many advantages. 25 First, the simple preparation procedure makes it easy to translate into large-scale manufacture. Second, the self-assembly ability can greatly decrease the usage of excipient, resulting in high DLC and diminished excipient-associated concerns. Furthermore, it also shows good stability under different conditions. DOX, a hydrophobic drug with a rich aromatic structure, can facilitate weak intermolecular interactions, including π-π stacking and hydrophobic interactions, which are fundamental for self-assembled nanomedicine. However, DOX cannot self-assemble in water due to its strong lipophilicity. We therefore tried to conjugate DOX with a hydrophilic drug to promote the capability of self-assembly. Dichloroacetate (DCA) is a small hydrophilic drug that is used to treat congenital lactic acidosis in children. 26 It can also reverse the Warburg effect by inhibiting the activity of pyruvate dehydrogenase kinase in many cancer cells. 27 However, the dosage of DCA needed to realize tumor inhibition is as high as 100 mg/kg, 28 which is much higher than that of DOX (5 mg/kg). Herein, we constructed an amphiphilic DOX-DCA conjugate via the reaction of the amino and carboxyl groups in DOX and DCA, respectively. DOX-DCA can self-assemble into NPs with a small amount of the PEGylated lipid DSPE-PEG2000. This kind of NP exhibited many advantages: 1) facile fabrication; 2) high DLC; 3) reduced excipient-related concerns; 4) low adverse effects and 5) enhanced therapeutic response. These findings could give insights into promoting large-scale manufacture and clinical translation of nanomedicines.
Fourier transform infrared spectroscopy (FTIR) was performed on a Bruker VERTEX 70 spectrophotometer (Ettlingen, Germany). The HPLC spectrogram was recorded on an UltiMate 3000 Thermo Fisher Scientific system (Waltham, MA, USA) with a fluorescence detector and a Restek Viva C18 column (150 × 4.6 mm, 5 μm) at a flow rate of 1 mL/min. The injection volume was 10 μL, and the temperature of the column oven was set at 30°C. The identification and purity measurement of DOX and DOX-DCA were accomplished using a gradient elution of 25%-60% solvent B over 10 min, where solvent A was water with 0.1% trifluoroacetic acid and solvent B was acetonitrile.
Synthesis of DOX-DCA
To synthesize DOX-DCA, DOX·HCl (50 mg, 0.086 mmol) and triethylamine (17.5 μL, 0.129 mmol) were mixed and stirred in anhydrous DMF (5 mL) at room temperature for 3 h. Then dichloroacetic acid anhydride (19.7 μL, 0.129 mmol) was added. After stirring for 24 h, DMF was removed by rotary evaporation under vacuum to yield a red residue. Afterward, dichloromethane was added and unreacted DOX was removed by repetitive washing with water (3 × 10 mL). The final product was isolated using silica gel column chromatography (dichloromethane:methanol, 20:1). The whole experimental process was performed in the dark under normal atmospheric conditions. Yield: 33.2 mg (59%).
Preparation and characterization of DOX-DCA nanoparticles (DOX-DCA NPs)
DOX-DCA NPs were prepared via a simple nanoprecipitation method. Briefly, 1 mg DOX-DCA and 0.3 mg DSPE-PEG2000 (w/w, 1/0.3) were mixed together in 50 μL of dimethyl sulfoxide (DMSO) and added dropwise into 1 mL of distilled water with magnetic stirring at 100 rpm for 5 min. Thereafter, the solution was dialyzed against water to remove DMSO (MWCO 1 kDa). For animal administration, the NPs were concentrated with a centrifugal filter (Ultracel YM-50, MWCO 50 kDa, Millipore, Ireland). DOX NPs were constructed by the same nanoprecipitation method: 1 mg desalinated DOX and 0.3 mg DSPE-PEG2000 (w/w, 1/0.3) were mixed for NP preparation. For the preparation of DIR-labeled NPs, DIR and DOX-DCA were mixed at a ratio of 1:20 (w/w) and processed by the same method.
The morphology of DOX-DCA NPs was observed by transmission electron microscopy (TEM, JEM-1230; JEOL, Tokyo, Japan) via the negative staining method. The particle size, size distribution and zeta potential were measured by dynamic light scattering (DLS; Zeta Plus, Brookhaven, USA). The stability of DOX-DCA NPs was monitored for 7 days by DLS in phosphate-buffered saline (PBS) at room temperature. The drug encapsulation efficiency (EE) and DLC were calculated according to the following formulas: EE (%) = drug mass in NPs/drug feeding amount × 100%; DLC (%) = drug mass in NPs/total mass of NPs × 100%.
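A minimal sketch of the two formulas above in code form; the input masses are placeholders chosen only so the outputs match the EE and DLC values reported later, and the total NP mass is assumed here to be drug plus excipient:

```python
def encapsulation_efficiency(drug_in_nps_mg, drug_fed_mg):
    """EE (%) = drug mass in NPs / drug feeding amount x 100%."""
    return drug_in_nps_mg / drug_fed_mg * 100

def drug_loading_content(drug_in_nps_mg, total_np_mass_mg):
    """DLC (%) = drug mass in NPs / total mass of NPs x 100%."""
    return drug_in_nps_mg / total_np_mass_mg * 100

# Illustrative: 1 mg DOX-DCA fed with 0.3 mg DSPE-PEG2000; assume
# 0.765 mg of the conjugate ends up in the nanoparticles.
drug_in_nps = 0.765
print(round(encapsulation_efficiency(drug_in_nps, 1.0), 1))            # 76.5
print(round(drug_loading_content(drug_in_nps, drug_in_nps + 0.3), 1))  # 71.8
```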
Intracellular uptake and in vitro cellular pharmacokinetics
Intracellular uptake was examined in B16F10 melanoma cells using confocal microscopy (710META, Zeiss, Oberkochen, Germany). B16F10 cells were harvested and seeded into 24-well plates (1 × 10⁵ per well) in which round glass coverslips had been placed. After overnight incubation, the medium was changed to DOX, free DOX-DCA or DOX-DCA NPs at an equivalent (equiv.) concentration of DOX (3 μg/mL). After 4 h and 24 h, B16F10 cells were washed with precooled PBS (1 mL) three times and then fixed with 4% paraformaldehyde at room temperature for 15 min. After repeated washing with PBS, 5 μg/mL DAPI was added and the cells were incubated at room temperature for 8 min. Finally, the coverslip was inverted onto a glass slide for confocal microscopy imaging. For in vitro cellular pharmacokinetics, B16F10 cells were seeded into 6-well plates (2 × 10⁵ per well) and incubated with DOX-DCA NPs for 4 h, 8 h, 12 h and 24 h, respectively. The cells were lysed with lysis buffer containing phenylmethanesulfonyl fluoride. DOX and DOX-DCA concentrations were determined by HPLC, and the relative cellular protein contents were measured with a BCA assay kit.
Evaluation of cell viability by MTT
The in vitro antitumor activity of DOX-DCA NPs was measured using the MTT test. Briefly, B16F10 cells were harvested and seeded into 96-well plates (5 × 10³ per well). After overnight incubation, the medium was replaced with different concentrations of DOX, free DOX-DCA or DOX-DCA NPs. After 24 h, 10 μL of MTT (5 mg/mL) was added to each well, followed by 4 h of incubation.
In vivo antitumor effect
The in vivo therapeutic effect was evaluated in two tumor models, H22 sarcoma and B16F10 melanoma. In total, 5 × 10⁶ H22 cells were inoculated subcutaneously at the right flank of Kunming mice (20-22 g). Tumor volume was calculated using the following equation: tumor volume = length × width² / 2. The mice were randomly divided into five groups: PBS, DOX, DOX + DCA, DOX-DCA and DOX-DCA NPs. These mice were injected via the tail vein every 2 days for four injections at an equiv. dosage of 5 mg/kg DOX. Tumor volume and body weight were monitored every day. For the B16F10 melanoma model, 200 μL containing 5 × 10⁴ B16F10 cells was subcutaneously injected at the right flank of C57/BL6 mice (16-18 g). When the tumor size was about 30-50 mm³, the mice were randomly divided into four groups. PBS, DOX 5 mg/kg, 5 mg DOX-DCA NPs equiv./kg or 15 mg DOX-DCA NPs equiv./kg was intravenously administered every 2 days for three injections. The tumor size and body weight change were recorded every other day. After 11 days, all mice were sacrificed, and all tumors were harvested for imaging and weighing.
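As a worked example of the tumor volume formula used above (a sketch with made-up caliper readings):

```python
def tumor_volume(length_mm, width_mm):
    """Tumor volume = length x width^2 / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2

# A hypothetical 6 mm x 4 mm tumor:
print(tumor_volume(6, 4))  # 48.0 mm^3, inside the 30-50 mm^3 enrollment window
```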
MTD and systemic toxicity study of DOX-DCA NPs
The systemic toxicity was evaluated by determining the MTD in tumor-free C57/BL6 mice. The mice were intravenously administered a single injection of PBS; 5, 10 or 15 mg/kg DOX; or 25, 50 or 75 mg DOX-DCA NPs equiv./kg on day 0. The body weight, survival and physical state of each mouse were monitored every day for the following 10 days. The MTD was defined as the maximum dosage of DOX or DOX-DCA NPs that led to less than 15% body weight loss and no other obvious toxicities during the 10 days. 8 At day 10, all mice were sacrificed. The serum and main organs were harvested for further toxicity studies. The activities of lactate dehydrogenase (LDH), aspartate transaminase (AST) and alanine transaminase (ALT) and the levels of blood urea nitrogen (BUN) and creatinine were assayed as indicators of cardiac, hepatic and renal function. Pathological studies were carried out by H&E staining analysis of the major organs.
Statistical analysis
The statistical analyses were conducted using GraphPad Prism 7.00 software. The significance level between two groups was calculated using a two-tailed unpaired t-test. The significance level among multiple groups was identified by one-way ANOVA with Tukey's post hoc test. A p-value < 0.05 was considered to indicate a significant difference.
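A minimal sketch of this analysis pipeline using SciPy and statsmodels, with placeholder measurements standing in for the study's data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder tumor weights (g); not the study's raw data.
pbs = np.array([1.9, 2.1, 2.0, 2.2, 1.8])
dox = np.array([0.6, 0.5, 0.7, 0.6, 0.5])
nps = np.array([0.4, 0.3, 0.4, 0.5, 0.3])

# Two groups: two-tailed unpaired t-test.
t_stat, p_two = stats.ttest_ind(pbs, dox)
print(f"t = {t_stat:.2f}, p = {p_two:.4f}")

# Multiple groups: one-way ANOVA, then Tukey's post hoc comparisons.
f_stat, p_anova = stats.f_oneway(pbs, dox, nps)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
values = np.concatenate([pbs, dox, nps])
groups = ["PBS"] * 5 + ["DOX"] * 5 + ["NPs"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```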
Results and discussion
Synthesis and characterization of DOX-DCA
The synthesis route of DOX-DCA is shown in Figure 1A. Briefly, DOX-DCA was synthesized by directly conjugating DOX with dichloroacetic acid anhydride. The chemical structure of DOX-DCA was characterized by ¹H-NMR, HPLC, ESI-MS and FTIR (Figure 1B-E). As shown in Figure 1B, the successful synthesis of DOX-DCA was verified by the generation of the amide bond (a) and the signal from DCA (b), demonstrated by the proton signals at 8.2-8.4 ppm and 6.4-6.6 ppm, respectively. All the proton signals of DOX remained in the spectrum of DOX-DCA with little chemical shift. The proton signal at 5.7-5.8 ppm came from a trace amount of dichloromethane. HPLC chromatograms showed that DOX-DCA had a distinctly different retention time (9.4 min) compared to DOX (5.4 min) (Figure 1C), suggesting the successful synthesis of DOX-DCA. As determined by HPLC analysis, the purity of DOX-DCA was 98.5%. Moreover, the m/z of DOX-DCA was 652.3 Da in negative mode, in accordance with the calculated value of 652.1 Da (Figure 1D). In the FTIR spectrum of DOX-DCA, the peak at 1,687 cm⁻¹ further confirmed the generation of the amide bond (Figure 1E). All results demonstrated the successful synthesis of DOX-DCA with high purity.
DOX-DCA can self-assemble into stable and uniform NPs with a small amount of DSPE-PEG2000. In sharp contrast, when the same method was performed using free DOX, a large amount of precipitate appeared. This phenomenon indicated that DCA played an important role in the process of NP formation. It may be attributed to the amphiphilic structure of DOX-DCA, which promoted the assembly of DOX into NPs. Moreover, the PEGylated lipid introduced into the formulation would facilitate the stability and in vivo circulation of the NPs but had minor influence on the DLC. As shown in Figure 2A and B, the NPs possessed near-spherical morphology with an average particle size of approximately 56 nm, which was close to the hydrodynamic size (55.8 nm). DLS analysis indicated that the surface charge of the NPs was -28.6 mV. The appropriate size and negative surface charge of the NPs make them suitable for long blood circulation and tumor accumulation via the EPR effect. 6 Furthermore, the NPs were homogeneous with a narrow distribution, as evidenced by the low polydispersity index (0.198). Moreover, the preparation method of DOX-DCA NPs is simple with good reproducibility, suggesting the possibility of large-scale manufacture. The drug concentration could be as high as 15.3 mg/mL after the NPs were concentrated. More importantly, DOX-DCA NPs possessed a quite high EE (76.5% ± 6.5%) and DLC (71.8% ± 1.7%), whereas the NPs constructed with DOX demonstrated a relatively low EE (6.4% ± 1.1%) as well as DLC (17.6% ± 2.4%) (Figure 2C). The self-assembly ability of DOX-DCA and the small amount of DSPE-PEG2000 resulted in the high drug/carrier ratio and DLC, which showed superiority over most conventional nanoformulations (usually less than 10%). 13 High DLC is essential for low excipient-associated toxicity and biodegradability concerns as well as a high therapeutic response. What is more, the NPs showed very high stability in PBS with a small average size change over a period of 7 days (Figure 2D). TEM analysis was also used to verify the stability. Figure 2E presents the TEM images of DOX-DCA NPs after 7 days at room temperature, 24 h at 37°C and 24 h in acetate buffer (pH 5.0). DOX-DCA NPs demonstrated no significant difference in morphology and good stability against time, temperature and acidic conditions. Compared with conventional nanomedicines, DOX-DCA NPs would attract much attention for their simplicity, reproducibility, high DLC, reduced excipient-related toxicity and good stability.
In vitro cellular uptake and cytotoxicity assay
The cellular uptake and localization of free DOX, DOX-DCA and DOX-DCA NPs were investigated in B16F10 tumor cells at 4 h and 24 h using confocal microscopy. As seen in Figure 3A, the red fluorescence of DOX and the blue fluorescence of DAPI overlapped well in the free DOX group after 4 h of incubation, which was attributed to the high affinity of DOX for nucleic acids. On the contrary, most of the free DOX-DCA localized in the cell cytoplasm at 4 h and subsequently diffused into the cell nucleus at 24 h. DOX-DCA NPs exhibited similar but enhanced intracellular uptake compared with free DOX-DCA at both 4 h and 24 h. After DOX-DCA NPs were internalized into cells by the endocytosis pathway and escaped to the cytoplasm, DOX-DCA was released and showed gradual accumulation in the nucleus to exert its cytotoxic effect. Similar results can also be found in the internalization and intracellular drug delivery of other DOX-loaded NPs. 7,16 In order to investigate the fate of DOX-DCA NPs after uptake, a cellular pharmacokinetics study was performed. As shown in Figure 3B, the concentrations of DOX-DCA and dissociated DOX increased with time. DOX can be released from DOX-DCA after cell uptake under acidic and esterase conditions. We further examined the in vitro antitumor effects of DOX, free DOX-DCA and DOX-DCA NPs by MTT assay. Free DOX showed the best antitumor efficiency against cancer cells, followed by DOX-DCA NPs and then free DOX-DCA (Figure 3C). The results were consistent with the tendency of intracellular uptake.
In vivo imaging and biodistribution study
To study the in vivo tumor targeting effect and biodistribution of the NPs, B16F10 tumor-bearing mice were intravenously injected with free DIR or DIR-labeled DOX-DCA NPs (DIR@DOX-DCA NPs). As shown in Figure 4A, free DIR was quickly eliminated from the body by 4 h, and no visible DIR signal was found in the tumor. On the contrary, a strong and durable signal in the tumor was found in DIR@DOX-DCA NP-treated mice up to 48 h. At 8 h, the fluorescence signal reached its peak. The prolonged circulation may be attributed to the small particle size, PEGylation and negative charge of DOX-DCA NPs. It appeared that DOX-DCA NPs possessed significant tumor targeting and retention capabilities.
We further investigated the ex vivo biodistribution of free DIR and DIR@DOX-DCA NPs after harvesting the main organs and tumors. DIR@DOX-DCA NPs showed an absolute advantage over free DIR in tumor targeting, as evidenced by the fluorescence images (Figure 4B) and semi-quantitative results (Figure 4C). Similar results can also be found in the investigation of DIR-labeled PTX-S-S-OA/TPGS2k NPs. 15 The tumor fluorescence signal of DIR@DOX-DCA NPs was 47.2-, 48.2- and 39.4-fold that of free DIR at 8 h, 24 h and 48 h, respectively. More importantly, the fluorescence in excised tumors lasted as long as 48 h. Higher drug targeting and retention abilities can result in a better therapeutic response. These findings suggested that DOX-DCA NPs possess good systemic circulation, tumor targeting and retention abilities.
In vivo antitumor efficiency
We explored the antitumor efficiency of DOX-DCA NPs in two tumor models, H22 sarcoma and B16F10 melanoma. Figure 5A shows the tumor volume growth under treatment with PBS, DOX, DOX + DCA, free DOX-DCA and DOX-DCA NPs, respectively. Treatment with the admixture of free DOX and DCA (DOX + DCA), free DOX-DCA or DOX-DCA NPs showed tumor inhibition capability comparable to that of DOX (Figure 5A-C). The tumor inhibitory rates of DOX, DOX + DCA, free DOX-DCA and DOX-DCA NPs were 63.2%, 51.5%, 62.4% and 51.0%, respectively. Notably, in contrast to the stably increasing body weight in the DOX-DCA NP group, there was obvious body weight loss in mice receiving DOX or DOX + DCA (Figure 5D), which may have resulted from the systemic toxicity of free DOX. It appeared that the conjugation of DOX with DCA can reduce the systemic toxicity of free DOX. We further investigated the antitumor efficiency in the aggressive murine melanoma B16F10 model. To improve the therapeutic effect, we increased the dosage of DOX-DCA NPs to 15 mg DOX/kg, whereas the dosage of free DOX was restricted to 5 mg/kg due to the systemic toxicity mentioned earlier. As shown in Figure 6A and B, the tumor-bearing mice administered PBS exhibited rapid tumor volume growth. By contrast, the other groups showed delayed tumor growth. Notably, DOX-DCA NPs at 15 mg DOX/kg demonstrated the best tumor inhibition. The tendency of tumor weight was consistent with that of tumor volume (Figure 6C). The tumor inhibitory rates of DOX, DOX-DCA NPs at 5 mg DOX/kg and DOX-DCA NPs at 15 mg DOX/kg were 72.7%, 62.3% and 83.0%, respectively. The decreased therapeutic response of DOX-DCA NPs at the equiv. DOX dose could be attributed to inferior cellular uptake and restricted cytotoxicity. As an indicator of systemic toxicity, the body weight was also monitored every day. Compared with the PBS group, mice receiving DOX suffered from significant body weight loss during the treatment, similar to the result in the H22 sarcoma model (Figure 6D). However, the body weight of mice treated with PBS, DOX-DCA NPs at 5 mg DOX/kg or even DOX-DCA NPs at 15 mg DOX/kg showed an increasing tendency, further confirming the reduced systemic toxicity of DOX-DCA NPs in comparison to free DOX. These results suggested that DOX-DCA NPs demonstrated enhanced antitumor efficiency and a reduced toxicity profile, making them a safe and effective nanomedicine.
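The inhibitory rates quoted here follow the usual definition relative to the control group; below is a sketch of that calculation with illustrative mean tumor weights (chosen only so the outputs match two of the reported rates, not taken from the raw data):

```python
def tumor_inhibitory_rate(mean_treated_g, mean_control_g):
    """Common definition: TIR (%) = (1 - treated / control) x 100%."""
    return (1 - mean_treated_g / mean_control_g) * 100

control = 2.0  # hypothetical mean tumor weight of the PBS group (g)
treatments = [("DOX 5 mg/kg", 0.546),
              ("DOX-DCA NPs 15 mg DOX equiv./kg", 0.340)]
for name, w in treatments:
    print(name, round(tumor_inhibitory_rate(w, control), 1), "%")  # 72.7, 83.0
```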
MTD and systemic toxicity study of DOX-DCA NPs
DOX has been reported to cause serious cardiotoxicity and hepatotoxicity, leading to a narrow therapeutic window. 2,3 To develop the nanoformulation, it is essential to study the systemic toxicity of DOX-DCA NPs. Therefore, an MTD study was first performed to investigate the systemic toxicity and safety of the developed DOX-DCA NPs. The MTD was determined on the basis that all animals survived with a body weight loss of less than 15% and no other obvious toxicities during 10 days. 8 In the animal tests above, DOX-DCA NPs had exhibited significant safety in comparison to free DOX and DOX + DCA at the same dosage of 5 mg/kg (Figure 5D). Moreover, even at a DOX equiv. dosage of 15 mg/kg, the nanoformulation still demonstrated improved safety (Figure 6D). Therefore, the dosages of free DOX were selected as 5 mg/kg, 10 mg/kg and 15 mg/kg for the MTD study, whereas the dosages of DOX-DCA NPs were set as 25, 50 and 75 mg DOX equiv./kg. As expected, DOX-DCA NPs demonstrated extremely low systemic toxicity. Compared with free DOX, DOX-DCA NPs caused negligible body weight loss (Figure 7A). The maximum body weight losses for DOX 5 mg/kg-, DOX 10 mg/kg- and DOX 15 mg/kg-treated mice were 4.2%, 17.0% and 24.0%, respectively, whereas the loss was only 1.8% for DOX-DCA NP-treated mice at the dosage of 75 mg DOX equiv./kg. No body weight loss was found in the control, 25 mg DOX equiv./kg and 50 mg DOX equiv./kg groups. Interestingly, the body weight changes were almost the same in the control and 25 mg DOX equiv./kg groups, indicating the safe drug delivery of DOX-DCA NPs. Induced by the severe systemic toxicity, one out of six mice died in the 15 mg/kg DOX-treated group (Figure 7B). The relative body weights of the remaining five mice ranged from 71.1% to 74.5% and showed no sign of increasing during the following days. Moreover, their physiological activity was sluggish, and their body temperature decreased. The MTD value of DOX-DCA NPs was 75 mg DOX equiv./kg, which was 15-fold higher than that of free DOX (5 mg/kg). The MTD of DOX-DCA NPs was higher than that of most reported nanomedicines. For example, the MTDs of PolyMPC-DOX prodrugs, 29 SQ-DOX NAs 30 and DOX-loaded HSA NPs 31 were 10 mg/kg, 20 mg/kg and 30 mg/kg, respectively. The significantly enhanced MTD indicated that DOX-DCA NPs can immensely minimize the systemic toxicity of DOX, and thus widen the therapeutic window for promising cancer treatment.
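The MTD criterion above reduces to a simple threshold check on serial body weights; a sketch follows (illustrative weights only, and checking only the weight-loss criterion, not survival or other toxicities):

```python
def within_weight_criterion(baseline_g, daily_weights_g, max_loss=0.15):
    """True if body weight never drops below 85% of baseline
    during the 10-day observation window."""
    floor = baseline_g * (1 - max_loss)
    return all(w >= floor for w in daily_weights_g)

print(within_weight_criterion(20.0, [19.8, 19.5, 19.0, 19.2, 19.6]))  # True
print(within_weight_criterion(20.0, [19.0, 18.0, 17.2, 16.8, 16.5]))  # False
```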
At day 10, all surviving mice were sacrificed and the major organs were excised. Blood biochemical parameter analysis (Figure 8A-C) and pathological studies (Figure 8D) were carried out to further investigate the systemic toxicity. Cardiotoxicity can be life threatening during the use of DOX for chemotherapy and should be the foremost indicator investigated. As the biggest site for drug accumulation and metabolism, the liver can also reflect the pathological state. Administration of DOX at a dose of 15 mg/kg did affect the blood levels of LDH, ALT and AST (Figure 8A-C), indicating the occurrence of cardiotoxicity and hepatotoxicity. Minor increases in LDH, ALT and AST were also found in DOX 10 mg/kg-treated mice compared with the control. On the contrary, the levels of LDH, ALT and AST were within the normal range in DOX-DCA NP-treated mice even at the highest dose, 75 mg DOX equiv./kg, suggesting negligible cardiotoxicity and hepatotoxicity. Likewise, H&E staining analysis revealed that cardiac injury (vacuolar degeneration in cardiomyocytes, blue arrow) and hepatic damage (a large amount of inflammatory cell infiltration and hepatocyte necrosis, yellow arrow) were caused by 10 mg/kg and 15 mg/kg DOX treatment (Figure 8D). No obvious damage to the heart or liver was found in DOX-DCA NP-treated mice. It is worth noting that DOX-DCA NPs can immensely mitigate the hepatotoxicity and cardiotoxicity compared with DOX. Moreover, there were no visible toxicities in the kidney, lung or spleen in any group, as evidenced by the BUN and creatinine levels (data not shown) and H&E staining analysis (Figure 8D). These results suggested that systemic toxicity can be greatly reduced by the use of DOX-DCA NPs, indicating the feasibility of this kind of self-assembled nanoformulation for clinical translation.
Conclusion
In summary, we developed a new DOX derivative, DOX-DCA, by directly conjugating DCA with DOX. It has high purity and can easily self-assemble into NPs with a small amount of the PEGylated lipid DSPE-PEG2000. The DOX-DCA nanoformulation exhibited a high DLC of 71.8%, which can greatly decrease the side effects caused by excipients. After systemic administration, the NPs demonstrated good tumor targeting and retention capability as well as good antitumor efficacy with a high tumor inhibitory rate in a murine melanoma model. It is noteworthy that the MTD of DOX-DCA NPs was 15 times higher than that of free DOX. No obvious cardiotoxicity or hepatotoxicity was found in DOX-DCA NP-treated mice. The DOX-DCA nanoformulation can reduce the systemic toxicity and widen the therapeutic window. These results suggest the clinical applicability of the DOX-DCA nanoformulation for its simplicity, reproducibility, high drug loading capability, reduced side effects and enhanced therapeutic effect.
"year": 2018,
"sha1": "8237015493374226d809b045f53293f8a572f081",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=40816",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6eb933335d88dd807a71b260a381d7b5f4d4ebf9",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
DIFFERENTIAL OBJECT MARKING AND INFORMATION STRUCTURE: ON THE FUNCTION OF TWO DIFFERENT PRONOMINAL ACCUSATIVES IN KOMI AND KHANTY DIALECTS
The present contribution calls attention to a marginal but interesting phenomenon of variation in grammar, namely the employment of two different accusative markings for pronominal objects encountered (i) in dialect texts from the Komi varieties of Upper Vym' and Luza, and (ii) in varieties of Kazym-Khanty, i.e. in two different branches of Uralic (Permic and Ugric). Based on contextual observations, an explanation in terms of information structure is proposed: as will be argued, in both language varieties, the additional accusative forms of pronominal object expressions signal their focality or non-focality, respectively. The study contributes to the theory of differential object marking by establishing focality as one of its parameters.
1.1. In the linguistic literature on differential object marking it is generally assumed that pronominal object expressions, especially those referring to speech act participants, are highly likely to be object marked because they are very prominent object expressions (Bossong 1998, Lazard 2001, Aissen 2003). Their prominency is due to their uppermost position on the scales of animacy and/or definiteness as in (1a, b). The correlation between the prominency of an object expression and its probability of being object marked is understood as due to the fact that animacy and definiteness are considered prototypical subject properties. According to the so-called markedness reversal, an object expression which has these properties must be formally distinguished from the subject by an object marker in order to avoid misinterpretations.
(1) Prominency scales (e.g. Aissen 2003: 437, 442)
a. Animacy scale: HUMAN > ANIMATE > INANIMATE
b. Definiteness scale: PERSONAL PRONOUN > PROPER NAME > DEFINITE NP > INDEFINITE SPECIFIC NP > NON-SPECIFIC NP
Among pronominal objects, 1st and 2nd person pronouns are most likely to be object marked because they refer to human speech act participants. With 3rd person pronouns there is often a distinction between animate and inanimate pronouns, e.g. he, she vs. it, Finnish hän vs. se, Kazym Khanty λŭw '(s)he' vs. tăm 'this; it'. Again, according to the prominency scales, object marking is more probable with animate 3rd person pronouns than with inanimate ones, and it is also more probable for personal pronouns than for demonstratives. The prominency parameter can be successfully applied to account for several differential object marking patterns concerning pronominal object expressions in Uralic languages. 1 E.g., in Khanty, object marking occurs exclusively with personal pronouns, 2 as illustrated in (2).
(2) Northern Khanty (Nikolaeva et al. 1993: 132)
Ma Petra-Ø ~ lŭw-el (*lŭw) reskə-s-em
I Peter (s)he-ACC ([s]he.NOM) hit-PST-SBJ1SG.OBJSG
'I hit Peter ~ him.'
In other Uralic languages object marking is not restricted to personal pronouns but applies to all nouns. Object marking might be generalized (i.e., non-differential) as in Hungarian and Mari, or it may work according to the prominency parameter: only definite objects are in the accusative case, indefinite objects are not. Since personal and demonstrative pronouns are inherently definite expressions, they are obligatorily object marked; cf., e.g., the data from Kamas (Samoyed branch) in (3).
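As an illustrative aside (a toy sketch, not part of the original argument), the two patterns just described can be modeled as different cutoffs on the definiteness scale in (1b): Khanty marks only personal pronouns, while Kamas-type languages mark everything down to definite NPs. The cutoff settings below are illustrative readings of the text, not complete grammars:

```python
# Toy model of a prominency-based DOM rule: an object is accusative-marked
# iff its position on the definiteness scale is at or above a
# language-specific cutoff.

SCALE = ["personal pronoun", "proper name", "definite NP",
         "indefinite specific NP", "non-specific NP"]

def object_marked(obj_type, cutoff):
    """True if obj_type outranks (or equals) the cutoff on the scale."""
    return SCALE.index(obj_type) <= SCALE.index(cutoff)

# Khanty-type, as in (2): only personal pronouns take the accusative.
print(object_marked("proper name", cutoff="personal pronoun"))  # False
# Kamas/Komi-type: all definite objects, pronouns included, are marked.
print(object_marked("definite NP", cutoff="definite NP"))       # True
```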
(4) Komi-Zyrian (elicited data)
a. Me ľubit-a Syktyvkar-ös ~ Syktyvkar.
I like-PRS1SG Syktyvkar-ACC ~ Syktyvkar.NOM
'I like Syktyvkar.'
b. Me ľubit-a Bobyk-ös (*Bobyk).
I like-PRS1SG Bobyk-ACC (Bobyk.NOM)
'I like Bobyk.'
c. Me ľubit-a tenö (*te)
I like-PRS1SG you.ACC (you.NOM)
'I like you.'
Non-marking of a direct object may also be due to a specific syntactic context. In Finnish a direct object is in the nominative instead of the genitive-accusative if the verbal predicate of the sentence is an imperative, an impersonal passive, or an infinitive form. In this context, again, the nominative object occurs only with nominal object expressions, whereas personal pronouns are obligatorily in the accusative; cf. (5).
(5) Finnish
Vieras ~ hän-et (*hän) tuot-iin huonee-seen.
guest.NOM (s)he-ACC ([s]he.NOM) bring-PASS room-ILL
'The guest ~ (s)he was led into the room.'
All Uralic patterns mentioned so far are in full accordance with the prediction made by the prominency parameter: in a language which marks objects differentially, personal pronouns constitute a class of object expressions which is obligatorily object marked. Still, not all patterns of differential object marking are explainable in terms of the prominency parameter, as is shown by the following paragraphs on aspect and information structure. Moreover, there is one Uralic language in which, contrary to the above-stated obligatoriness of pronominal object marking, only nouns are object marked but personal pronouns are not. In Nganasan only nouns have a distinct accusative-genitive case form, as e.g. in (6), whereas a pronoun such as tənə 'you (sing.)' has the same form when subject, as in (7a), or direct object, as in (7b). The Nganasan data thus show that the correlation of high prominency and obligatory object marking reflects a tendency rather than a universal law.
you well know-PRS-1SG
'I know you well.'
1.2. A different object marking parameter is aspect. Among Uralic languages the most prominent example is the partitive object of the Finnic languages. Traditionally, the meaning of the partitive is twofold: (i) it quantifies nominal expressions as partial (in opposition to total nominative subjects and total genitive objects); (ii) it quantifies predicates as imperfective/irresultative (cf. Denison 1957, Kont 1963, Larsson 1983, Kiparsky 1998, Tveite 2004: 17-20, Huumo 2010). Both partitive meanings, partiality and imperfectivity, have been united under the meaning of unboundedness. Following Kiparsky's (1998) analysis of Finnish, a partitive object is part of an unbounded situation whereas a genitive/accusative object is part of a bounded situation. A Finnish pronominal object is illustrated in (8). In a bounded situation, with no restrictions concerning affectedness, as in (8a), the object is in the accusative case. In unbounded situations the object is in the partitive case. Unboundedness can either result from partial affectedness as in (8b), or it is a general property of negated sentences as in (8c).
see-1SG you-PTV
'I'm seeing you, I see a bit of you.'
c. E-n näe sinu-a.
not-1SG see.CN you-PTV
'I don't see you.'
In discussing problems of markedness reversal, Naess (2004) unifies the notions of prominency and partial affectedness into a single DOM parameter which she calls degree of affectedness. The Finnish partitive has often been connected with the prominency notion of indefiniteness, and the accusative with definiteness (e.g. Larsson 1983, Pusztay 1975: 360, Krámský 1972). Still, examples like (8b,c), in which a definite object expression is marked with partitive case despite its high grade of prominency, show that aspect and prominency work essentially independently. In addition, there is a difference between the two object marking parameters concerning the number of cases involved: with prominency the opposition is one between an overt case and zero. With aspect (or boundedness) it is one between two overt case markers (cf. Aissen 2003: 436, fn. 3).
1.3. Neither prominency nor aspect can be responsible for the following patterns of pronominal differential object marking found in dialects of Komi (Permic) and Khanty (Ugric). Generally, in these languages object marking with personal pronouns is obligatory (see 1.1 above). In addition, as dialectal phenomena, we find two different pronominal accusative forms, a primary common form (ACC1) and a secondary dialect-specific form (ACC2). For instance, the Komi 1st person pronoun me 'I' has the accusative form menö (ACC1), as in (9a) and (10a), in all dialects, but in Upper Vym' and in Luza this form alternates with a longer accusative form (ACC2): menöly in Vym', as in (9b), and menölö in Luza, as in (10b). Similarly, in Kazym Khanty ma 'I' has a shorter accusative form mănăt, as in (11a), and a longer form, mănăttĭ, as in (11b).
(11) Kazym Khanty (Koškareva and Solovar 2004: 279-280)
a. Mănăt nux kŭrit-e … śos-n!
me.ACC1 up wake-IMP.SBJ2SG.OBJSG clock-LOC
'Wake me up at … o'clock!'
b. Mănăttĭ nux kŭrit-e!
me.ACC2 up wake-IMP.SBJ2SG.OBJSG
'Wake me up!'
The parallel use of two different pronominal accusative case forms as in (9)-(11) has received comparably little attention in the literature. The works that do exist do not sufficiently explain their different functions (cf. for Vym' Žilina 1998: 57-58, 94-108, Ljašev 1975: 92-93, Ljašev 1977, Baker 1985: 202-221, for Luza Žilina 1985: 62-63, and for Kazym Koškareva 2001a, 2001b, 2002). What seems clear is that the prominency parameter cannot be applied to account for variations of the type menö ~ menöly in (9), or mănăt ~ mănăttĭ 'me' in (11): different forms of the 1st person pronoun do not differ in degrees of animacy or definiteness. And, as the examples cited show, the different object forms are not due to different verb semantics. Less obvious may be the irrelevance of aspect, or degree of affectedness. It could be possible that different aspectual readings are achieved by changing the form of the object. Such a pattern, however, is known neither in Komi nor in Khanty. Perhaps with the exception of the question in (10a), there is also no reason to look for different degrees of affectedness. Therefore, another parameter has to be identified. Such a parameter may be found in the domain of information structure. Lazard (2001: 878-879) explicitly lists thematicity (~ topicality) of the object as a relevant factor for object marking in Persian, Romance, and other languages, and also object rhematicity (~ focality) in Badaga, Arabic and others.
In Northern Khanty, as Nikolaeva (1999, 2001) has demonstrated, object agreement is triggered exclusively by the secondary topic status of the object. And finally, Baker (1985: 212-215) assumed that topicality may be a relevant factor for the Vym' and Luza object marking patterns as in (9b) and (10b).
In the following section 2 it is specified what kind of impact information structural notions may have on object marking. As we argue, the form of the object may depend on its inclusion or non-inclusion in the focus of an utterance. In sections 3 and 4 the variation exemplified in (9)-(11) is treated in detail and explained in terms of focality. Conclusions are presented in section 5. The data come from Komi and Khanty text publications as well as from unpublished archive material collected at the Komi Research Centre in Syktyvkar in 2007. The main purpose of the paper is to offer an explanation for a puzzling grammatical variation encountered in dialect texts. The discussion has to be based on these data.
Information structure as a parameter of differential object marking
The basic assumption which underlies the following explanations is that the surface form of a direct object expression may depend on its inclusion (or non-inclusion) in that part of an utterance which constitutes the focus of this utterance. Focus means new information as opposed to given (old, presupposed, topical) information (cf. Schwarzschild 1999, Krifka 2007). E.g., a sentence of the type The doctor helped him quickly can have different readings, depending on the type of given or new information provided. By focus accent (indicated here by capital letters) a speaker highlights that part of the sentence which is to be understood as the new information. A neutral reading of this sentence would be The doctor HELPed him imMEDiately, asserting the immediate act of helping against a presupposed background {He needed help, there was a doctor}. A reading The doctor helped him imMEDiately, with focus only on the adverb, presupposes the act of helping. Focus on the object pronoun, as in The doctor helped HIM immediately, yields a contrastive reading which could also be expressed with a different syntactic construction: It was him who the doctor helped immediately. The crucial point is that in languages other than English an equivalent to focal HIM might be expressed differently from a non-focal (given) him. This difference may not only be due to narrow focus on HIM but also to a general differentiation between being part of the focus or not. The focus of a sentence can consist of more than one expression but only one can have focus accent. In this case it is appropriate to distinguish the focus independently from the focus accent using brackets, e.g., The doctor [HELPed]FOC him vs. The doctor [HELPed him]FOC, where the first sentence is an appropriate answer to a question What did the doctor do to him?, with everything presupposed except the predicate. The second sentence is an appropriate answer to the question What did the doctor do?, where the object is not part of the presupposition. In other words, it may be crucial for the form of a direct object whether it is a given expression which is part of the presupposition of a sentence (i.e. a topic expression in the tradition of Lambrecht 1994 and Nikolaeva 1999), or whether it is a focus expression, which is part of the assertion. Givenness of the object will be identified in section 3 as the responsible factor for the choice of the longer accusative form in Vym' and Luza. And in section 4 we show that in Kazym it is, conversely, focality which triggers the longer accusative form.
Two different accusative forms of personal and demonstrative pronouns in the Komi dialects of Vym' and Luza
3.1.1. The Komi-Zyrian dialect of Upper Vym' shows within its case system of personal and demonstrative pronouns two accusative forms: the 1st-3rd person singular pronouns and the 3rd person plural pronoun, as well as the demonstrative pronouns meaning 'this' and 'that', have a standard accusative form and a so-called "accusative-dative form" (the form which was glossed "ACC2" in the introductory examples (9b) and (10b) above). The name of the latter is due to its morphological structure, which is a combination of the standard pronominal accusative form plus the dative ending -ly. Note that the accusative-dative is distinct from the dative form of the respective pronouns, as can be seen in table 1.
3.1.2. As stated above, the difference in meaning between accusative and accusative-dative, as e.g. in sijö ~ sijöly, cannot be expressed in terms of prominency (definiteness, animacy), nor in affectedness of the object, nor in terms of aspect of the situation. In trying to explain the function of the morpheme -ly with direct objects, Frolova (1950: 137), Ljašev (1975: 94) and others (e.g. Serebrennikov 1963: 44) identified it as emphasis ("èmfatičeskoe vydelenie"), and labelled the morpheme -ly an "emphatic particle" (dative semantics thereby considered completely irrelevant). An interpretation of menöly 'me' in (9b) (repeated here as (12b)) as an emphatic object form, opposed to neutral menö in (12a), would achieve a contrastive reading: "Now, that you marry ME (and not anybody else)". In other words, the longer form would signal contrastive focus on the pronominal object.
(12) Komi-Zyrian, Vym' (Lug; Rédei 1978: 14)
a. Ivan, menö vaj-an, o-n?
Ivan I.ACC bring-PRS2SG not.PRS-2SG
'Ivan, will you marry me, or won't you?'
b. Vot jeśli-kö te vaj-an menö-ly, te asy lok taťťśö!
so if-COND you bring-PRS2SG I.ACC-DAT you tomorrow come.IMP2SG here.ILL
? 'So if you then marry [me]FOC, come here tomorrow!'
For the structurally parallel object form sijöly in (13), on the other hand, an interpretation operating with contrastive focus or emphasis on the object pronoun runs into difficulties. Such attempts must fail because emphasis here clearly has to be put on other elements. The sentence can only be understood correctly with (i) contrastive topic accents on the subject expressions, and (ii) contrastive focus accents on the predicates: I_CTOP [will make her_x fall ILL]FOC, and YOU_CTOP [go about to CURE]FOC her_x! The actual emphasis within the clause where the accusative-dative pronoun occurs is on the contrastive topic expression te 'you' and on the contrastive focal predicate leťśitny kuťśiś 'go about to CURe', but not on the object expression sijöly.
(13) Komi-Zyrian, Vym' (Koni; Frolova 1948/49: 61)
Context: "Death", in order to help a friend in becoming rich, sends him as a doctor to a rich merchant whose daughter is supposed to fall ill.
[…]FOC her", but if such a constituent has to appear within the focal sentence part it will appear in brackets marked as topic, as e.g. in (23) "[mash [it]TOP upon the stove]FOC".
Judging on the basis of (13), the meaning of the accusative-dative form is just the opposite of emphasis, namely deaccenting. Now, with the background of (13), let us reconsider our interpretation of (12b), which was 'Now, that you marry ME (and not anybody else)'. The question 'Ivan, will you marry me?' in (12a) introduces the idea of marriage into the discourse. The question is answered in the affirmative, and in (12b) the future bride makes preparations on how to proceed: 'So [IF then]FOC you marry me, come here tomorrow!' In this sentence all constituents (subject, object, verbal predicate) can be understood as given (presupposed) and there is no contrastive emphasis on the pronoun menöly. Instead, there is narrow focus on the conditional particle jeśli-kö 'if, if then'. The pronoun is not part of the focus; it rather seems to indicate that focus is on a different syntactic element, in this case on the only new element in the sentence, the conditional particle jeśli-kö 'if, if then'. The result so far is that the former reading with contrastive emphasis on the accusative-dative pronoun must be abolished; the final reading for (12b) is expressed in (14): '[IF then]FOC you marry me, come here tomorrow!'
3.1.3. The following example illustrates a different case of narrow focus on a constituent which is not the pronominal object expression. In (15b) this constituent is the subject pronoun. Both questions in (15) are asked by a father who needs to be rescued by one of his daughters. First, (15a) is addressed to his eldest daughter, who refuses to help him. After that he asks his second daughter (15b): 'Will [YOU]FOC not save me?' Again, the emphasized element is not the accusative-dative pronoun. While in the preceding examples the new information could be associated with constituents (a verbal predicate in (13), a conditional particle in (14), a subject expression in (15b)), in the following two examples the new information cannot be associated with such a specific constituent, at least not in the Komi text. In the English translations such an association is possible: it is the auxiliary which bears focus accent, DID in (16b) and WILL in (17). Both examples represent so-called verum focus, where the new information consists in the confirmation of an already established proposition with all elements given. This type of focus is quite common with repetitions in narratives, for which (16b) is an example. In (16a) a proposition is established: a tsar and his wife send a soldier to the tsarevič. The predication in (16b) repeats and confirms this proposition.
(16) a. […] seśśa sar gozja-ys i saldat-ös yst-isny pi-ys din-ö: myj sy-ly ńim-sö boś-ny?
then tsar couple-3SG and soldier-ACC send-PST3PL son-3SG to-ILL what (s)he-DAT name-ACC3SG take-INF
'[…] and then the tsar and the tsarina sent a soldier to their son, saying, "What name shall the boy be given?"'
b. Ystisny sijö-ly, saldat-ös.
send-PST3PL (s)he.ACC-DAT soldier-ACC
'They [did]FOC send him, the soldier.'
Verum focus is also the motivation for the use of an accusative-dative pronoun in (17). The text fragment starts with the decision of a poor brother to invite his rich brother to a party.
The following sentence provides background information in recalling an earlier, reverse situation; the new information in this sentence consists in the reversal of subject and object roles, and in the negation of the given predicate kor- 'invite'. This situation is, again, reversed, thus repeating the first sentence, but replacing the object expression by a pronoun. The focus in this sentence consists in the confirmation of the already established proposition, and this focus reading is enabled by the choice of the explicitly non-focal object expression: '… [will]FOC invite him.'
3.1.4. In summary it can be stated that a pronominal accusative-dative form in Vym' is an object expression signalling that it is not part of the focus of the sentence. This is especially clear in narrow focus contexts such as, e.g., subject constituent focus in (15b). Former analyses which interpreted the accusative-dative marked pronoun as an emphatic object expression appear to be wrong, as our readings of examples (13)-(17) have shown. Moreover, it can be demonstrated that if a pronominal object expression has narrow focus it is in the accusative case and not in the accusative-dative case: in (18) the main protagonist is the object of an attempt on his life; he is to be killed by a bunch of rascals. In order to irritate them he starts preparations for a trick: pretending that he does not want to leave a dolorous widow behind, he will apparently stab her before he gets killed himself (what he'll really stab is a bladder filled with red water). In explaining this plan to his wife he says me pö pervej tenö vija 'I will first kill YOU', with contrastive object focus. The act of killing (or pretending to do so) is presupposed by the preceding context, but the object is not. The object is not a contrastive topic, since the expected arguments of the killing event are the main protagonist and the rascals, but not the wife. The object expression thus bears a clear contrastive focus accent. The form is accusative, not accusative-dative.
(18) Komi-Zyrian, Vym' (Koni; Frolova 1948/49: 22)
Context: A trickster protagonist expects a bunch of rascals, who want to kill him, and develops a strategy:
Baba-ys-ly gaďď-ö kraska va pukt-is kunul-as.
wife-3SG-DAT bladder-ILL red water put-PST3SG armpit-ILL3SG
"Žuľik-jas kö pö kuťťś-asny vi-ny, me pö pervej tenö vij-a, o-g koľ muťśiťťśy-ny, purt-ön pö tšuköd-a gaďď-ad."
rascal-PL COND QUOT begin-FUT3PL kill-INF I QUOT first you.ACC kill-PRS1SG not.PRS-1SG leave.CN suffer-INF knife-INS QUOT stab-PRS1SG bladder-ILL2SG
'He puts a bladder with red water under his wife's armpit (and says): "If the rascals come and start to kill, I first kill [you]FOC, I don't leave you (alone) suffering, I stab into this bladder of yours with a knife."'
3.1.5. In Komi dialects, a special object marking strategy for presupposed objects involving the dative is not exclusively found with pronominal objects. In Vym', as well as in other dialects of Komi-Zyrian (Ižma, Luza-Letka) and of Komi-Permyak (Kosa-Kama, Kočëvo), it applies to nouns as well (see Baker 1985: 202-221, Klumpp 2009). Morphologically, the dative marked direct object has the same form as an indirect object, i.e. other than with pronouns, a presuppositional nominal object is marked by the dative case proper and not by a special accusative-dative case. Its function is basically the same as the function of the accusative-dative marked pronominal objects: it signals givenness (presupposedness, topicality) of the direct object. For illustration cf. (19b), where a dative marked direct object, kerkaly 'the house', occurs in a narrow focus context with focus on the adverbial expression setšöm ńeštšaśľiveja 'in such an UNlucky way': '… [it]TOP off course]FOC and tore apart the sails'. Instances of dative marked pronominal object expressions have been reported also from Northern Permyak dialects (Batalova 1975: 141).
3.2. Accusative-dative marked pronominal objects as in Vym' are found also in Luza, a southern dialect of Komi-Zyrian which is not adjacent to Vym'. To be exact, for the Luza area accusative-dative forms have been reported from Čitaevo, Ob"jačevo, Nošul' and Lovlja, the latter situated between the rivers Luza and Letka. In Luza the same pronouns as in Vym', except for the 3rd person plural pronoun ńida, show two object forms. As can be seen from table 2, morphologically we face the same accusative form as well as the same combined accusative-dative form; the only difference consists in the quality of the suffix vowel. Concerning the function of this category in Luza there is nothing new to be stated. Dialect texts from the above-mentioned settlements are scarcer than texts from Upper Vym', and examples are rather rare. Still, there are instances which call for explanation. A successful interpretation of the Luza pronominal object forms in (10) (repeated as (21) below) can be achieved with the same focus type readings as in Vym'. In (21b) this is a reading with verum focus. (21) is a fragment from a wedding song in which the bride laments over the fact that she will be handed over to her husband's family. In the first mention in (21a) the object is not presupposed in this role and, consequently, the focal accusative form menö 'me' is used. In (21b), after resigning herself to the fact that she cannot do anything about it, the situation expressed before is repeated, now with an accusative-dative form of the pronoun: '… [will]FOC hand me over to good people.' A verum focus reading is also possible for (22), which appears as an isolated sentence in N. Loskutova's (1972) field materials from Ob"jačevo; therefore, in the English translation, the focus is constituted by the auxiliary verb. However, if the idea of forgetting is the new information in this sentence, then a reading with narrow verb focus is appropriate, as indicated in the second translation by the focus accent on the main verb. Note that this sentence would make it possible to observe a correlation between verum focus and VSO word order, i.e. fronting of the verbal predicate; due to subject pro-drop, the above instances of verum focus, (16b) and (21b), do not allow such an observation. Finally, (24) parallels the Vym' example (14) above. The example comes from a tale about a poor boy who gets into grief for having shot a beautiful duck's wing with an arrow. The duck turns out to be a rich and beautiful girl who lives inside the lake and is willing to reward the boy's compassion with her love. Again, we find a conditional particle as the focus element in a surrounding of given elements: '[IF]FOC you then have compassion for me, I will do you much good.'
3.3. In this section we have demonstrated how the information structural category of focus functions as a parameter of differential object marking. We have advocated an interpretation of the accusative-dative form of pronouns, a dialect-specific innovation in the Komi-Zyrian dialects of Vym' and Luza, as a special form for non-focal object expressions. This form indicates that the object expression is not part of the focus, often in sentences with narrow focus on a constituent other than the object itself.
Two different accusative forms of personal pronouns in Kazym-Khanty
4.1. Khanty personal pronouns are inflected in the accusative and dative case. In addition, the inflectional paradigm of personal pronouns in Kazym Khanty shows two variants of these case markers, a morphologically simple one and a more complex one. The simple dative form consists of the stem of the respective pronoun followed by a homodeictic possessive suffix, a pattern found throughout Khanty dialects. In the complex dative form a suffix -a is added, which is the lative case suffix from the nominal declension. Concerning the accusative, the simple form has the common Khanty pronominal accusative suffix -t. The complex form adds an -i, whose etymological origin is unknown to me. Table 3 shows only the singular forms, but dual and plural personal pronouns are affected in the same way. Note that Kazym is not a homogeneous dialect but consists of subvarieties, several of which do not distinguish two different accusative forms, and some of which do not even distinguish accusative and dative in their pronominal inflection (cf. Nëmysova et al. 1996: 15, 21).
4.2. There is contradictory information concerning the different uses of these pronominal case forms. Firstly, for the dative forms, Koškareva (2002: 30) explains that the simple form is used in thetic (all-new) sentences, as in (25a), as well as in topic-focus sentences where it constitutes a topic expression, cf. (25b). The complex form is used in topic-focus sentences as a focus expression, cf. (25c). Judging from this example, the complex form is used with narrow focus on the pronoun. For the accusative pronouns we would expect the same distribution, i.e. the longer form in -i functioning in contexts where there is narrow focus on the pronoun. According to Rédei (1968: 21) the complex case forms are emphatic ("nachdrücklich"), an observation which, at least, is not contradictory to the notion of narrow focus. Koškareva (2001: 112) only points out that there is a general difference between thematic (~ topical) and rhematic (~ focal) pronouns in Kazym, but she provides examples only for the dative case. Moreover, in another article Koškareva (2001: 238) claims that manət ~ manti 'me (ACC)', năŋət ~ năŋti 'you (ACC)' etc. are "regional variants". However, there is clear counterevidence, because these variants occur with one and the same speaker. In Kazym text publications the variation in question occurs with at least five speakers: N. P. Kaksin from Amninskie uses manət 'me' in OA III: 478, 479 and manti id. in OA III: 391, 443, 479, 482; M. K. Tarlina from Juil'sk uses năŋət 'you' in Rédei 1968: 66 but năŋti id. in Rédei 1968; similar variation is found with A. M. Moldanov from Juil'sk in Moldanov 2001: 165, 177 vs. op. cit.: 145, 164, 177; with V. N. Tarlina from Kazym in Moldanov 2001: 180, 182 vs. op. cit.: 180; and with N. M. Lozjamov from It'-jaxa in Moldanov 2001: 218 vs. op. cit.: 218. In consequence, a closer inspection of this variation in search of a functional difference seems appropriate.
Let us try to interpret the above example (12) (repeated as (26) below) along the notion of narrow focus or emphasis as given by Koškareva (2002: 30) and Rédei (1968: 21). The simple form in (26a) can be appropriate here because (i) it is an all-new context, (ii) the object expression is the only topical constituent, or (iii) the time adverbial has narrow focus; cf. the focus brackets in the English translations (the original Russian translation is added in brackets). The correct interpretation of (26b), according to our information thus far, would be (i), with narrow focus on the object expression. However, in the case of the time adverbial in (26a) narrow focus seemed quite natural in a conversation guide, whereas in the case of the pronominal object in (26b) it appears somehow very specific. As a solution for (26b) one could think that the longer accusative form does not necessarily trigger a reading with narrow focus, but simply signals that the object expression is part of the focus, as in the translation in (ii). But in the case of (26b) this would mean that the sentence is thetic (all new), a context in which Koškareva (2002: 30), in the case of the dative forms, predicts the use of the shorter form. Obviously, there is a dilemma. The question is: does the complex accusative form only signal narrow focus on the object, or does it signal focality of the object? The difference in meaning between the supposed topical form mant 'me (ACC1)' in (27a) and the supposed focal variant manti 'me (ACC2)' in (27b) can be understood as a difference in alternatives. While in (27a) there is no alternative object referent to take over, in (27b) there is one, because (27b) is uttered by a second hero, who had appeared after the first hero and who is about to kill the first one and then return in his place. The alternative consists in this first hero, and the object can be con-
In interpreting the next example (28a), we read the simple accusative form mănət 'me' as non-focal in the context of narrow predicate focus. This is a plausible reading because the situation in which the request to kick is uttered clearly involves both protagonists, the addressee and the speaker, who arrived at the place together and are now bound to separate. The question underlying the request is "What are you supposed to do now in respect to me?". This request is first followed by a predication about what will happen to the horse, and then, in (28b), advice concerning the future of the addressee. If he should happen to get into trouble, he is supposed to do the following: look for his horse. The underlying question, not presupposing the occurrence of the object referent, is now "What are you supposed to do?". Obviously, with this question it is not necessary to read the longer form in (28b) as a contrastive one: "look for ME (and for nobody else)". Instead, (28b) simply signals that the occurrence of the object referent in the predication was not presupposed. '… FOC take me along!' ('Mladšij brat sil'no stal prosit'sja: Net, brat'ja, sdelajte xorošoe delo, voz'mite menja s soboj.', i.e. 'The younger brother begged hard: No, brothers, do a good deed, take me with you.') Finally (30), which concludes the present discussion, contrasts the two focal forms in (30a, b) with the non-focal form in (30c). The difference, in our opinion, is that in (30c) the object referent is part of the background question ("what are you supposed to do to me?"), whereas in the preceding (30a, b) the background question does not include the object referent ("what are you supposed to do?").
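As a compact restatement of the two patterns argued for in sections 3 and 4, the sketch below (in Python, purely illustrative and not part of the original analysis) encodes the dialect-specific mapping between the focality of a pronominal direct object and the accusative form it takes; the labels ACC1/ACC2 follow the glossing introduced in (9)-(11).

```python
def pronominal_object_case(dialect: str, object_in_focus: bool) -> str:
    """Which accusative form a pronominal direct object takes.

    Vym' and Luza (Komi-Zyrian) use the longer accusative-dative (ACC2)
    for NON-focal objects; Kazym (Khanty) uses its longer complex
    accusative in -i (ACC2) for FOCAL objects.
    """
    if dialect in ("Vym'", "Luza"):
        return "ACC1" if object_in_focus else "ACC2 (accusative-dative)"
    if dialect == "Kazym":
        return "ACC2 (complex form in -i)" if object_in_focus else "ACC1"
    raise ValueError("mapping described here only for Vym', Luza and Kazym")

for dialect in ("Vym'", "Luza", "Kazym"):
    for focal in (True, False):
        form = pronominal_object_case(dialect, focal)
        print(f"{dialect}: {'focal' if focal else 'non-focal'} object -> {form}")
```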
Conclusions and outlook
The DOM pattern which appears in the data presented in this paper cannot be sufficiently explained by the categories of definiteness, animacy and affectedness. Instead, it can be demonstrated that an explanation in terms of information structure is possible, i.e., in terms of discourse topic and focus. To be more exact, it can be demonstrated that variation in marking the direct object is an instrument for defining the size of the focus within a sentence. In addition to the common accusative form of a pronominal direct object, the Uralic languages discussed above created a second case form which explicitly indicates that the object expression is either part of the focus (Kazym) or outside of the focus (Vym', Luza). It seems that, among Uralic languages, focus-sensitive DOM appears with personal pronouns in a particular area comprising dialects of Komi and Khanty. Is this exhaustive, or have comparable patterns up to this point been overlooked? | 2018-12-05T14:03:20.351Z | 2012-06-18T00:00:00.000
"year": 2012,
"sha1": "60ae570fa1ed0f477277f921d3fd030b4c412414",
"oa_license": null,
"oa_url": "http://ojs.utlib.ee/index.php/jeful/article/download/jeful.2012.3.1.17/10329",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "60ae570fa1ed0f477277f921d3fd030b4c412414",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
29187754 | pes2o/s2orc | v3-fos-license | Chromium oxide coatings with the potential for eliminating the risk of chromium ion release in orthopaedic implants
Chromium oxide coatings prepared by radiofrequency reactive magnetron sputtering on stainless steel substrates were exposed to Ringer's physiological solution and tested for their electrochemical corrosion stability using an open circuit potential measurement, potentiodynamic polarization, electrochemical impedance spectroscopy and Mott–Schottky analysis. The coatings were found to be predominantly Cr2O3, based on the observation of the dominance of A1g and Eg symmetric modes in our Raman spectroscopic investigation and the Eu vibrational modes in our Fourier transform infrared spectroscopic measurements on the coatings. Using atomic absorption spectroscopy, we probed Ringer's solution for chromium ions after all of the above electrochemical tests and found no trace of chromium ions at the ppm level for coatings tested under open circuit conditions and at the lower potentials implants are likely to experience in the human body. The coatings were further exposed to Ringer's solution for one month and tested for adhesion strength changes, and we found that they retained substantial adhesion to the substrates. We expect this finding to be significant for future orthopaedic implants, where chromium ion release is still a major challenge.
Introduction
One of the major challenges faced by patients with metal-on-metal hip replacements is the potential for implant failure. Implant failure in vivo is well known to involve corrosion, which leads to the release of wear debris/ions and has been linked with possible adverse health effects such as pain, pseudotumour formation and inflammation in patients [1-3]. Stainless steel, cobalt chromium molybdenum alloy, titanium and zirconium alloys, ultra-high molecular weight polyethylene and alumina-based ceramics are widely used as implant materials due to their high corrosion resistance, mechanical properties and biocompatibility. However, the above-mentioned implant materials are not immune to wear, corrosion and biocompatibility issues. Studies conducted on retrieved 316L stainless steel implants show that more than 90% of failures of the material are due to pitting and crevice corrosion attack [4], resulting in the release of Fe, Cr and Ni ions, which are toxic to the human body. Nevertheless, its low cost, despite its poorer corrosion resistance compared with other biomaterials (such as cobalt chromium molybdenum and titanium alloys), makes it suitable for temporary implants like screws and plates that are normally removed after a few years. Cobalt chromium molybdenum alloy (CoCrMo) implants, on the other hand, release cobalt and chromium ions into the surrounding tissue fluid because of wear at the bearing surface and corrosion of the implant material [5-7]. The biocompatibility of an implant material is strongly linked to its resistance to tribological and corrosion processes. These properties can be enhanced by applying a wear- and corrosion-protective coating material on the implant surface to eliminate or minimize surface damage and degradation while retaining the bulk material properties.
Several coating materials, such as diamond-like carbon (DLC), chromium nitride (CrN), hydroxyapatite (HA), titanium nitride (TiN) and titanium niobium nitride (TiNbN) [8-14], have been explored by different researchers for orthopaedic implant applications. While some of these coatings are either in their early or advanced stages of investigation, others such as TiN and TiNbN are already in use in commercially available hip and knee prostheses [13,14]. One of the major drawbacks of some of these coatings is delamination, which occurs due to poor adhesion on substrates resulting from the high level of internal stress generated during the coating process. Contrary to the excellent results obtained from in vitro wear simulator studies on DLC coatings, poor adhesion of the coatings on the substrate was reported during in vivo tests [15,16]. Raimondi et al. [17] reported local delamination and coating breakdown in case studies of explanted TiN-coated hip implants. HA coatings, on the other hand, despite their excellent biomaterial properties, have inherent mechanical limitations such as poor tensile strength, poor impact resistance and brittleness that have restricted their use in many load-bearing applications [18]. Chromium oxide coatings can be prepared using various techniques such as sputtering, chemical vapour deposition, electron-beam evaporation, plasma spray pyrolysis, arc ion plating and pulsed laser deposition [19-25]. Of all the deposition techniques, radiofrequency reactive magnetron sputtering is regarded as the most suitable for industrial production, as this technique enables the synthesis of high-quality Cr2O3 with the desired hardness and wear properties. Previous studies on chromium oxide coatings prepared by magnetron sputtering have revealed that, depending on the preparation method and deposition conditions used, films with single or mixed phases and varying properties can be formed [19,26,27]. Barshilia & Rajam [26] and Hones et al. [19] reported the presence of single-phase Cr2O3 for chromium oxide coatings prepared using pulsed DC reactive unbalanced magnetron sputtering and reactive magnetron sputtering, respectively, while films with mixed phases of Cr2O3 and CrO3 were obtained by Pang & Gao [27] using reactive unbalanced magnetron sputtering. The authors observed a maximum hardness value of 22 GPa for films prepared by pulsed DC unbalanced magnetron sputtering and a hardness of 32 GPa for films prepared by reactive magnetron sputtering [19,26]. A number of investigations conducted on the corrosion properties of chromium oxide coatings have shown that they possess good corrosion resistance in phosphate-buffered saline (PBS) solution and saline solution [26,28-30], and have the capability to act as a protective coating material for stainless steel and other substrates exposed to the aforementioned solutions. It has been shown that chromium oxide coatings prepared by radiofrequency reactive magnetron sputtering, which are predominantly single-phase Cr2O3, release a minimal amount of chromium ions during electrochemical tests in saline solution [29]. The authors reported that the ion release behaviour of the coating is most probably due to the lower defect density on the surface of the coatings.
In order for chromium oxide coatings to be considered a viable candidate material for implant applications, their electrochemical behaviour in physiological solutions such as Ringer's and Hanks' solutions, which contain ion concentrations close to those of the human body, needs to be investigated.
We now report our investigation of the corrosion and adhesion behaviour of chromium oxide coatings prepared by reactive magnetron sputtering when exposed to Ringer's solution. This is necessary for evaluating their suitability for implant applications.
Theory of the substrate plastic straining adhesion test
The substrate plastic straining test is an adaptation of the shear-lag analysis for composites, as shown below. Cox [31], using the shear-lag analysis, has shown that for a fibre of length L embedded in a matrix subjected to strain, as long as the fibre is sufficiently long, the stress in the fibre will increase from the two ends to a maximum value, and the average stress in the fibre will be given by

$$\bar{\sigma} = E_f\, e \left[1 - \frac{\tanh(\beta L/2)}{\beta L/2}\right], \quad (2.1)$$

with

$$\beta = \frac{1}{r_f}\sqrt{\frac{2G_m}{E_f \ln(R/r_f)}}. \quad (2.2)$$

Here, e is the strain, G_m is the shear modulus of the matrix, E_f is the fibre elastic modulus, L is the fibre length and R is the distance between neighbouring fibres. The variation in the shear stress along the fibre-matrix interface can be expressed as

$$\tau(x) = \frac{E_f\, e\, \beta\, r_f}{2}\cdot\frac{\sinh\!\big(\beta(L/2 - x)\big)}{\cosh(\beta L/2)}, \quad (2.3)$$

where x is the distance from one end of the fibre and r_f is the fibre radius.
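A short numerical illustration (Python; all input values are assumed placeholders, not data from this study) of the Cox expressions reconstructed above, showing how the average fibre stress of equation (2.1) approaches the full matrix-strain value E_f e as the fibre length grows:

```python
import math

def beta(G_m, E_f, r_f, R):
    """Shear-lag load-transfer parameter of equation (2.2)."""
    return math.sqrt(2.0 * G_m / (E_f * math.log(R / r_f))) / r_f

def average_fibre_stress(e, E_f, b, L):
    """Equation (2.1): average stress in a fibre of length L at strain e."""
    x = b * L / 2.0
    return E_f * e * (1.0 - math.tanh(x) / x)

# Assumed illustrative values: a glass fibre in a polymer matrix.
G_m, E_f = 1.0e9, 70.0e9       # Pa
r_f, R = 7e-6, 20e-6           # m
e = 0.01                       # applied strain
b = beta(G_m, E_f, r_f, R)
for L in (0.1e-3, 0.5e-3, 2.0e-3):   # fibre lengths in metres
    s = average_fibre_stress(e, E_f, b, L)
    print(f"L = {L*1e3:.1f} mm -> average stress = {s/1e6:.0f} MPa")
```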
Kelly & Tyson [32] further proposed a relationship for the interfacial shear strength in the above situation, given by

$$\tau = \frac{\sigma_{fu}\, d}{2 L_c}, \quad (2.4)$$

where d is the fibre diameter, L_c is the critical fibre length and σ_fu is the fibre strength at a length equal to L_c. As a range of values will occur for the critical fibre length L_c, several proposals have been made for determining the L_c value to be used in equation (2.4), using mean fibre lengths obtained by fitting the experimental fibre length distribution measurements to the most closely matching probability distribution of fibre lengths (e.g. Weibull, lognormal, etc.).
The substrate-straining technique is a variant of the method for determining the interfacial shear strength of fibre-reinforced composites developed by Kelly & Tyson [32] in equation (2.4). A non-ductile thin film/coating deposited on a ductile substrate is subjected to a tensile stress and deformed plastically. The strain at which cracks begin to appear on the film/coating, observed by scanning electron microscopy, gives a measure of the tensile fracture strength of the film, expressed as [33]

$$\sigma = \varepsilon_f E, \quad (2.5)$$

where σ is the film fracture stress, E is Young's modulus of the film and ε_f is the fracture strain. The substrate-film system is then strained continuously, and the spacing of the cracks on the film is monitored by observing the test specimen in the scanning electron microscope at various strains until saturation is achieved, i.e. a steady-state spacing of cracks. The crack spacing distribution under steady-state conditions is then determined and the mean crack spacing λ evaluated. The interfacial shear strength is determined by substituting λ into

$$\tau = \frac{2\sigma\delta}{\lambda}, \quad (2.6)$$

where σ is the film fracture stress, δ is the thickness of the film and λ is the average crack spacing at saturation.
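As a concrete illustration of the procedure just described, the sketch below (Python; not part of the original paper) evaluates the film fracture stress from equation (2.5) and the interfacial shear strength from equation (2.6) in the force-balance form reconstructed above (τ = 2σδ/λ). All numerical inputs are placeholders chosen only to show the order of magnitude of the calculation, not measured values.

```python
def film_fracture_stress(fracture_strain: float, youngs_modulus_pa: float) -> float:
    """Equation (2.5): fracture stress of the film at first cracking."""
    return fracture_strain * youngs_modulus_pa

def interfacial_shear_strength(sigma_pa: float, thickness_m: float,
                               mean_crack_spacing_m: float) -> float:
    """Equation (2.6): shear-lag estimate from the saturation crack spacing."""
    return 2.0 * sigma_pa * thickness_m / mean_crack_spacing_m

# Illustrative values only (not the paper's data):
eps_f = 0.03      # strain at first cracking (3%)
E_film = 250e9    # assumed Young's modulus of the Cr2O3 film, Pa
delta = 215e-9    # film thickness, m (within the 210-220 nm range used here)
lam = 4.0e-6      # assumed mean crack spacing at saturation, m

sigma = film_fracture_stress(eps_f, E_film)
tau = interfacial_shear_strength(sigma, delta, lam)
print(f"fracture stress = {sigma/1e9:.2f} GPa, shear strength = {tau/1e9:.3f} GPa")
```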
Mott-Schottky analysis
Mott-Schottky analysis was conducted to determine the electronic defect properties of the oxide layers formed on the surface of the chromium oxide-coated and uncoated stainless steel substrates. The relationship between the capacitance and the applied potential is given by the well-known Mott-Schottky equations for n-type and p-type semiconductors, shown in equations (3.1) and (3.2), respectively:

$$\frac{1}{C^2} = \frac{2}{\varepsilon\varepsilon_o Q N_D}\left(E - E_{FB} - \frac{KT}{Q}\right) \quad (3.1)$$

and

$$\frac{1}{C^2} = -\frac{2}{\varepsilon\varepsilon_o Q N_A}\left(E - E_{FB} + \frac{KT}{Q}\right), \quad (3.2)$$

where ε_o is the permittivity of free space (8.85 × 10−14 F cm−1), ε is the dielectric constant of the passive film, Q is the electron charge (1.602 × 10−19 C), N_D and N_A are the donor and acceptor densities, respectively, E_FB is the flat-band potential, K is the Boltzmann constant (1.38 × 10−23 J K−1) and T is the absolute temperature. We see that N_D and N_A can be determined from the slope of the experimental C−2 versus applied potential (E) plot, while the intercept on the potential axis corresponds to the flat-band potential E_FB. The validity of the Mott-Schottky analysis rests on the assumption that the capacitance of the space charge layer is much smaller than the double-layer capacitance.
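The slope-based extraction implied by equations (3.1) and (3.2) can be sketched as follows (Python; illustrative only). The dielectric constant assumed for the oxide and the synthetic capacitance data are placeholders, not values reported in this work.

```python
import numpy as np

EPS0 = 8.85e-14   # F/cm, permittivity of free space (units as in the text)
Q = 1.602e-19     # C, electron charge

def carrier_density_from_mott_schottky(E_volt, C_farad_per_cm2, eps_r):
    """Fit C^-2 vs E in the linear region; return (density_cm^-3, E_fb_volt).

    A negative slope indicates p-type behaviour (acceptor density N_A),
    a positive slope n-type (donor density N_D); cf. equations (3.1)-(3.2).
    The small kT/Q term is neglected in the flat-band estimate.
    """
    y = 1.0 / C_farad_per_cm2**2                      # cm^4/F^2
    slope, intercept = np.polyfit(E_volt, y, 1)       # linear least squares
    density = 2.0 / (eps_r * EPS0 * Q * abs(slope))   # from the M-S slope
    e_fb = -intercept / slope                         # intercept on the E axis
    return density, e_fb

# Synthetic p-type example; eps_r ~ 12 is an assumed value for the oxide:
E = np.linspace(-0.6, 0.2, 17)
C = 1.0 / np.sqrt(2.0 / (12 * EPS0 * Q * 1e21) * (0.5 - E))  # fabricated data
print(carrier_density_from_mott_schottky(E, C, eps_r=12))
```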
Film deposition
Chromium oxide thin film deposition was carried out in a cryo-pumped vacuum chamber with a radiofrequency magnetron sputtering unit, using the deposition conditions given in table 1. A solid chromium target was used as the starting material, with high-purity argon and oxygen as the sputtering and reactive gases, respectively. The substrate materials were glass slides, silicon wafers and 304 medical grade stainless steel. Each of the steel substrates for tensile testing was machined into a dog bone shape for mechanical testing, while the substrates for corrosion testing were cut into 80 × 80 mm squares, ground and metallographically polished. The substrates were cleaned ultrasonically with isopropanol and then washed with de-ionized water. Prior to deposition, the deposition chamber was evacuated to a pressure of about 8 µTorr.
Material characterization
The chemical constituents of the prepared chromium oxide films were probed with Raman spectroscopy, Fourier transform infrared (FTIR) spectroscopy and X-ray photoelectron spectroscopy (XPS). A Thermo Scientific DXR Raman microscope, a Nicolet iS50 FTIR instrument and a Scienta ESCA 300 spectrometer equipped with a monochromatic Al Kα (1486.6 eV) X-ray source were used for the measurements. The XPS spectral features were background-corrected and fitted with a mixture of Gaussian and Lorentzian fitting functions. The morphology and structural properties of the chromium oxide films were investigated with a Hitachi S-4100 scanning electron microscope (SEM) and a Siemens D5000 X-ray diffractometer. A Perkin Elmer Analyst 300 atomic absorption spectroscopy facility was used to probe Ringer's solution from the corrosion tests to determine the chromium ion concentrations released into the solution.
Electrochemical corrosion and Mott-Schottky analysis measurements
The electrochemical measurements on the chromium oxide-coated and uncoated steel substrates were conducted in Ringer's solution at 37°C using a VoltaLab 40 system consisting of a PGZ 301 potentiostat and a three-electrode electrochemical cell. The chemical composition of Ringer's solution used was as follows: NaCl (6.5 g l−1), KCl (0.42 g l−1), CaCl2 (0.25 g l−1) and NaHCO3 (0.20 g l−1). For the electrochemical tests, the deposition times for the coatings were extended beyond 120 min for some samples to achieve approximately equal thicknesses, guaranteeing a consistent comparison of the corrosion performance of the coated samples in Ringer's solution. Chromium oxide-coated samples with thicknesses in the range of 210-220 nm were used in the investigation. The sample surface was used as the working electrode, while the reference and counter electrodes were a saturated calomel electrode (SCE) and a platinum wire, respectively. Potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) methods were employed in the corrosion investigation. Prior to the corrosion test, an open circuit potential measurement was performed for 2 h to enable the system to reach a steady state, thereby allowing the corrosion potential to stabilize before the experiment. The potentiodynamic polarization involved polarizing the samples from −1000 mV to 1000 mV at a rate of 2 mV s−1, and the EIS measurements were conducted over a frequency range of 10 mHz to 40 kHz with an AC amplitude of 20 mV. The properties of the passive films formed on the surface of the samples were investigated with Mott-Schottky analysis, for which the samples were polarized from −1000 mV to +700 mV in successive steps of 50 mV.
Substrate plastic straining adhesion test
Each substrate for tensile testing was machined into a tensile 'dog bone' shape before coating and measured 120 × 20 mm with a thickness of 1 mm. The chromium oxide-coated dog bone samples (as-prepared films and films soaked in Ringer's solution) were subjected to increasing tensile strains of 3% and 5% to initiate cracks in the films and to determine the strain at which initial cracking occurred, using an Instron 50 kN machine. As the steel substrate was deformed plastically in tension, the chromium oxide films developed cracks normal to the loading axis. The samples were further strained to 10, 15, 20 and 25% to determine the saturation crack spacing. The gauge sections of all the chromium oxide-coated dog bone-shaped samples subjected to tensile testing were cut and examined with a Hitachi S-4100 SEM; image analysis software on the SEM was used to measure the crack spacings before an analysis of the crack spacing statistics on the films was conducted. One hundred crack spacing measurements were taken for each of the samples, and the results were tabulated for statistical analysis. The strain at which film cracks start to appear gives a measure of the tensile fracture strength of the film; this was determined using equation (2.5), and the interfacial adhesion between the substrate material and the thin film coating can then be calculated using equation (2.6).

Results and discussion
Raman, Fourier transform infrared and X-ray photoelectron spectroscopic investigations
The Raman spectra for chromium oxide coatings prepared at various forward powers, showing three Raman peaks for each of the conditions, are depicted in figure 1. An average of four to five Raman peaks have been reported by various authors for Raman spectroscopic studies of chromium oxide films in the literature. Khamlich et al. [34], in their investigation of the growth mechanism of α-Cr2O3 monodispersed particles prepared by aqueous chemical growth, observed four peaks: one A1g mode of chromium oxide at 550 cm−1 and three other peaks at approximately 302, 349 and 605 cm−1, which they attributed to the Eg mode. In a similar investigation by Barshilia & Rajam [26], the authors reported four Raman bands and identified the band centred at 544 cm−1 as A1g symmetry and the other bands centred at 302, 349 and 605 cm−1 as the Eg symmetry of chromium oxide. Mougin et al. [35] reported further Raman peaks at 307, 524, 350, 551 and 610 cm−1 for chromium oxide obtained at ambient pressure. In our current investigation of chromium oxide films prepared at different deposition powers, we observed three Raman peaks associated with chromium oxide at approximately 306, 352 and 549 cm−1, with the most pronounced peak at 549 cm−1, as shown in figure 1. All the peaks correspond to one single phase of chromium oxide, i.e. Cr2O3 [26,29]. The slight shift in the Raman peaks in this study compared to those reported in the literature is probably due to differences in the deposition methods and preparation conditions used in the production of the thin films. The FTIR spectra for chromium oxide coatings prepared on silicon wafer at different deposition powers and an oxygen flow rate of 10 sccm are shown in figure 2. Each spectrum is composed of peaks at 548 and 611 cm−1, both of which represent vibrational modes of Cr2O3 [36], and their intensity increases with an increase in the forward power used during the deposition of the films. The peak at 411 cm−1 was assigned to the Eu vibrational mode.
Scanning electron microscopy and X-ray diffraction analysis
Cross-sectional images of the chromium oxide coatings prepared under various oxygen flow rates and a constant power of 500 W are depicted in figure 4. The SEM images indicate that the films have columnar structures, but films prepared at lower oxygen flow rates appear to display a denser columnar structure. A dense, closely packed structure is a desirable surface feature for anti-corrosion films, as it can help reduce the pathways through which aggressive species such as chloride ions can diffuse and attack the substrate material. The results of the X-ray diffraction (XRD) analysis of the chromium oxide coatings are shown in figure 5. As can be seen from figure 5, no XRD peak was found in the scans, which suggests that the prepared films are predominantly amorphous.
Potentiodynamic polarization
The Tafel plots obtained from the potentiodynamic polarization test for the corrosion of chromium oxide-coated and uncoated steel in Ringer's solution at 37°C are presented in figure 6. The corrosion parameters (corrosion potential, corrosion current and polarization resistance) for both the chromium oxide-coated and uncoated stainless steel were determined by the Tafel extrapolation method and are presented in table 2. The corrosion current densities of the coated samples were lower than that of the uncoated stainless steel substrate, which implies a lower corrosion susceptibility for the chromium oxide-coated substrates. Higher polarization resistance (Rp) values were also observed for the chromium oxide-coated samples. Furthermore, for all the chromium oxide coatings, the breakdown potentials were more positive compared to those of the uncoated stainless steel, which is an indication that the coatings have a better pitting corrosion resistance and possess the capability to protect the stainless steel in Ringer's solution.
The chromium oxide coatings prepared under various oxygen flow rates showed varying degrees of protection of the stainless steel substrate, which is evident in the values of the corrosion parameters, such as current density, obtained from the Tafel extrapolation. The chromium oxide coating prepared at an oxygen flow rate of 4 sccm showed the lowest current density, followed by the 8 sccm sample, with the sample deposited at 10 sccm showing the highest current density among the coated samples, as illustrated in table 2. This trend in coating performance is most probably due to a decrease in the compactness of the films with increasing oxygen flow rate during deposition, which was observed earlier in the SEM characterization of the samples. The potentiodynamic polarization results show that chromium oxide coatings not only possess good electrochemical stability in solutions such as PBS solution [28] and saline solution [26,29,30], as previously reported in the literature, but also have superior electrochemical stability compared to bare uncoated stainless steel in Ringer's solution. This makes them a promising candidate material for medical implant applications. Figure 7 shows a typical SEM image of a chromium oxide-coated sample after the potentiodynamic polarization test in Ringer's solution. As can be seen from the SEM image, the coated sample showed good resistance to corrosion with no sign of pitting, but with crystallites from Ringer's solution deposited on the surface.
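A minimal sketch (Python; not from the paper) of how corrosion parameters of the kind listed in table 2 can be extracted from a polarization curve by Tafel extrapolation, together with the standard Stern-Geary estimate of the polarization resistance. The Butler-Volmer curve and all parameter values below are synthetic placeholders.

```python
import numpy as np

def tafel_fit(E, i, E_corr, window=(0.08, 0.18)):
    """Fit log10|i| versus E on the anodic and cathodic branches within a
    band of overpotential (V) away from E_corr; extrapolating the anodic
    Tafel line back to E_corr gives i_corr."""
    eta = E - E_corr
    lines = {}
    for name, mask in (("anodic", (eta > window[0]) & (eta < window[1])),
                       ("cathodic", (eta < -window[0]) & (eta > -window[1]))):
        lines[name] = np.polyfit(E[mask], np.log10(np.abs(i[mask])), 1)
    slope_a, icpt_a = lines["anodic"]
    slope_c, _ = lines["cathodic"]
    i_corr = 10 ** (slope_a * E_corr + icpt_a)      # A/cm^2
    beta_a, beta_c = 1.0 / slope_a, -1.0 / slope_c  # Tafel slopes, V/decade
    return i_corr, beta_a, beta_c

def stern_geary_rp(i_corr, beta_a, beta_c):
    """Polarization resistance (ohm cm^2) from the Stern-Geary equation."""
    return beta_a * beta_c / (2.303 * i_corr * (beta_a + beta_c))

# Synthetic Butler-Volmer polarization curve (placeholder values only):
E_corr, i0, ba, bc = -0.25, 1e-7, 0.12, 0.12   # V vs. ref, A/cm^2, V/decade
E = np.linspace(-0.6, 0.1, 400)
eta = E - E_corr
i = i0 * (10 ** (eta / ba) - 10 ** (-eta / bc))
i_corr, beta_a, beta_c = tafel_fit(E, i, E_corr)
print(f"i_corr ~ {i_corr:.2e} A/cm^2, "
      f"Rp ~ {stern_geary_rp(i_corr, beta_a, beta_c):.2e} ohm cm^2")
```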
Electrochemical impedance spectroscopic measurements
The impedance spectra obtained from EIS measurements for the corrosion of chromium oxide-coated and uncoated stainless steel substrates in Ringer's solution at 37°C are shown as Nyquist and Bode plots in figure 8a-c. The data were analysed by fitting them to the equivalent circuit models shown in figure 9 (equivalent circuit models used for fitting the experimental data of chromium oxide-coated stainless steel (a) and uncoated stainless steel substrates (b) exposed to Ringer's solution at 37°C),
with the constant phase element (CPE) used instead of pure capacitors to account for the inhomogeneous state of the sample surfaces. In the equivalent circuit in figure 9, R1 represents the solution resistance, and R2 and CPE1 are the coating resistance and capacitance (for the outer part of the coating system), representing the electrochemical behaviour at high frequencies. The circuit elements R3 and CPE2 are the interfacial oxide resistance and capacitance of the coating material (for the inner part of the coating system), which account for the electrochemical response at low frequencies. CPE-P is the exponent index, which represents the deviation of the capacitance of the passive film from ideal behaviour: CPE-P = 1 implies purely capacitive behaviour of the CPE, and CPE-P = 0 indicates resistive behaviour. As can be seen from the Nyquist plots in figure 8a, a significant increase in the impedance value was observed for stainless steel coated with chromium oxide compared to bare stainless steel, which is an indication of an increase in the corrosion resistance of the samples. The improvement in corrosion resistance is also evident in the Bode plots (magnitude versus frequency) shown in figure 8b, where the impedance values of all chromium oxide coatings lie above that of bare stainless steel. The fitted impedance parameters are presented in table 3 (electrochemical impedance parameters for chromium oxide-coated and uncoated stainless steel exposed to Ringer's solution, obtained using the equivalent circuit shown in figure 9). The chromium oxide-coated sample prepared at an oxygen flow rate of 4 sccm showed the highest impedance value (i.e. the highest corrosion resistance), which is in agreement with the results from the potentiodynamic polarization technique, where the same sample exhibited the lowest corrosion current density and the highest polarization resistance. The values of R3 obtained for chromium oxide prepared under various conditions were much higher than the R2 values, which suggests that the inner part of the coating is likely to be responsible for the corrosion protection provided to the stainless steel substrate by the chromium oxide coatings. The Bode plots shown in figure 8c indicate that chromium oxide-coated and uncoated stainless steel exhibited phase angles near −60° at low frequencies, shifting to a phase angle close to −90° at intermediate frequencies.
This implies a nearly capacitive behaviour, which is typical of a passive material.
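To make the fitting procedure concrete, the sketch below (Python; illustrative only) codes a two-time-constant circuit of the general R1 + [CPE1 || (R2 + [CPE2 || R3])] form. This nested topology is a common choice for coated metals and is assumed here, since only the paper's figure 9 (not reproduced) fixes the exact arrangement; all parameter values are placeholders.

```python
import numpy as np

def z_cpe(omega, q, n):
    """Constant phase element: Z = 1 / (Q (j*omega)^n); n is CPE-P."""
    return 1.0 / (q * (1j * omega) ** n)

def z_coated(freq_hz, r1, q1, n1, r2, q2, n2, r3):
    """Assumed nested model: R1 + [CPE1 || (R2 + [CPE2 || R3])]."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_inner = 1.0 / (1.0 / r3 + 1.0 / z_cpe(w, q2, n2))        # CPE2 || R3
    z_outer = 1.0 / (1.0 / (r2 + z_inner) + 1.0 / z_cpe(w, q1, n1))
    return r1 + z_outer

# Evaluate over the measured range, 10 mHz to 40 kHz (placeholder parameters):
f = np.logspace(-2, np.log10(4.0e4), 60)
z = z_coated(f, r1=30.0, q1=2e-5, n1=0.90, r2=5e3, q2=1e-5, n2=0.85, r3=5e5)
print(f"|Z| at 10 mHz: {abs(z[0]):.3e} ohm cm^2, "
      f"phase: {np.degrees(np.angle(z[0])):.1f} deg")
```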
Mott-Schottky analysis
The Mott-Schottky plots for chromium oxide-coated and uncoated steel exposed to Ringer's solution at 37°C are shown in figures 10 and 11. The chromium oxide coatings prepared under the various deposition conditions gave Mott-Schottky plots with slopes that are all negative, which implies that the oxide layers exhibit p-type semiconductor behaviour. The Mott-Schottky plot for uncoated stainless steel showed that it possesses a duplex structure, i.e. both n-type and p-type semiconductor behaviour. The n-type semiconductor behaviour exhibited by the passive films on stainless steel can be attributed to the iron-rich layer, while the p-type semiconductivity of the passive film is the result of the presence of chromium in the inner layer [29]. The Mott-Schottky plots for the chromium oxide coatings show that increasing the oxygen flow rate during deposition led to a decrease in the slope of the Mott-Schottky straight line and a corresponding increase in the acceptor defect density. The lower the oxygen flow rate during deposition, the lower the acceptor defect density obtained from the capacitance measurements. The lowest acceptor defect density was observed for chromium oxide coatings prepared at an oxygen flow rate of 4 sccm. The structure of the film is likely to have influenced the nature of the passive film formed on the coated samples, with denser films showing a lower defect density.
The uncoated stainless steel sample exhibited a higher defect density when compared with the chromium oxide-coated samples, which is an indication of a higher degree of disorder in the passive native films on the uncoated steel, as illustrated in table 4. The lower acceptor defect densities of the passive films formed on the chromium oxide coatings reflect the better passivation potential of the chromium oxide-coated substrates compared to the bare stainless steel, which exhibited a highly defective passive film.
The Mott-Schottky result is supported by our corrosion experiments discussed above based on potentiodynamic polarization and EIS, in which the chromium oxide-coated samples showed better corrosion resistance than the uncoated steel substrate. A similar observation was previously reported in a Mott-Schottky analysis of chromium oxide-coated and uncoated stainless steel substrates tested in saline solution by the present authors [29].
Ion release measurements
Ringer's solution used during the electrochemical corrosion testing of the chromium oxide coatings prepared under the various deposition conditions was probed for the presence of chromium ions. The results of the atomic absorption spectroscopic measurements showed no trace of chromium ions at the ppm level in the solutions from coatings tested under open circuit conditions and at the lower potentials implants are likely to experience in the human body.
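For completeness, the concentration readout in atomic absorption spectroscopy reduces to a linear (Beer-Lambert) calibration against chromium standards; the sketch below (Python; hypothetical standards and sample absorbance, not this study's data) shows the conversion.

```python
import numpy as np

def aas_concentration_ppm(absorbance, std_conc_ppm, std_absorbance):
    """Convert a measured absorbance to concentration (ppm) via a linear
    calibration line fitted through the standard solutions."""
    slope, intercept = np.polyfit(std_conc_ppm, std_absorbance, 1)
    return (absorbance - intercept) / slope

# Hypothetical chromium calibration standards and one sample reading:
standards_ppm = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
standards_abs = np.array([0.002, 0.041, 0.079, 0.160, 0.395])
sample_abs = 0.004   # a reading indistinguishable from the blank
conc = aas_concentration_ppm(sample_abs, standards_ppm, standards_abs)
print(f"Cr concentration ~ {conc:.3f} ppm")
```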
Substrate-straining adhesion measurements
One hundred crack spacing measurements were taken for each of the samples strained to 5, 10, 15, 20 and 25%. Typical SEM images and the crack frequency distribution observed at various strains for the as-prepared chromium oxide thin films are shown in figures 13a-f and 14a,b. The probability plots for fitting the observed crack spacing distributions are as shown in figure 17.
The SEM images and crack frequency distributions observed at various strains for the films soaked in Ringer's solution for a period of one month are shown in figures 15a,b and 16. The crack spacing measurements and SEM images suggest that saturation occurred at 25% tensile strain, with exfoliation and delamination of the films witnessed at this stage, as depicted in figures 13f and 15b. The crack spacing distribution for the tensile-strained chromium oxide thin films (as-prepared films) on stainless steel was found to fit most closely to the lognormal distribution function among the various probability distribution functions considered, as shown in figure 17a-c, using the statistical software package SPSS. The mean spacing was determined from this function and used in the relationship shown in equation (2.6) to determine the interfacial shear strength of the prepared films. For the films soaked in Ringer's solution, the crack spacing distribution for the samples prepared at forward powers of 300 and 500 W was found to fit closely to the lognormal distribution, while the films prepared at forward powers of 350, 400 and 450 W changed to the Weibull distribution function, as shown in figure 18a-c. A summary of the adhesion measurement results for the as-prepared films and the films soaked in Ringer's solution for one month is shown in figure 19.
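A sketch of the distribution-fitting step (Python with SciPy; not the authors' actual script): measured crack spacings are fitted to lognormal and Weibull candidates, the fits are compared by log-likelihood, and the mean spacing of the better fit is the λ that enters equation (2.6). The sample below is synthetic, standing in for the 100 measured spacings per condition.

```python
import numpy as np
from scipy import stats

def fit_crack_spacings(spacings_um):
    """Fit lognormal and Weibull distributions to crack spacings (um),
    compare by log-likelihood, and return (best_name, mean_spacing_um)."""
    x = np.asarray(spacings_um, dtype=float)
    candidates = {
        "lognormal": (stats.lognorm, stats.lognorm.fit(x, floc=0)),
        "weibull": (stats.weibull_min, stats.weibull_min.fit(x, floc=0)),
    }
    results = {}
    for name, (dist, params) in candidates.items():
        loglik = np.sum(dist.logpdf(x, *params))
        results[name] = (loglik, dist.mean(*params))
        print(f"{name}: log-likelihood = {loglik:.1f}, "
              f"mean = {results[name][1]:.2f} um")
    best = max(results, key=lambda k: results[k][0])
    return best, results[best][1]

# Synthetic stand-in for 100 measured crack spacings (placeholder data):
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.4, sigma=0.4, size=100)   # micrometres
best_name, lam = fit_crack_spacings(sample)
print(f"better fit: {best_name}, mean crack spacing = {lam:.2f} um")
```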
As can be seen from figure 19, a decrease in interfacial shear strength was observed for the films soaked in Ringer's solution for one month compared with the as-prepared films. Ogwu et al. [37,38] and Chandra et al. [39] previously investigated the influence of biological fluid on the crack spacing distribution and the adhesion strength of Si-DLC films prepared on steel substrates. Those authors likewise observed lower interfacial shear strength for films exposed to biological fluids or saline solution than for as-prepared samples.
They proposed that the presence of nanoporosities in the DLC/Si-DLC films deposited on steel substrates could have acted as a source of crack advancement in the films during the adhesion test [37-39]. The authors suggested that when the films are exposed to biological fluids prior to the substrate-straining test, these defects or nanoporosities aid the penetration of fluid to the interface between the films and the substrate, which affects the adhesion and the residual stress distribution in the films. In our current investigation, the above mechanism is suggested as being responsible for the decrease in adhesion strength of the films soaked in Ringer's solution. We observed a crack spacing distribution that fitted the lognormal distribution closely for all of our as-prepared samples. However, on exposure to Ringer's solution for one month, a change to the Weibull distribution function was observed, with the exception of the films prepared at 300 and 500 W, which still fitted the lognormal function closely. This change is most probably due to variations in the microstructure of the coatings and in the nanoporosities present within the chromium oxide films, resulting from the deposition conditions used. The interfacial strength of the prepared chromium oxide coatings obtained in this study is comparable to those reported for DLC and Si-DLC coatings by various authors in the literature [37-39].
Figure 19. The change in interfacial shear strength (GPa) for as-prepared chromium oxide coatings and coatings exposed to Ringer's solution for a period of one month.
Conclusion
The corrosion behaviour of chromium oxide coatings prepared by reactive magnetron sputtering has been investigated at 37°C in Ringer's solution. The corrosion results based on potentiodynamic polarization and EIS measurements showed an improvement in corrosion resistance for chromium oxide-coated stainless steel samples compared with the bare stainless steel substrate. The corrosion current density for chromium oxide-coated stainless steel is much lower than for uncoated stainless steel, suggesting the presence of a more resistant passive film on the coatings. The Mott-Schottky analysis revealed that the chromium oxide-coated stainless steel samples exhibit p-type semiconductor behaviour with a lower defect density than bare stainless steel, which exhibited highly defective passive films characterized by a duplex structure (n-type and p-type semiconductor behaviour). The chromium oxide coating prepared at an oxygen flow rate of 4 sccm showed the lowest defect density and the best corrosion resistance in Ringer's solution at 37°C. The as-prepared chromium oxide films showed good adhesion to the steel substrate, but a reduction in the interfacial adhesion value was observed for coatings exposed to Ringer's solution prior to the adhesion test. The presence of nanopores or pinholes in the coating is thought to have contributed to this change in adhesion strength, as these pinholes can act as pathways for fluid penetration onto the substrate, thereby weakening the coating/substrate system. The interfacial strength of the prepared chromium oxide coatings obtained in this study is comparable to that reported for Si-DLC films. The ion release results from the atomic absorption measurements indicate that no chromium ions were released from the chromium oxide-coated samples into Ringer's solution at the ppm level, for samples tested under open circuit electrochemical conditions and at the relatively low corrosion potentials expected on implants in the human body. The corrosion and adhesion results, together with the negligible chromium ion release into Ringer's solution, open the door to evaluating immune cell activation and the other biocompatibility tests required for orthopaedic and other possible medical implant applications of chromium oxide coatings. Chromium ion release into body fluids in vivo remains an outstanding issue in patients with orthopaedic implants.
Data accessibility. All of the data in this investigation have been reported in the paper and are freely available. Authors' contributions. A | 2017-08-28T09:29:48.640Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "2206830263e5499a1d5ce943fd4623729f78d403",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.170218",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3b7b4ee1771dabdb6831e24601980b843bceaa2",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
88231040 | pes2o/s2orc | v3-fos-license | Leptophis ahaetulla (Linnaeus, 1758) (Serpentes, Colubridae): first record for the state of Rio Grande do Sul, Brazil
We present the first record of Leptophis ahaetulla for the state of Rio Grande do Sul, Brazil. Between November and December 2014, and in February 2015, three specimens were found: one male, found dead on a highway at the Parque Estadual do Espinilho, a conservation unit located in the municipality of Barra do Quaraí; and two females, collected in an anthropic landscape of the Salso farm in the municipality of Uruguaiana, Rio Grande do Sul, Brazil. Meristic data and coloration enable the identification of these specimens as Leptophis ahaetulla marginatus.
The Neotropical snake genus Leptophis Bell, 1825 encompasses semi-arboreal and diurnal snakes with slender and elongated bodies, a head well-distinguished from the neck, and a coloration pattern that is predominantly green, copper or bronze, with or without longitudinal stripes or narrow transverse bands (Albuquerque 2008). Eleven species of the genus Leptophis are currently recognized, broadly distributed in Central and South America (Uetz and Hošek 2015).
Leptophis ahaetulla (Linnaeus, 1758) presents the broadest geographical distribution in this genus, occurring in North America, Central America, and most of South America, from Mexico to northeastern Argentina and northern Uruguay (Oliver 1948; Giraudo 2001; Carreira et al. 2005; Albuquerque 2008), with 10 currently recognized subspecies (Uetz and Hošek 2015). In Brazil, the species is reported to occur from the extreme north to the state of Paraná, occupying a wide range of biomes, including the Amazon, Pantanal, Cerrado, Caatinga and Atlantic Forest (Cunha and Nascimento 1978; Vanzolini et al. 1980; Strüssmann and Sazima 1993; Martins and Oliveira 1999; Colli et al. 2002; Albuquerque et al. 2007; Albuquerque 2008). Four subspecies of L. ahaetulla are thought to occur in Brazil (L. a. ahaetulla, L. a. liocercus, L. a. marginatus, and L. a. nigromarginatus) (Costa and Bérnils 2014), and L. a. marginatus has the southernmost geographic distribution (Giraudo 2001). According to Albuquerque (2008), L. a. marginatus (referred to as L. marginatus in his thesis) is reported to occur from southeastern Bolivia, across Mato Grosso and Mato Grosso do Sul, to western São Paulo in Brazil, southward through Uruguay and Paraguay into northern Argentina. Recently, three specimens of L. ahaetulla were collected in the Pampa biome in Rio Grande do Sul state. The color pattern described by Oliver (1948) and Albuquerque (2008) for the subspecies (but see comments above) listed as L. a. marginatus matches that found in these specimens, which extend the distribution of the subspecies south-westward in Brazil and record it for the first time in the state of Rio Grande do Sul. Here, we describe these new specimens.
Three specimens of L. ahaetulla were found in the domains of the Pampa biome in Rio Grande do Sul state. The first individual was caught by Christian Beier on 15 November 2014. The specimen was found dead on the BR 472 highway (km 642) (30°11ʹ42.0ʺS, 057°29ʹ20.5ʺW; Figure 1), on a stretch that passes through the Parque Estadual do Espinilho, a conservation unit under integral protection located in the municipality of Barra do Quaraí. The vegetation of the Parque Estadual do Espinilho is Steppe Savanna Park (IBGE 2012), with a significant presence of espinillo (Acacia caven), nandubay (Prosopis affinis), and black mesquite (Prosopis nigra) (Galvani and Baptista 2003). This specimen is deposited in the reptile collection of the Museu de Ciências e Tecnologia da Pontifícia Universidade Católica do Rio Grande do Sul, under catalogue number MCP 19362. Two other individuals were caught by Giancarlo Bilo between 19 December 2014 and 6 February 2015 in the municipality of Uruguaiana, Itapitocai locality, at the Salso farm (29°48ʹ19.28ʺS, 057°05ʹ46.88ʺW; collecting permit #24041-2 granted by Instituto Chico Mendes-ICMBio), and are stored in the Herpetological Collection of the Universidade Federal de Santa Maria (ZUFSM 3278 and ZUFSM 3280, respectively). The Salso farm is an anthropic landscape modified by agricultural activities (rice crops), and these specimens were collected in the surroundings of human dwellings.
The specimen collected in the municipality of Barra do Quaraí is a male with a snout-vent length of 581 mm and a tail length of 348 mm; dorsal scales arranged in 15-15-11 rows (0-13-9 keeled rows); 157 ventrals; and 138 paired subcaudals. The specimen presents divided nasals; 8 right supralabials (4th and 5th in contact with the orbit); 9 left supralabials (4th, 5th, and 6th in contact with the orbit); 10 infralabials on each side (six in contact with the anterior chin shields); a single preocular; two postoculars on each side; anterior temporals 1/1; and posterior temporals 2/2.
The female specimens (ZUFSM 3280 and ZUFSM 3278) collected in the municipality of Uruguaiana possess, respectively, the following character states: snout-vent length of 680 mm and 580 mm; tail length of 348 mm and 50 mm (ZUFSM 3278 has an incomplete tail); dorsal scales arranged in 15-15-11 rows (0-13-9 keeled rows); and 157 and 159 ventrals; ZUFSM 3280 has 138 paired subcaudals. The specimens present separated nasals; 8/8 supralabials (4th and 5th in contact with the orbit); 10/10 infralabials (six in contact with the chin shields); a single preocular; and two postoculars on each side. In all specimens, the dorsal coloration is metallic green on the head and anterior region of the body, gradually changing to bronze toward the tail (Figure 2). The cephalic shields are margined with black (Figure 3), and a black postocular line extends from the upper edges of the supralabials to the temporals (Figure 4). The venter is lightly colored. The meristic characters and dorsal coloration pattern described by Giraudo (2001), Carreira et al. (2005), and Albuquerque (2008) for the taxon listed as L. a. marginatus match those of the specimens herein recorded for the first time for Rio Grande do Sul.
The record of L. ahaetulla in Rio Grande do Sul adds to the reptile fauna reported from the state and extends the species' distribution southward in Brazil. Based on the distribution of collecting localities, Leptophis ahaetulla from Rio Grande do Sul state occurs in the western region of the Uruguayan Savanna, originally characterized as 'grasslands with espinillo' by Hasenack et al. (2010).
| 2019-03-31T13:42:29.102Z | 2016-02-06T00:00:00.000 | {
"year": 2016,
"sha1": "0f2858b75fb3664114585eb768b1606d74fcf640",
"oa_license": "CCBY",
"oa_url": "https://checklist.pensoft.net/article/19440/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f2858b75fb3664114585eb768b1606d74fcf640",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
250109191 | pes2o/s2orc | v3-fos-license | Service Robots in Long-Term Care: A Consumer-Centric View
Service robots with advanced intelligence capabilities can potentially transform servicescapes. However, limited attention has been given to how consumers experiencing vulnerabilities, particularly those with disabilities, envisage the characteristics of robots' prospective integration into emotionally intense servicescapes, such as long-term care (LTC). We take an interdisciplinary approach conducting three exploratory studies with consumers with disabilities involving Community Philosophy, LEGO® Serious Play®, and Design Thinking methods. Addressing a lack of consumer-centric research, we offer a three-fold contribution by 1) developing a conceptualization of consumer-conceived value of robots in LTC, which are envisaged as a supporting resource offering consumers opportunities to realize value; 2) empirically evidencing pathogenic vulnerabilities as a potential value-destruction factor to underscore the importance of integrating service robots research with a service inclusion paradigm; and 3) providing a theoretical extension and clarification of prior characterizations of robots' empathetic and emotion-related AI capabilities. Consumers with disabilities conceive robots able to stimulate and regulate emotions by mimicking cognitive and behavioral empathy, but unable to express affective and moral empathy, which is central to the care experience. While providing support for care practices, for the foreseeable future, service robots will not, in themselves, actualize the experience of "being cared for."
Introduction
Service robots are a physically embodied form of artificial intelligence (AI) that are attracting exponentially growing attention (McLeay et al. 2021). Defined as "system based autonomous and adaptable interfaces that interact, communicate and deliver service" (Wirtz et al. 2018, p. 909), robots offer the prospect of remarkable and revolutionary changes in service delivery and experiences (Mende et al. 2019). A key challenge, however, will be to ensure that these changes increase the provision of fair opportunities and choices for receiving and co-creating value from service (Williams et al. 2020). This challenge emerges from the contemporary shift in research and practice toward service inclusion-a paradigm that requires service providers to anticipate, diagnose, and rectify problems that might preclude or disadvantage some consumers from realizing value in a service experience (Boenigk et al. 2021; Fisk et al. 2018). Adopting a service inclusion perspective in early stages of research on robots' integration into services can facilitate avoiding problems that have characterized "traditional" servicescapes (e.g., non-accommodation for visual impairments in retail design; Baker 2006).
A service inclusion perspective is inherently consumer-centric, in that it requires service concepts, systems' architecture, and frontline interactions to cater for the precise needs and circumstances of all potential consumers. In particular, this includes consumers who may be disadvantaged (through belonging to a group potentially targeted for discrimination, such as ethnic minorities or women) or experience vulnerabilities (through lacking power and control) in service exchanges (Fisk et al. 2018). A focus on service inclusion can enable service providers to anticipate and address aspects of a servicescape that might preclude these consumers from realizing (i.e., receiving and co-creating; Fisk et al. 2018) value and thus enhance social justice and consumer wellbeing (Anderson and Ostrom 2015). However, the factors impacting the (non)acceptance of robotic interactions in servicescapes are largely understudied from the consumer perspective. This is the case for broad consumer populations (Xiao & Kumar 2021) and specifically for consumers experiencing vulnerabilities. We argue that, if service research is to answer calls for making a positive impact on the lives of vulnerable consumers, it must address the fundamental question of whether consumers consider robotic service agents to provide significant potential to relieve or, conversely, exacerbate vulnerabilities. The current absence of a consumer perspective represents a significant gap in our understanding of how, and under what conditions, robots might facilitate consumers realizing greater or lesser value in servicescapes.
Emotion-intense services, such as long-term care (LTC), provide a compelling setting for beginning these explorations. Robots are perceived as a key potential solution to a looming crisis in the LTC service sector, in which rapidly increasing demand 1 will be accompanied by significant labor shortages (Osterland 2021; Spetz et al. 2015). Care is consumed when people face threats to their wellbeing. The care service experience can thus be emotionally fraught and has the inherent potential to ameliorate and/or exacerbate consumer vulnerabilities (Berry et al. 2020). Prior research on the integration of non-robotic technologies in care service demonstrates dual, sometimes diametrically opposed, changes in the value consumers are able to realize. For instance, in medical care, electronic patient records can improve the continuity of care provision, but also limit consumers' ability to control data informing decisions on the type of care they receive (e.g., curative vs palliative; Berry et al. 2020).
The small number of studies concerned with robot integration into care service highlights the possibility of similar dual impacts. For example, robots may create value by motivating physical exercise but destroy value by invading space (e.g., Čaić, Oderkerken-Schröder and Mahr 2018; Deutsch et al. 2019). Uncertainties regarding whether AI will replace or augment human care agents are also considered to be a significant factor in continued consumer resistance toward the deployment of robots in this context 2 (Longoni, Bonezzi and Morewedge 2019; Van Doorn et al. 2017). While these initial insights are drawn from the contexts of medical care (Agarwal et al. 2020) and care for the elderly (Čaić et al. 2018; Čaić, Mahr and Oderkerken-Schröder 2019; Melkas et al. 2020), holistic consumer-informed knowledge concerning the implications of robots' integration into care servicescapes is in its infancy. Extant research has largely overlooked the robot-integrated LTC servicescape-a particularly complex context that extends beyond medical and elderly care. LTC incorporates a wide range of services (e.g., personal, social, and medical; Grabowski 2008) as it is often required by anyone who faces health circumstances without "quick cure" possibilities, regardless of age (e.g., stable disabilities or long-term illness with prognoses of recovery or deterioration). Against the backdrop of crisis in LTC, the deployment of AI and robotic technology is vital in addressing resource shortages (Tan and Taeihagh 2020). Yet consumer willingness to accept robots as caregivers remains uncertain (Deutsch et al. 2019).
In this paper, we-a team of consumer, service marketing, sociology, healthcare technology, and engineering design researchers-explore how consumers with disabilities envisage the potential value of robots in LTC, and the vulnerability-inducing factors that may impact their acceptance. We ground our study in the perspectives of consumers with disabilities because it is a population that has been subject to significant marketplace exclusion and constitutes a group of potential LTC consumers with a heightened propensity for experiencing vulnerability (Fisk et al. 2018; Higgins 2020). Adopting a consumer with disabilities' perspective can thus offer valuable directions for designing inclusive service concepts for complex, emotion-intense service, and for care service in particular.
Our objective is to develop a consumer-centric conceptualization that illuminates how the integration of robots in LTC service might contribute to (or detract from) consumer opportunities to realize value in terms of enhanced wellbeing. We achieve this objective by drawing on three multi-method qualitative studies, conducted as part of a wider and ongoing program of research exploring robots in the context of LTC. Following a grounded theory approach (Gioia, Corley and Hamilton 2013; Strauss and Corbin 1998), we examine insights from consumers with disabilities concerning what care constitutes as a service experience and how robots are envisaged in this context. We synthesize the emergent consumer characterizations of robots in LTC by drawing on the notion of conceived value, that is, how value is envisaged in the absence of prior experience (see Hardyman, Kitchener and Daunt 2019; McGinn 2004). We theorize that the consumer-conceived value of LTC robots is mitigated by conceptions of potential pathogenic vulnerabilities-a perverse effect of a change aimed at ameliorating existing vulnerabilities, whereby new vulnerabilities arise (Lange, Rogers, and Dodds 2013). Pathogenic vulnerabilities emerge as a key factor influencing consumer conceptions of how robots might enhance or detract from the realization of value.
Our study responds to calls for interdisciplinary research that explores how inclusive technology-integrated service (re)design can offer opportunities to enhance value-centered care and improve consumer wellbeing, particularly amongst those experiencing vulnerability (Anderson, Nasr and Rayburn 2018). Specifically, we provide three distinct theoretical contributions. First, by conceptualizing the consumer-conceived value of robots' integration into the LTC servicescape, we show that robots are envisaged as a supporting and (to an extent) emotion-regulating resource, which can a) augment a human-facilitated LTC service offering and b) postpone or reduce the need to consume LTC service. Second, by empirically evidencing that consumers conceive robots might mitigate existing vulnerability, whilst also potentially inducing pathogenic vulnerability experiences, we offer an explanation for consumer resistance to the idea of robots in LTC. In this respect, the service inclusion perspective provides important insights into the factors informing robots' (non)acceptance. Third, by showing that consumers do not envisage AI to be capable of affective and moral dimensions of empathy, we illuminate a deep-seated belief amongst consumers that "robots cannot care." This provides an important clarification of prior theorizations of robots' empathetic and emotion-related AI capabilities (Huang and Rust 2021; Wirtz et al. 2018).
Informed by guidance on presenting grounded theory-based studies (Gioia et al. 2013), the paper follows a conventional structure. We first present literature concerned with concepts that informed our conceptualization. We then outline the data collection and analytical procedures, followed by findings that are integrated in a conceptualization of consumer-conceived value of robots' integration in a LTC servicescape. For clarity, our empirical analysis guided initial consultations with the literature, and our conceptualization was developed through iterating between findings and literature.
Conceptual Background
Service Inclusion: An Important Lens for Conceptualizing Robot-Integrated Service

Advancing socially just service systems is a vital priority for contemporary service research (Field et al. 2021). Crucially, this entails going beyond "simply replicating established research with vulnerable groups of consumers" [and] "tackling new problems faced by these consumers that have the potential to improve their quality of life and wellbeing" (Huang et al. 2021, p. 460). Addressing this priority is at the heart of the concept of service inclusion (Fisk et al. 2018), which stems from a transformative service research family of initiatives (e.g., Anderson and Ostrom 2015; Boenigk et al. 2021; Sandberg et al. 2021). Exclusion from service can harm consumers' wellbeing by depriving them of opportunities to fully realize (e.g., receive and co-create) value as a result of systemic biases, discrimination, and customer vulnerability (Fisk et al. 2018). Consequently, service inclusion entails a multi-level service (re)design paradigm that targets the causes of exclusion to improve consumer wellbeing. It does so by developing 1) inclusive service concepts-identifying what consumers experiencing exclusion need and want and developing offerings that eliminate or mitigate causes of exclusion; 2) service systems that promote inclusion through system architecture and navigation; and 3) processes for inclusive service interactions (Fisk et al. 2018). Taking theoretical direction from the service inclusion perspective, this paper is founded on the premise that the design of robot-integrated service requires consumer-informed service concepts.
Because service inclusion highlights the importance of grounding service concepts in consumer views of what constitutes value, and what might preclude consumers from realizing value in a given service, it is necessary to consider how consumer perspectives on the value of robots in care service can be theorized. There is broad consensus that value is situation or context-specific, determined by the consumer, and represents an outcome of improved wellbeing for the consumer (Anderson and Ostrom 2015; Lusch and Vargo 2014; Zeithaml et al. 2020). The resources service providers deploy offer means by which consumers might create value through interactions with these resources (Lusch and Vargo 2014).
The concept of perceived value is defined in health and social care contexts as "perception of benefits received for burdens endured" (Berry et al. 2020, p. 1). It encapsulates the notion that a service concept offering consumers meaningful value rests on understanding the service resource(s) characteristics that they consider important for enhancing, as opposed to burdening, their wellbeing (Anderson, Narus, and van Rossum 2006). However, where consumers have not experienced a given service resource (e.g., where new technologies, such as AI, are yet to be designed and deployed within a particular service), conceived value is arguably a more pertinent notion to consider. Conception is defined as "an idea of what something or someone is like" (Cambridge Dictionary 2021). Hence, conceived value entails how consumers envisage the nature of value in a given experience, and how this might be realized via interaction with particular service resources (McGinn 2004; see also Hardyman, Kitchener, and Daunt 2019).
The development of inclusive service requires an understanding of what may deprive consumers in disadvantaged and/or vulnerable circumstances from realizing value (Fisk et al. 2018). Therefore, in the context of robot-integrated services, it is necessary to consider how consumers conceive robots as new resources that impact (improve or limit) their opportunities for realizing value. In a consumption encounter, vulnerability entails a dependency on marketized (private or public) systems providing goods and services to enable the individual to function, whereby a lack of or restriction in access or control over these resources renders a person unable to realize value (Fisk et al. 2018; Hill and Sharma 2020). That is, (in)sensitive service design and delivery can increase consumer vulnerability if it prohibits the freedom of choice necessary for receiving a fair service. Conversely, it can "render consumers less vulnerable" by eliminating barriers and empowering consumers to realize value from a service offering (Baker, Gentry and Rittenburg 2005; Shultz and Holbrook 2009, p. 126).
On this basis, and to gain a complete understanding of consumer-conceived value in robot-integrated LTC servicescape, we explore consumer conceptions of a) care, in terms of what constitutes value in an overall care service experience and the vulnerabilities that might preclude the realization of value in care; and b) whether, how and why the integration of robotic care service resources might enhance or detract from the realization of value in care. To situate our inquiry in the extant literature, the following sections synthesize concepts, findings, and debates in two key areas: i) care itself and ii) the integration of robots into LTC.
Understanding Consumer-Conceived Value in Care Service
Value in care consumption experience: an integrated conceptualization. Defining care as a social experience remains a contested terrain (Edwards 2009). Here, we draw on the seminal ethical perspective on care developed by Tronto (1993), and subsequently extended by Puig de la Bellacasa (2017), to define care in terms of the three components by which it is actualized: cognitive care (recognizing a need for care), emotional care (taking care of: having a concern for and assuming the responsibility to provide care and feeling cared for: experiencing care as a response to one's needs), and care action (participating in care as a giver or receiver). Actualization of care is generally recognized as activities by those participating in care (givers-Wilkes and Wallis 1998; receivers-Söderhamn, Dale, and Söderhamn 2013) whereby they draw on abilities, competences, ethical codes, and resources to materialize care with a goal of restoring, sustaining, or improving wellbeing. Three pertinent considerations follow: 1) actualizing care incorporates emotions with particular practices; 2) care is co-actualized through interactions between givers and receivers; and 3) determining whether care has been actualized requires consideration of the outcome(s) as experienced by care recipient(s). Hence, the value in care as a service offering rests upon how those being cared for (care consumers) conceive of care attributes essential for their wellbeing.
A deeper understanding of what underlies care recipients' conception of essential care attributes and determination of whether actualization of care has taken place is afforded by the concept of empathy. Empathy is often regarded as a primary attribute of good care by those receiving it (Mercer and Reynolds 2002). Simply, empathy represents a foundation of a helpful interaction sought to satisfy the basic human need of being understood (Kunyk and Olson 2001). In assimilating the extant literature to consider empathy in the context of care, Jeffrey (2016) identifies four interacting dimensions that correspond with the care actualization components discussed above: 1) cognitive empathy-comprehending another's emotions; 2) affective empathy-sharing and re-experiencing the feelings of others; 3) behavioral empathy-acting and communicating in therapeutic ways, as needed by others; and 4) moral empathy-genuine compassion for others and an altruistic motivation to improve their wellbeing (Irving and Dickson 2004; Morse et al. 1992). The opportunity to realize an experience of an empathetic, therapeutic interaction that engages their perspectives in determining and pursuing wellbeing outcomes is key to consumer conceptions of value in care. A quote from a healthcare consumer in Berry and Bendapudi (2007, p. 113) eloquently illustrates this: "We want doctors who can empathize and understand our needs as a whole person.
[…] every doctor needs to know how to apply their knowledge with wisdom and relate to us." Together, the integrated conceptualization of care and empathy (in terms of their shared cognitive, affective, behavioral, and ethical/moral dimensions) provides a useful theoretical foundation for informing the design of care service that offers consumers opportunities to realize value. There is a recent momentum toward (re)designing care service to foreground consumer perspectives on value in care, for diagnosing and addressing factors that maximize or limit the wellbeing outcomes-and, consequently, value-sought by consumers across the entire care experience (Anderson et al. 2018; Berry et al. 2020). To conceptually explicate how these factors occur, we consider consumer vulnerability in care servicescapes.
Consumer vulnerability in care servicescapes. Care is a service whereby consumer expectations and evaluations of both utilitarian and experiential dimensions of service resources are emotionally intense (Agarwal et al. 2020; Berry et al. 2020). Often, the need for care is associated with experiencing concern, perceived helplessness due to lacking specialist knowledge and skills required for (some) care practices, and anxiety over potential outcomes (Anderson et al. 2018; Berry et al. 2020). Hence, the wellbeing outcomes consumers seek in care servicescapes are multidimensional, ranging from physiological, to psychological, to affective (Agarwal et al. 2020).
Further, consumers seek to be engaged in decisions regarding their care and its design, yet this engagement can place a variety of burdens upon them (Anderson et al. 2018; Berry et al. 2020). These burdens can include regulating emotions associated with needing and receiving care, identifying care provider(s) that feel safe and trustworthy, managing records or prescribed self-care practices, and preserving a sense of self and independence (Agarwal et al. 2020; Baker et al. 2005; Berry et al. 2020). From this perspective, another important wellbeing outcome sought from the care service (and thus, a component of conceived value) is for some of these burdens to be alleviated by mitigating consumer vulnerability potential (Anderson et al. 2018; Berry et al. 2020). That is, the burdens people might endure do not, in themselves, necessarily render a person vulnerable as a consumer (Sandberg et al. 2021). However, the propensity for experiencing consumer vulnerability is inherently significant, particularly if one requires care for a serious health condition (Agarwal et al. 2020; Berry and Bendapudi 2007). Consumer vulnerability occurs if the encounters with care service limit a person's ability to access care resources, and to exercise agency in which resources are utilized, and how, to transform burdens into manageable circumstances (Baker et al. 2005; Sandberg et al. 2021). Hence, the (re)design of service concepts entails adaptation of the care resources' offering such that consumer vulnerability sources are mitigated, and conceptions of value in care are realized.
The potential impacts of service (re)design initiatives should be examined holistically. Even well-meaning efforts to mitigate existing vulnerabilities might unintentionally give rise to pathogenic vulnerabilities-a perverse effect of a change in an external environment aimed at ameliorating existing vulnerabilities, whereby new vulnerabilities are generated (Lange et al. 2013). Sandberg et al. (2021) recently exemplified such effects, albeit without pathogenic vulnerability underpinnings, in a study of people consuming care in nursing homes. In their findings, mitigating the circumstances of deteriorating physical security (e.g., being at risk of sustaining injuries) increased consumer feelings of diminishing autonomy, and vice versa. Hardyman et al. (2019) have shown that consumer conceptions of whether a care service resource enhances or limits the conceived value of the care experience are dependent on the conceived nature of the resource, which we understand to be the total of this resource's characteristics and role in the experience. Thus, consumer evaluations of a care service resource will entail appraisal of 1) benefits a resource offers, including whether it might mitigate existing vulnerabilities, vs 2) pathogenic vulnerabilities that may arise from the resource.
Holistic consideration of the potential impact of resource innovations is particularly crucial as care service continues to undergo what Rust and Huang (2014) term the "service revolution," facilitated by the widespread integration of new technology resources (Berry et al. 2020). Technology resources offer promise for improving the efficiency and effectiveness of care services, such as safety, convenience, and consistency in accessing consumer records and care providers, analyzing which consumers are benefitting from, or are disadvantaged by, particular care options, safeguarding against diagnostic errors, and mitigating work pressures for caregivers (Agarwal et al. 2020; Berry and Bendapudi 2007). Concurrently, integration of technology resources might inadvertently raise barriers to access for culturally and socio-economically underserved consumer populations (Agarwal et al. 2020) and minimize or eliminate the relational, empathy-centered care attributes (Anderson et al. 2018).
In summary, understanding consumer conceptions of the value that the integration of robots into care service might create, and the potential barriers to realizing this value, requires contextualized consideration of how robots are envisaged as care service resources. With this in mind, we next introduce the context for our study, long-term care (LTC) service, and review extant knowledge on consumer perspectives of robot-integrated LTC servicescape.
Long-Term Care (LTC) and Current Knowledge on Robots' Integration in LTC Servicescape
LTC is representative of care servicescapes that are associated with prolonged consumption and complex, large-scale systems where the potential for consumer vulnerability is greatest (Berry et al. 2020; Spanjol et al. 2015). LTC incorporates a wide-ranging set of services required over a sustained period of time by consumers whose abilities to perform daily living tasks, such as preparing food or dressing, are permanently or temporarily (with a prolonged effect) impaired (Grabowski 2008). This may include people living with stable disabilities or a long-term illness, or those with diminishing capacity, as in the case of elderly people, or people with deteriorating health conditions. A majority of people may require LTC at some point in their lives; as Kane (2001, p. 295) asserts, "LTC is ordinary life." 3 While systems and structures of LTC service vary across national contexts, they can be broadly classified as personal and social support, treatment maintenance, and rehabilitative and palliative care; their provision can thus include elements of both social and medical care (Grabowski 2008; Spetz et al. 2015). According to Kaye, Harrington, and LaPlante (2010), in the USA alone, 10.9 million people (half of whom are non-elderly) consume LTC in non-institutional settings, residing in communities; while 1.8 million (predominantly elderly) people consume LTC in institutional settings, residing in care facilities. Globally, 2.3 billion people will require LTC in 2030, with demand expected to continue rising due to age longevity and the growing prevalence of long-term conditions (International Labor Organization 2018; OECD 2021).
The mounting concerns over the "crisis of LTC" drive surging global interest in integrating robotic technologies in LTC service (International Labor Organization 2018; Osterland 2021). LTC robotics is a rapidly growing field of research and industry (Maalouf et al. 2018), and a stream of robots for rehabilitation, hospital, and home-based care has entered, or is about to enter, the market and, consequently, the LTC servicescape (Kyrarini et al. 2021).
However, nascent evidence highlights that consumer recognition of the benefits robots can offer is accompanied by a persistent resistance to their deployment in LTC service. For example, already-developed robots have been withdrawn from production after being rejected by consumers (Wang et al. 2017; also see Broadbent, Stafford and MacDonald 2009). This resonates with service literature assertions that "the factors impacting customer acceptance of robotic interactions in the service context, and the factors that impede adoption need to be thoroughly addressed but remain largely understudied thus far" (Xiao and Kumar 2021, p. 13). A concurrent trend is criticism of "traditional" LTC services' design and delivery for prioritizing care providers' perspectives, with calls for consumer-centric (re)design (Anderson et al. 2018; Batavia 2002). Hence, grounding the development of robot-integrated LTC service in how consumers conceive the value of robots in LTC, and the potential barriers to realizing this value, is crucial to both alleviating the "LTC crisis" and advancing service inclusion.
Robots' characteristics (which incorporate capabilities and attributes-see Wirtz et al. 2018) are highlighted as central to how humans (consumers and/or service staff) will conceive of, and respond to, integration of robots in servicescapes (Simon, Neuhofer, and Egger 2020; Wirtz et al. 2018). Yet consumer-informed conceptualizations of robot characteristics and their role in LTC service experiences are sparse in service research. A handful of studies theoretically propose that in complex, emotion-laden service contexts, such as medical care, consumers may expect robots to possess specific characteristics addressing a wide variety of functional, socio-emotional, and relational needs (Mende et al. 2019; Van Doorn et al. 2017).
The general service literature concerned with systematically characterizing AI capabilities for service delivery offers complementary perspectives (Van Doorn et al. 2017; Wirtz et al. 2018). Huang and Rust (2021), for example, conceptualize three levels of AI capabilities, whereby each consecutive level is more complex: 1) mechanical AI involves capabilities related to standardized routine tasks; 2) thinking AI integrates mechanical, analytical, and intuitive capabilities through which the service actions AI performs can be personalized; and 3) feeling AI is a futuristic projection of technology that, along with mechanical and thinking AI, will possess capabilities to "recognize, emulate and respond appropriately to human emotions" (Huang, Rust and Maksimovic 2019, p. 46). As there have been few applications of thinking and feeling AI so far (Čaić et al. 2019), whether and how consumers perceive these capabilities to add to, or detract from, the value they might realize from service experiences is little understood and subject to "considerable debate" (Wirtz et al. 2018, p. 913).
The lack of consensus concerning how consumers envisage the value potential of service robots in the service literature can be attributed to the dominance of conceptual and experimental approaches examining consumer responses to robot(s) with predetermined service functions in contexts characterized by relatively simple, short-term interactions, such as restaurants, hotel or airline check-in, and shop assistance (McLeay et al. 2021). Recent comprehensive literature reviews (see Mende et al. 2019; Xiao and Kumar 2021), and studies published since these reviews, highlight that current research mainly focuses on the technical and functional attributes informing robot acceptance, usually utilizing quantitative methods (Jörling, Böhm and Paluch 2019; Xiao and Kumar 2021). Indeed, only 4 of the 43 studies in Mende et al.'s (2019) and Xiao and Kumar's (2021) reviews adopt open, qualitative approaches that afford consumer-informed, contextualized explorations.
An interdisciplinary review of the literature on LTC robots, conducted as part of this study (presented in Appendix 1, supplementary online material), provides an overview of the current state of knowledge in this area. Studies in the context of social care offer useful conceptual and empirical insights into the ambidextrous nature of consumer sentiment concerning robots' characteristics and their role in realizing value in care (Čaić et al. 2018; Henkel et al. 2020; Longoni et al. 2019). For instance, a robot's cultural competency can enhance the emotional wellbeing of elderly care home residents (Papadopoulos et al. 2021). However, empirical research (e.g., Wang et al. 2017) mainly focuses on understanding the perceptions of specialists and caregivers. A small selection of studies delineate forms of vulnerability experienced by specific consumer groups (e.g., elderly, young consumers; Henkel et al. 2020; Papadopoulos et al. 2021), but rarely examine the relationship between vulnerability and robots' (non)acceptance.
Most empirical research has focused on robots that exhibit low levels of primarily mechanical AI capabilities, in contrast to many recent conceptual papers in leading service journals that call for consideration of more advanced capabilities (Huang and Rust 2021; Van Doorn et al. 2017). In comparison to the general services literature that is starting to explore attributes informing robot acceptance (Jörling et al. 2019; Simon et al. 2020), most studies on robots in LTC lack a specific focus on attributes, particularly regarding how social attributes, in comparison to functional attributes, may influence consumer responses (exceptions include Čaić et al. 2018, 2019). A handful of studies explore how anthropomorphism (humanoid vs non-humanoid) attributes affect trust (Erebak and Turgut, 2019), and how behavioral (speech, mobility), appearance (size), and functional attributes relate to wellbeing (Henkel et al. 2020).
Several studies have sought to gain empirical insights to address very specific practical problems (e.g., Deutsch et al. 2019; Melkas et al. 2020; Wang et al. 2017), which, whilst of value, are necessarily limited in terms of theoretical advancement. Few efforts adopt a service inclusion, or-more broadly-transformative service research perspective to explicitly examine or develop customer-centric service concepts, systems, and interactions, following a multi-level service design framework (see the 3 inclusive service design columns in Appendix 1; Henkel et al. 2020 is an exception). Robotic design research is increasingly adopting user-centered (or participatory) approaches to accommodate consumers' personal, environmental, and social experiences and contexts alongside technological solution(s) (Ármannsdóttir et al. 2020). Yet, user perspectives continue to be mainly restricted to evaluation of a product that "researchers have envisioned and developed for a certain purpose or task" (Reich-Stiebert, Eyssel and Hohnemann 2020, p. 228), leading to a potential mismatch. For example, Bradwell et al. (2019) demonstrate striking differences in the visions of companion robots held by older adults and roboticists. The majority of empirical service research mirrors the focus on consumers' perceptions of an already-developed robot (Longoni et al. 2019; Papadopoulos et al. 2021).
In our exploratory study, we focus on how consumers with disabilities conceive the value of robots' integration into the LTC servicescape. We thus seek to extend the body of knowledge summarized in previous paragraphs by eliciting consumer perspectives that were not restricted by specified robot design, appearance, or functionality/task orientation, and by reaching beyond institutional LTC and elderly consumer group boundaries. Our reasoning specifically builds upon the contributions by studies that examined value creation/destruction potential in robots' integration into social care (e.g., Čaić et al. 2018; Henkel et al. 2020). We deemed that an open, qualitative exploration would afford a more holistic perspective into consumer-conceived value implications of LTC service (re)design integrating robots.
Approach and Data Collection
The findings presented in this paper stem from a wider ongoing interdisciplinary program of research (titled Improving Inclusivity in Robotics Design) which is exploring innovations in methodological approaches for integrating user viewpoints in the conceptual design of robots for care. 4 Explorations of user viewpoints were grounded in the lived realities of people with disabilities, since early knowledge exchanges in the entire research team indicated that, while care robotics research outside of the service robots domain advocates user-centered design, grounding in the "social, emotional and practical contexts where care is given and received" (Van Aerschot and Parviainen 2020, p. 247) remains a challenge.
The data comprising user viewpoints lend themselves to examination from a service research perspective. As a population subjected to marketplace exclusion, consumers with disabilities generally possess a heightened propensity for experiencing consumer vulnerability (Fisk et al. 2018; Higgins 2020). As one of the groups potentially requiring LTC (Grabowski 2008), consumers with disabilities can provide focused insights into the conceived value robots in LTC may offer and the vulnerability-inducing factors that might preclude this value realization.
The data were collected via three focus groups, taking the form of workshops, with 20 people with disabilities in the United Kingdom, drawing on methodologies encompassing different types of qualitative elicitation techniques that belong to the following broad categories (see Barton 2015; McLafferty 2004; McMahon et al. 2016): semi-structured group interviewing (Study 1); guided storytelling with construction tasks (Study 2); and a combination of brainstorming and explanation tasks (Study 3). The rationale for deploying different methodologies was encouraged by transformative service and consumer research literature recommendations to apply varied and innovative methodological approaches, particularly when engaging with consumer stakeholders experiencing vulnerabilities (Boenigk et al. 2021; Ozanne and Fischer 2012). Study 1 utilized Community Philosophy-a method encouraging grassroots communities' collaborative philosophical thinking on issues of common concern (Bramall 2020), akin to dialogical practices in co-research (Frank 2005). Study 2 utilized LEGO® Serious Play®-a method integrating cycles of building tasks (utilizing specialized kits) with sharing and reflecting facilitated by questions whereby metaphorical explanations elicit concepts from participants' imagination (Rasmussen 2006; Simon et al. 2020). Study 3 utilized Design Thinking-a method drawing on industrial design tools for facilitating group ideation of innovative consumer offerings (Brown 2008; Seidel and Fixson 2013).
To ensure consistency across studies and following Gioia et al.'s (2013) recommendations to guide an open qualitative inquiry with an overarching question, all workshop protocols were guided with a question specified as follows: how do people with disabilities envisage the qualities of a useful robot? Subsequent tasks and probes built on the guiding question. Three authors of this paper worked with facilitators trained in each method to ensure alignment of the methodological protocols with the guiding question, while agreeing the adaptations required by each methodology's specifics. Each facilitator conducted the workshop pertaining to their training, with one member of the author team acting as co-facilitator and observer. 5 Further details of the methodologies and the adopted protocols are provided in Appendix 2, supplementary online material. All workshops were conducted in autumn 2020 and, owing to the Covid-19 pandemic restrictions, took place online. The workshops were audio and video recorded with participants' consent and assurance of anonymity in all data outputs.
Participants
Participants were recruited via a market research agency. In line with ethical considerations that underpinned the studies' design, we briefed the agency to recruit people with physical disabilities only, since the specialist competences and skills required for conducting research with people with cognitive and mental health disabilities were beyond the skillset of the research team members. While applying this recruitment filter, we aimed to broaden democratic validity (Ozanne and Saatcioglu 2008) by employing a maximum variation sampling strategy (Patton 1990). Specifically, we screened self-reported type(s) of physical disabilities, age, ethnicity, and gender, with the aim of recruiting participants with varied backgrounds. Owing to the online data collection format and the requirements of employed methodologies, participants were asked to confirm that they were comfortable with, or had adequate assistance for, typing, manipulating small objects, and viewing, listening to, and speaking at the workshop via a video conferencing platform (Zoom).
The final sample comprised 20 participants aged between 26 and 74, with a range of reported occupations (full-time employment, self-employment, stay-at-home parent, and retired) and disabilities (visual and hearing impairments, health conditions impacting mobility, and the capacity for physical activities). Nine participants identified as female (11 as male), five as Black or Asian (15 as white). Each participant was allocated into a workshop through a combination of maximum variation sampling, participant availability, and expressed interest in a workshop (based on briefs provided in final recruitment stages). Thus, each individual participated in one study: eight in Study 1, and six each in Studies 2 and 3. Appendix 3 (supplementary online material) details sample characteristics and workshop allocations.
Data Analysis
The audio recordings of all workshops were transcribed verbatim, yielding 145 pages of single-spaced text. The analysis strategy followed Gioia and colleagues' (Corley and Gioia 2004; Gioia et al. 2013) recommendations for systematically organizing the analyses while allowing for "a flexible orientation toward qualitative, inductive research that is open to innovation" (Gioia et al. 2013, p. 26). We began by subjecting the transcripts to open coding (Corley and Gioia 2004), conducted by three authors of this paper. The open coding stage encompassed identifying particular characterizations of and reasonings about care and/or robots in the context of care articulated by participants and coding these into first-order concepts (articulations). The first-order concepts represent in-vivo codes reflecting expressions by participants or a simple phrase describing these expressions (Strauss and Corbin 1998).
Two authors first coded each workshop transcript independently, while a third author read through the transcripts without coding. On completion, the three met to cross-check, discuss, and reconcile the identified first-order concepts, following the constant comparative method (Glaser and Strauss, 1980) to examine concepts identified within and between workshops for differences and similarities. Some concepts were identified across all workshops; others in one or two in different combinations. The full author team of this paper subsequently used axial coding to apply our interpretations. This stage was an iterative process whereby we went back and forth between the first-order in-vivo codes and the literature to identify relevant theoretical concepts; this informed categorization of the first-order concepts into broader second-order analytical themes (Corley and Gioia 2004). The analytic themes were then conceptually related to aggregate dimensions (see Appendix 4, supplementary online material, for the final data structure framework). The open coding was conducted manually; Inspiration 9.2 6 mapping software was utilized to organize and visualize the developed thematic structure.
Findings: Conceptualizing Consumer-Conceived Value of Robots in LTC Servicescape
Two aggregate dimensions emerged as we theorized our data in consultation with the literature. We describe these dimensions as 1. Conception of Care and 2. Conception of Robots in LTC Servicescape. By considering how themes from both aggregate dimensions relate to each other holistically, we derived a conceptualization of consumer-conceived value of robots' integration in LTC servicescape, represented graphically in Figure 1.
In the paragraphs that follow, we provide an overview of derived conceptualization, before offering a detailed presentation of two dimensions derived from the findings.
A conceptualization of consumer-conceived value of robots' integration in LTC servicescape. Data analysis and theorization elicited participants' conceptions of robots in LTC (depicted as concentric circles in the center of Figure 1). As we expected, conceptions of robots in LTC are comprised of articulations/first-order concepts characterizing robots as a potential care service resource. These articulations are related to participants' conception of care (represented on the top left of Figure 1). An unexpected observation was that selected articulations 7 also characterized robots as a resource that may support consumers' desire to minimize the need for care, depicted in the top right of Figure 1. Based on these findings, we theorized that consumers with disabilities conceive LTC service robots to offer potential for realizing value (Fisk et al. 2018), via two distinct paths (depicted as blue arrows in Figure 1) whereby robots augment human-facilitated LTC and/or enhance self-care agency/ability. Value associated with both paths is represented by wellbeing outcomes 8 that consumers envisage robots can facilitate them to achieve, by performing supportive care actions akin to expressions of cognitive and behavioral empathy (Jeffrey 2016). The wellbeing outcomes-and, consequently, the value consumers desire to realize-take different forms.
The first form of value encompasses wellbeing via experiencing being-cared-for in the context of the LTC service system. Although, as envisaged by consumers, robots can support human caregivers and thus augment human-facilitated LTC, their potential is conceived to be external to the value of the care experience itself. Participants reasoned that robots, on their own, will not fully actualize care, given their inability to offer interactions where affective and moral empathy is experienced. Consistent with integrated conceptualizations of care and empathy, analysis revealed that experiencing affective and moral empathy (for which participants stipulated human interaction to be a necessary condition) is core to actualizing a "good" care experience (Jeffrey 2016; Mercer and Reynolds 2002). However, as the image "conception of care" in the top left of Figure 1 depicts, consumers envision that robots can mitigate vulnerabilities potentially arising in human-facilitated care service contexts.
The second form of value entails wellbeing via independence from the LTC service system. Here, robots' potential is conceived as creating an opportunity to minimize, or postpone, care consumption, as the image "desire to minimize the need for care" in the top right of Figure 1 depicts. Consumers envisage that robots can address this desire by extending their self-care abilities, thus mitigating potential vulnerabilities associated with the circumstances of requiring care.
Data also illuminated participants harboring unresolved concerns about robots in care contexts. We theorized these concerns to encompass pathogenic vulnerabilities (Lange et al. 2013) that consumers conceive they might experience as a result of integrating robots into the LTC servicescape. Envisaging these pathogenic vulnerabilities elicited participant conceptions of how robots might potentially take over from, and replace, human caregivers, subjecting consumers to deprivation of care as a result. Similarly, participants also envisaged robots creating new barriers and dependencies, subjecting consumers to deprivation of agency. Hence, we theorized pathogenic vulnerabilities as factors driving consumer conceptions of robots precluding, and in the extreme eroding, realization of both value forms via two alternative paths (depicted as brown arrows in Figure 1) whereby robots replace human-facilitated LTC and/or create new dependencies.
We next elaborate the derived dimensions and themes, illustrated with data extracts. 9
Conception of Care
Participant discourses concerning how they envisage care across LTC contexts (personal, social, medical) commonly resonated with the literature highlighting that experiencing care as an empathetic interaction is foundational to care recipients' conception of "good care" (Kunyk and Olson 2001; Mercer and Reynolds 2002). These considerations, captured in the theme "Core values," align with the affective, behavioral, and moral empathy dimensions delineated by Jeffrey (2016). Participants prioritized the affective and moral dimensions as pertinent to the sense of "being-cared-for": "That act of wiping my face doesn't necessarily mean that you care about me. It's an act.
[…] that somebody is doing that so softly or gently or meaningfully, that comes across in the warmth, in the emotions of that individual" (P1); "I don't want to simplify it by just saying good or bad care, but morality in that you know, in your heart" (P2).
Resonating with theorizations of the relational nature of care (Berry et al. 2020; Tronto 1993), the next theme encapsulates participants' conception of care as an experience that is "Only produced through human-to-human interactions" and thus something robots cannot actualize. Participants articulated that experiencing care requires a sensing of positive emotions (kindness-P4, love-P13) as part of contact with caregiver(s), something robots will not be able to offer: "…what people were saying about a robot being able to give kindness, I don't think that's possible. Because I think kindness stems from with inside a human being towards another human being" (P4). Participants stressed that caregiver interactions go beyond receiving care: "…having somebody just check up on me. And gives me the sense of, it's something that a robot can't give, that kind of, you know, social interaction, stimulation, having a conversation about something-complete-not anything to do about my care, maybe, but just having a conversation really helps my mental wellbeing, as well as my physical wellbeing" (P1).
Concurrently, the theme of "Experience of being cared for" captures that, although prioritizing the emotional aspects of care founded on the affective and moral empathy characterizing human-facilitated care, participants conceived these components as needing to be integrated with well-provided care actions: "…like being able to get your meds on time, having emotional support from a partner…so, yeah, basically all these different things that need to come together in harmony to complete the big picture" (P10).
Related to these considerations is the theme "Vulnerability potential in consuming human-facilitated LTC," in which participants drew on their experiences of consuming care from current service systems to recount factors that restrict their ability to access care service and/or receive care when they require it and in a manner that aligns with core care values. These experiences are linked to an awareness of stretched care service resources ("…there's limited nurses now, isn't there?"-P17) and of the difficulty, if not impossibility, of imparting through training the capability to infuse care provision with positive emotions informed by affective and moral empathy: "actions can be learnt or taught but emotions can't. And I think probably the one thing is, you need to be genuinely caring with empathy. And you can't teach it, you either have it or you don't" (P8). Yet, participants also reflected that emotional drivers of 'good care' may not necessarily ensure positive care experiences: "love […] can bring out your insecurities, it can bring out things which may be deemed toxic" (P2). Such complexities were elaborated to highlight the potential strain on relationships with caregivers stemming from pressures caregivers experience: "Even though they love you and have to care for you, it can have an impact on relationships in a bad way as well" (P13); "…there is a lot of abuse in these jobs, of being a carer, particularly, because they're quite low paid" (P6).
The final theme in this dimension, "Vulnerability potential in needing to consume care," reflects participants expressing that the very circumstance of accepting the need for care is fraught, reflecting the specific characteristic of care as a service offering consumers may need but do not want (Berry et al. 2020; Berry and Bendapudi 2007). P13 and P10 articulate a sense of dependency stemming from needing help: "There's probably more help out there than I tap into […] mine is a hidden disability, and it's that, kind of, acknowledging that […] you're not as able-bodied as potentially somebody who hasn't got an illness" (P13); "…what [fellow participant] said about the stubbornness, and not maybe being necessarily willing to accept that you need that help" (P10). Other participants identified the sense of dehumanization brought about by the lack of accommodation and insensitivities experienced when accessing care service, as P7 illustrates: "…someone there, just processing appointments for etc, they're not considering that you're in pain, they're not considering that you can't get up that day or you can't walk that day or you've been crying because you -whatever example."

In summary, participants firmly placed the conceived value of care, as well as the opportunities to realize this value, in the domain of human interactions. This both corroborates and extends extant conceptual assertions (e.g., van Doorn et al. 2017; Wirtz et al. 2018) that in services involving high-level emotions, robots may not be able to sufficiently cater to consumers' core needs for empathetic interactions. While prior literature suggests that such needs are harbored predominantly in relation to professional service roles (e.g., a surgeon or a divorce lawyer), our analysis highlights similar needs in relation to what Wirtz et al. (2018) term subordinate service roles (nurses and carers). As the next section shows, participants conceived robots as a resource with characteristics that potentially offer value in the LTC servicescape, which appeared to be focused on mitigating vulnerabilities stemming from both consuming human-facilitated LTC and needing to consume care. Concurrently, participants conceived some of the robots' characteristics, and how robots' integration in the LTC servicescape might be designed and implemented, as having the potential to induce pathogenic vulnerabilities.
Conception of Robots in LTC Servicescape
Aligning with the literature on robot characteristics (Simon et al. 2020; Wirtz et al. 2018), when articulating conceptions of robots in LTC, participants commonly characterized them as possessing both a set of envisaged capabilities and attributes for addressing a range of their needs. To categorize these articulations, we drew on the AI capabilities framework by Huang and Rust (2021), categorizations of service attributes in service robots (Simon et al. 2020; Wirtz et al. 2018), and the wider service literature (Payne, Frow, and Eggert 2017).
Overall, participant articulations reflect conceptions of LTC robots possessing advanced AI capabilities, spanning the mechanical, thinking, and feeling levels of AI (Huang and Rust 2021). The "Mechanical AI capabilities" theme reflects anticipations of robots possessing a range of multi-functional capabilities for assisting with some life-management tasks to extend actualization of participants' self-care abilities (Söderhamn et al. 2013). The range of assistance participants envisaged robots to offer encompasses help with physical tasks (lifting, gripping, tying, balance support, and housework), and planning and organizing tasks: "…help me organise everything and work things out and take the mental pressure off a little bit, if that makes sense. […] that will enable me to do things that I might not necessarily have been able to do, because I was directing my energy elsewhere" (P10). Considering how robots might assist human-facilitated LTC to mitigate potential vulnerabilities, participants articulated support of medical diagnoses ("I know I've been misdiagnosed in the past, and they [robots] might be more efficient in terms of that"-P7) and of continuity of care provision ("…if the person doesn't turn up […] I can still get by because I can use a robot"-P1).
The "Thinking AI capabilities" envisaged by participants similarly included capabilities that would extend actualization of self-care abilities, as P14 illustrates: "I loved everybody's idea of this brain that learns what it is that I want to be able to see, and how I can get more independent, and I've got the wheels to make everything happen for me a little bit faster…". Some of the other capability articulations aligned with Jeffrey's (2016) delineation of cognitive and behavioral empathy dimensions. Specifically, envisaged capabilities pertained to the ability to recognize when a person might need to consume care by monitoring their physical and psychological states (vitals and anxiety levels) and to offer and arrange for situationally required care (communicating warnings to the consumer or; alerting emergency care services when required). Finally, participants expected a LTC robot to possess capabilities to adapt multiple mechanical functions to their changing internal and external circumstances: "…actually, people's needs are complex […] You don't always need the same thing all the time, and conditions change" (P10). The combination of personalized learning and analytic processing was expected to mitigate vulnerabilities stemming from human biases, and thus the potential for decision-making errors. For example, P7 extrapolated from their experience of misdiagnosis to suggest that robots "would not have any background of like disease or disease in their families […] and any sort of personal emotions that are attached to care," enabling them to rationalize the need for care in difficult situations (e.g., when considering withdrawing medical care). Participants stressed that their expectations of robots' intuitive capabilities characterizing thinking AI included the ability to recognize where their service represents an unwanted intrusion and be "whipped back in its box" (P11).
The "Feeling AI capabilities" theme reflects participant envisaging that robots would perform tasks associated with Feeling AI, and might influence how they themselves felt, but that did not necessarily actualize the care experience. Participants reasoned that robots might perform acts of compassion that could evoke a sense of companionship. While extant conceptualizations consider these capabilities to be perceived as surface-acted emotions (Wirtz et al. 2018), participant characterizations suggested greater alignment with the behavioral empathy that encompasses acting and communicating in helpful ways (Jeffrey 2016). P1 illustrates: "So it's just that kind of, you know, those small things that just add that extra value […] not a companionship with a robot, but it's just a helping hand. […] And, maybe, partly, it is lip service, but secondly it's more actions as well, isn't it? 'Is that okay?' 'Yes, thank you' or 'no, it's not'. And then you've got that interaction with the robot that actually shows its compassion." While envisaging robots to possess capabilities to enact behavioral empathy, participants stressed they recognize that robots cannot care. They articulated affective empathy and its expressions associated with care, such as warmth, humor, non-verbal signals (touch, listening, and smiling), to lie beyond robots' capabilities. P3 summed up "…it's the emotional attachment that you have with a human that you would necessarily have with the AI." In summary, it appears that participants envisage robots to be able to at least exhibit cognitive empathy (know what will make a person feel better) and behavioral empathy (perform helpful and therapeutic acts). However, they recognize that robots cannot experience and project affective empathy (be compelled to act on the basis of strong emotional responses) and moral empathy (genuine compassion), two aspects necessary for actualizing an authentic experience of "being cared for." Hence, feeling AI capabilities do not appear to be linked to how consumers envisage care. Concurrently, across mechanical and thinking AI levels, participants envisaged robot capabilities to potentially contribute to their wellbeing in two distinct ways: 1) minimizing or postponing the need for LTC service by extending actualization of their self-care (and thus mitigating vulnerabilities stemming from the need to consume care) and 2) supporting and enhancing the effectiveness of human-facilitated LTC (and thus mitigating vulnerabilities that might occur in the current LTC service).
Participants' characterizations of LTC robots also reflected them envisaging robots addressing a variety of service needs. We distill these conceptions into three themes reflecting attributes of an envisaged service offering by a robot: functional, socio-experiential, and transactional-relational adaptability. The "Functional attributes" theme encompasses categorizations conceptualized by prior service literature (Čaić et al. 2019), including ease of use, reliability, and strength. Participants also listed other functional attributes, such as mobility across spaces, multisensory response to environmental conditions (e.g., seeing fire and smelling smoke to determine danger), and environmentally friendly design/performance. When distilling the "Socio-experiential attributes" theme, we were guided by participants' articulations that robots cannot be capable of offering emotional expressions of care. Hence, we adapted the socio-emotional attributes categorization from prior literature (Wirtz et al. 2018) to reflect participants' conceptions. Concerning the social attributes component, participants expected robots' appearance to take non-threatening and non-humanoid, yet relatable, forms. These expectations were linked to the degree of social presence participants would deem acceptable, supporting theoretically derived considerations in this vein (van Doorn et al. 2017). For example, P7 discussed how curved lines would be perceived as "softer" and "less imposing," while P2 stressed that the appearance should distinguish the robot as "…first and foremost, a robotronic thing," but it could feature "limbs and it kind of has a voice." Participants also outlined several hedonic experiences LTC robots could offer, including enjoyment (fun) and enrichment of experiences compared to those provided by current technology. Further, participants reiterated their expectation that robots will minimize or postpone their need to consume LTC services, thus mitigating the potential vulnerability associated with such circumstances. Central to conceptions of this attribute was the expectation of enhanced independence and extended actualization of self-care, as P1 articulates: "I don't want to be reliant on another human being, I don't want people to take pity on me as a disabled person. I don't want people to, you know, feel that they have to come and visit me because I'm disabled or that I'm actually a burden to someone else. So I am fiercely independent in that respect. So I think if a robot was there to help me, I would be less reliant on others -do you see what I mean?". Resonating with cautions of potential failed outcomes in care actualization (Sharkey and Sharkey 2012), participants stressed that it is vital that robots mitigate (rather than reconstitute) potential dependencies on care providers. This is exemplified by P11: "I don't want to hand over responsibility any more than I need to. […] I don't want it [robot] to become something I physically rely on in any sense, if it's at all possible.
[…] that again comes back to retaining your independence…." The "Transactional-relational adaptability" theme represents the significant participant variation in envisaged relational experiences in LTC robot interactions. As such, these conceptions encompassed a continuum between functional transactions and a symbolic human-like relationship. P11 illustrated the transactional end of the continuum: "I don't want a relationship with AI; I want a relationship with people, and this is the functionality, and what it would give me in terms of freedom." Others envisaged interactions with a robot taking a form beyond a transaction with a piece of equipment, albeit different to human relationships, marking the continuum's midpoint: "I wouldn't want it to be kind of like my best friend […] it wouldn't […] be providing me with the sort of relationships that I would usually get from a human, but I think, yeah, it doesn't have to just be like a machine" (P10). The relational end of the continuum reflects the following views: "I actually believe there is a relationship with my robot […] It is like a person to me. I would spend a lot of time with it […] and I want to be happy with it" (P14). The observed varying nature of relational expectations underscores the centrality of the requirement for robots to adapt and cater for the diversity of individuals' needs and to empower consumer choice. P11 articulates this demand: "Absolutely, not force it upon people or one size fits all […] as long as you have the choice in that situation, that matters."

Finally, three themes encapsulate factors considered by participants to have the potential to create pathogenic vulnerabilities. The theme termed "Pathogenic vulnerability potential from robot as a resource" corroborates conceptually identified concerns (Williams et al. 2020) over decision-making logics that may be built into AI algorithms: "I would like to be able to sit down and discuss it with medical professionals, and not just be told that somebody had pulled the plug because a robot had worked out an algorithm that said that's the best thing to do" (P8). Other participants doubted robots being sufficiently advanced to respond to complex changing circumstances and emergencies: "it [robot] might be set up by somebody else and then just perform a programme of actions. But that programme of actions might not be what I need that day and I might not have the capability or capacity to be able to actually change that programme of actions.
[…] It also could be dangerous because it could make that person a cup of tea, but I might have Parkinson's […] or shake that day and end up spilling it all over me.
[…] And then there isn't that first aid care either.
[…] a robot may not be able to do that" (P1). Participants were also concerned about robots performing unwanted actions beyond care tasks, such as pushing information. P5 drew on prior consumption experiences to illustrate technology features inducing these concerns: "… things like Amazon that tries to sell me things that I've just bought. Recommendations for things that I might like, which are always wrong."

The theme "Pathogenic vulnerability potential within robot-integrated LTC service design" reflects concerns over losing human contact and retaining control, aligning with the nascent evidence from information technology and human-robot interaction studies (Deutsch et al. 2019; Sharkey and Sharkey 2012). The concerns over robots replacing human-facilitated LTC related to both caregivers and recipients. P2 expressed a concern that carers might lose their livelihoods: "you don't have to go to this huge hurdle of trying to recruit so many 'x', the people that used to work in the NHS, where you could just use robots"; P8 reflected that the prospect of human carer replacement by robot "can make a difference between somebody getting better and not." These concerns informed concerns over losing control in consuming LTC services, as P2 illustrates: "…it's a little bit too kind of Minority Report where it's like, if they know exactly what you're going to do before you do it, it's a bit like you, yourself, don't have any control."

Finally, the theme "Accommodation, accessibility and inclusion concerns" captures participants' strong desire for the AI innovations developed for them as intended consumers to be co-created with them. These expectations go beyond personalization: "…not so much personalization, more on the input into the design" (P11). Rather, these expectations were driven by a concern for the design of LTC robots to accommodate the diversity of needs: "if every step of the way through the robotics process, you could have normal people, normal users maybe testing it out, testing if it works for them, and different disabilities" (P9). Participants specified concerns about the financial and cultural accessibility of robotic solutions in the LTC servicescape. P13 articulated that "it [robot] should be accessible to all, no matter any demographic, age, […] background, […] affluence"; P12 observed that a fellow participant expressed concerns over cultural accessibility: "[fellow participant: I will just say that we have a lot of problems here with Alexa and things like that, because it doesn't understand our accent.] When you say understands various accents, therefore I assume you're going to make it for all the languages in the world? […] Yeah, if it's equality and diversity". The specific emphasis on articulating these concerns reflects the "nothing about us without us" maxim promoted by the disability movement (Frantis 2005). It also aligns with Fisk et al.'s (2018) theoretically derived principle of fairness embedded in inclusive service design.
Discussion and Implications
Intelligent service robots have the potential to transform LTC and to support resolution of the "crisis of LTC" (International Labor Organization 2018; Osterland 2021). Yet, it is imperative that the urgency to alleviate this crisis does not overshadow the priority of (re)designing socially just services (Field et al. 2021; Fisk et al. 2018). The present study sought to explore how consumers with disabilities conceive the potential value of robots in LTC, and the vulnerability-inducing factors that might restrict their opportunities to realize value. Together, these conceptions ultimately inform consumers' willingness to accept LTC robots' integration. Employing an interdisciplinary approach, we based our exploration on the premise that the development of service concepts (the first level of service design for inclusion) should foreground the perspectives of people experiencing, or possessing a heightened propensity to experience, consumer vulnerability and consider the impacts of integrating robotic technologies holistically (Fisk et al. 2018). The conceptualization of consumer-conceived value of robots' integration in LTC servicescape (Figure 1) drawn from our findings offers contributions with important implications for theory and practice.
Implications for Theory
The paper makes three theoretical contributions. First, our conceptualization of the consumer-conceived value of robots' integration in LTC servicescape (Figure 1) contributes to the literature concerned with value-centered care service (re)design (Agarwal et al. 2020; Anderson et al. 2018; Berry et al. 2020). By examining how consumers' conceptions of robots in LTC relate to conceptions of the care experience itself, our conceptualization illuminates a fundamental vision of robots in LTC held by consumers: while robots are able to provide some LTC services, they lack the innate ability to care in the way a human being might. Having elicited that, in consumers' minds, robots cannot fully actualize the care experience, we identify two value realization paths that consumers associate with robots' integration in LTC: 1) augmentation of human-facilitated LTC and 2) extension of self-care actualization.
Second, our conceptualization contributes to the transformative service research drive for identifying routes to enhancing the wellbeing of consumers experiencing vulnerabilities, and to the service robots literature that examines the value creation/destruction potential of robots' integration in these consumers' lives (Čaić et al. 2019; Henkel et al. 2020). By focusing on consumers with disabilities, a marginalized population with a heightened propensity to experience vulnerability (Higgins 2020), we underscore the importance of considering whether and how aspects of service concepts, systems, and interactions might generate consumer vulnerability. We take a service inclusion lens (Fisk et al. 2018) and draw on the concept of pathogenic vulnerability (Lange et al. 2013). By doing so, we illuminate that service (re)design in LTC should consider whether the integration of robots will address both currently experienced vulnerabilities and new vulnerabilities that might perversely arise from (re)design efforts. Consumer-centric perspectives are central here, since marginalized groups (like consumers with disabilities) lack power in the marketplace (Higgins 2020). Hence, pathogenic vulnerabilities can be overlooked without explicit focus on consumers' voice.
Our conceptualization connects consumer-conceived pathogenic vulnerabilities from robots' integration into the LTC servicescape to two potentially value-destroying paths. The coexistence of these paths alongside the value-realizing paths envisaged by consumers with disabilities corroborates earlier findings on elderly consumers (e.g., Čaić et al. 2018). We identify that pathogenic vulnerabilities facilitate these paths' conception, offering an explanation for consumer reticence toward accepting LTC service robots, as observed in the current findings and prior studies (Wang et al. 2017; Wachsmuth 2018). The unresolved ambiguities concerning the design of robots' characteristics and the intended manner of their integration in LTC service systems appear to drive consumers to envisage robots' potential to erode the value they desire to realize. Specifically, uncertainties regarding robots potentially possessing characteristics that preclude consumers' exercising full control over LTC service actions performed by robots, and concerns about fair access, accommodation, and inclusion of consumers' perspectives as end users of LTC robots, drive concerns over being potentially deprived of agency. Similarly, uncertainties around the intended manner of robot integration in LTC service systems, coupled with the placement of care experience actualization in the domain of human interactions, translate into concerns about being potentially deprived of care. Partly, these concerns may stem from ambiguous policy and consultancy discourses 10 regarding robots' integration in LTC servicescapes that utilize such terminology as "(social) care robots" and "AI-enabled (health)care." While these terms do not directly suggest assigning robots a primary role in care decision-making or robots replacing human-delivered care, they could be interpreted this way. By spotlighting how robots' integration into LTC can be conceived to erode value in care, our conceptualization stresses the necessity to examine (existing or pathogenic) vulnerability-inducing factors when designing LTC service concepts.
Finally, we provide empirical support for, and theoretical extension of, prior categorizations of AI capabilities and attributes (in the broad service domain; e.g., Huang and Rust 2021; van Doorn et al. 2017; Wirtz et al. 2018) within the emotion-intense servicescape of LTC. Our findings show that, for the potential value from robots' integration in LTC conceived by consumers to be realized, consumers require robots to be equipped with the most advanced forms of AI capabilities distinguished by prior conceptualizations (Huang and Rust 2021; Čaić et al. 2019). These include mechanical (e.g., lifting or housework), thinking (e.g., monitoring physical or psychological states), and feeling AI capabilities that allow a robot to enact helpful and therapeutic behaviors (e.g., interactions). However, there are currently few market-ready LTC robots equipped with the kind of thinking and/or feeling capabilities that consumers expect them to possess, which may explain the limited adoption/assimilation of robots with lower-level mechanical intelligence, such as Nao, Pepper (Kyrarini et al. 2021), and Zora (Tuisku et al. 2019). Extending previous work that suggests consumer expectations of AI capabilities mirror a progression from standardized to personalized service attribute expectations (Čaić et al. 2019; Huang and Rust 2021), we show that consumers expect and require the freedom to choose which of the robots' characteristics to utilize. That is, consumers conceive robots' characteristics to be adaptable to an individual's need for social presence and for the relationalization of interactions (van Doorn et al. 2017; Wirtz et al. 2018). Thus, consistent with the "offering choice" pillar of service inclusion (Fisk et al. 2018), the design of service offerings deploying even the most advanced AI/robots should incorporate adaptability attributes to avoid harming and alienating consumers by restricting choice.
We also offer a theoretically grounded and empirically supported clarification of the types of empathy that consumers conceive to be within AI's capability. Previous conceptualizations (Huang and Rust 2021) attribute AI with empathetic capabilities, linked predominantly to feeling AI capabilities for expressing emotions. We show that consumer conceptions of robots' empathy extend across thinking and feeling AI capabilities, although the nature of this empathy is restricted to the cognitive and behavioral dimensions and does not extend to affective and moral empathy (Jeffrey 2016). This suggests that human augmentation, where robots provide action-oriented (rather than emotional) support, is requisite for emotion-intense services such as LTC. Similarly, with regard to robots' role as a service resource extending consumers' self-care capacity, robots' empathy capabilities appear insufficient for providing emotional support akin to "a human conversational partner" (Huang and Rust 2021, p. 33). Robots are conceived as capable of supporting consumers in regulating negative emotions evoked by the circumstances of needing care; consumers envisage utilizing robots for maintaining/enhancing independence and, consequently, the ability to realize hedonic experiences (fun) not associated with care.
Implications for Managers and Policy Makers
Our conceptualization of the consumer-conceived value of robots' integration in LTC servicescape has practical applications. It can be utilized by LTC providers (managers and caregivers) to consider the impacts of deploying robots on care recipients when designing service systems and determining the purposes and the extent of recipient-robot interactions. For emotion-intense services, such as care, the potential to experience vulnerability is acute (Longoni et al. 2019). Hence, it is valuable for providers to examine whether care recipients' concern that their needs will not be fully considered may hinder robots' adoption.
Our conceptualization can also facilitate structured consideration of the ethical implications of decisions concerning the precise mode of service robots' deployment in practice (Borenstein and Pearson 2014). Historic biases toward care professionals' perspectives in the design and management of LTC service have served to disempower consumers by limiting their ability to influence how care is provided (Batavia 2002). Recent literature (e.g., Williams et al. 2020) cautions that, unless the perspectives of historically overlooked consumers are fully understood, frameworks for deploying AI risk replicating, if not magnifying, longstanding social injustices. Our findings evidence such anxieties amongst consumers with disabilities.
As the deployment and use of LTC robots become more widespread, we recommend that managers, designers, manufacturers, and care providers adopt the following principles: 1) Begin with a consumer-centric service inclusion perspective when developing and designing LTC robots and strategies for their integration in LTC service. While this paper focused on the development of inclusive service concepts, a service inclusion lens should also be applied to system architecture/navigation and processes for service interactions. 2) Prioritize adaptable designs when developing robotics for care. Our study shows that there is no one-size-fits-all solution to providing care consumers with opportunities for realizing value and enhancing their wellbeing. Designs should therefore provide flexibility for consumers to "tune" robots to preferences for specific features and resources. 3) Work with policy makers to address the financial and cultural accessibility of robots. Economies of scale are likely to reduce costs and offer solutions to providers for designing and delivering fair service.
The insights discussed above are also important from a policy perspective, given the substantial investment in care robotics development. Clarifying robots' role (e.g., care assistant rather than robotic carer) could mitigate consumers' reticence toward robots in LTC. We stress that we are not recommending a superficial labeling change. Policy makers should seek widespread consumer input to inform the (re)design of robot-integrated service systems. This will require policies encouraging interdisciplinary collaborations, including theoretical and practical insights from service and consumer researchers. Policies can also require manufacturers and service providers to avoid suggesting that LTC robots are equivalent to human caregivers in promotional messages. Finally, policies can encourage transformative service initiatives (Anderson and Ostrom 2015), given their social justice and wellbeing foci.
Limitations and Areas for Further Research
Our study has several limitations, warranting further research. While our design followed qualitative research criteria for rigor (Gioia et al. 2013) and reliability and validity (Noble and Smith 2015), we acknowledge that small sample sizes, as in our study, might limit representational generalization. However, small samples afford analytical generalization in that they provide in-depth contextualized insights and account for unique experiences in deriving theoretical explanations (Halkier 2011). This is particularly valuable when prior knowledge on the area focal to the inquiry and a given population is scarce, as is the case with our study (Crouch and McKenzie 2006).
Due to Covid-19 restrictions, data collection was undertaken online, necessitating adaptation of methodologies and creating new challenges (i.e., unfamiliar technology and broadband (in)stability). A positive unanticipated consequence was that we were able to draw on a nation-wide (United Kingdom) population, since people were able to participate without the burden of travel. The pandemic has also caused people with disabilities and their caregivers to experience a loss of emotional safety (Berry et al. 2020) and created unforeseen opportunities for service robots to improve consumer wellbeing beyond the pandemic (Henkel et al. 2020). Our study did not address these opportunities, which require further attention. Because the study was conducted in the health-social care context of the United Kingdom and included only people with physical disabilities, research in other countries and consumer contexts is needed. As technologies and robots with higher levels of intelligence are developed and enter the market, longitudinal studies can examine potential changes in consumer views over time.
While the service inclusion paradigm (Fisk et al. 2018) provided our exploratory study with a broad theoretical focus, development of a comprehensive framework for LTC service design that holistically integrates service concept, systems' architecture, and interactions' processes across the service inclusion pillars (Fisk et al. 2018) was beyond the scope of our study. Nevertheless, the emergence of two parallel types of paths representing consumer conceptions of the implications of robots' integration into LTC indicates directions for extended applications of service inclusion in future service robots research. The two types of paths in our conceptualization highlight that, paradoxically, consumer conceptions of robots' integration into the LTC servicescape reflect the envisaged potential for i) enhancing value realization opportunities, aligning with the "enabling opportunities" pillar, and ii) possible suffering should robots deprive consumers of care and agency, misaligning with the "relieving suffering" pillar of service inclusion (Fisk et al. 2018). Future research could examine approaches to resolving this paradox. Consumer conceptions of high-level, yet adaptable, AI capabilities when envisaging an LTC robot underscore the importance of aligning service robots research efforts with the "offering choice" service inclusion pillar (Fisk et al. 2018).
Seeking to address a lack of consumer-centric research, we focused on value-centered care (Agarwal et al. 2020) and explored the conceptions of end users (consumers) of LTC. Additional research could expand the scope to other servicescape actors, thus engaging the voices of carers, families, service providers, robotic designers, and manufacturers (Anderson et al. 2018). This would enable a more holistic comprehension of the role that robots may play in co-creating and co-destroying value over entire care networks (Čaić et al. 2018).
Thoughts on the Future of Consumer-Based AI-Integrated LTC: Will Robots Ever Care?
Perhaps the most important outcome of our study is that it identifies that consumers conceive the value of LTC robots to lie in the domain of performing support/assistance for self-care and care facilitated by other humans, but not in the provision of care itself. The conceptualization thus supports theoretically derived propositions that in emotionally intense service contexts, robot service agents constitute a useful complement to, and not a substitute for, human agents (Xiao and Kumar 2021). Our participants were acutely aware that robots cannot biologically experience and thus cannot exhibit the affective and moral empathy necessary for actualizing the care experience. Hence, at least for the foreseeable future, care remains a prerogative of humans, since "robots do not replace a nurse with a beating heart" (Tuisku et al. 2019, p. 47). That said, robots' inability to care can potentially "de-emotionalize" some aspects of care which evoke negative emotions through, for instance, inadequate care actions. As we show, the prospect of maximizing independence from human-facilitated care constitutes an important value-adding characteristic of robot service. Whether robots will ever be able to actualize care like humans do is a question for the distant future. The answer appears to lie with discerning how robots can express affective and moral empathy, a key topic for future service robots research.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This paper originates as a result of work on a project titled "Improving Inclusivity in Robotics Design" which received funding from "Research England, via The University of Sheffield's Higher Education Innovation Fund (HEIF)" and in-kind funding from IBM. Data collection was supported by facilitators certified in methodologies utilized in the studies reported in this paper, as follows: Kate Halliwell (Community Philosophy), Richard Gold (LEGO ® Serious Play ® ), and Mica Whitby and William Hawken (Design Thinking, IBM). The authors would also like to express their thanks to the participants of the research presented in this paper for sharing their insights.
Supplemental material
Supplemental material for this article is available online.

Notes

1. LTC global market size value is estimated at around USD 1 trillion in 2019, with a projected compound annual growth rate of 7.1% in the years 2020-2027 (Grand View Research 2020).
2. For example, Wachsmuth (2018) highlights that, while less than one-quarter of over 400 million EU citizens expressed a negative opinion of robots in general, 60% were strongly opposed to the use of robots in care settings.
3. Emphasis added by authors of this paper.
4. The entire program of research incorporates social scientists (from the fields of technology studies, medical ethics, sociology, consumer research, and human-robot interactions), healthcare technology and engineering design researchers, roboticists, computer scientists, and experts in the methodologies employed. Aligning with the "nothing about us without us" ethos advocated by the disability movement (Frantis 2005), the project team includes academics who also have first-hand experience of living with disabilities.
5. Our rationale for not including the other two team members was to avoid overwhelming participants; video recordings afforded these members the opportunity to observe workshops as well, although not in real time.
6. https://www.malavida.com/en/soft/inspiration/.
7. Indicated by italic font in the data structure framework (see Appendix 4).

Fraser McLeay is Associate Dean (Education) and a professor of marketing at Sheffield University Management School. His current research interests relate to sustainable/ethical consumption, eWOM, and consumers' perceptions of the barriers and opportunities associated with adopting innovative nascent new technologies, including service robots and autonomous vehicles.
Anthony Grimes is a Senior Lecturer in Marketing at the University of Sheffield, where he specialises in psychological aspects of consumer behaviour. This includes the way consumers imagine, perceive and remember events and experiences, and the non-conscious processes that underpin consumer judgments, decisions and behaviours.
Stevienna (Stevie) de Saille is Lecturer in Sociology and Research Fellow in iHuman, the University of Sheffield. Her research interests lie in the nexus of science and technology studies, social movement theory, and heterodox economics. Stevie leads the 'Human Futures' theme in iHuman research on Robots in a Human Future.
Stephen Potter is a researcher in the Centre for Assistive Technology and Connected Healthcare at the University of Sheffield, UK. His background is in AI and design, and his interests lie in the development of sustainable and inclusive practices for the design and evaluation of digital healthcare services.
"year": 2022,
"sha1": "f7f7c953b5584eba8733e1ecc9a2eb27ddd2d725",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/10946705221110849",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "fbd93d5c27c3df87dd3573a194bf780e0f861ca2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Impact of Obesity and Hyperglycemia on Pregnancy-specific Urinary Incontinence
Objective: The lack of data on the impact of hyperglycemia and obesity on the prevalence of pregnancy-specific urinary incontinence (PSUI) led us to conduct a cross-sectional study of the prevalence and characteristics of PSUI using validated questionnaires and clinical data. Methods: This cross-sectional study included 539 women with a gestational age of 34 weeks who visited a tertiary university hospital between 2015 and 2018. The main outcome measures were the prevalence of PSUI and scores on the International Consultation on Incontinence Questionnaire Short Form (ICIQ-SF) and the Incontinence Severity Index (ISI). The women were classified into four groups: normoglycemic lean, normoglycemic obese, hyperglycemic lean, and hyperglycemic obese. Differences between groups were tested using descriptive statistics. Associations were estimated using logistic regression analysis and presented as unadjusted and adjusted odds ratios. Results: Prevalence rates of PSUI did not differ between groups; however, the hyperglycemic groups had significantly worse scores for severe and very severe PSUI. When data adjusted for confounding factors were compared with the normoglycemic lean group, the hyperglycemic obese group had significantly higher odds of severe and very severe forms of UI on the ICIQ-SF (aOR 3.157; 95% CI 1.308 to 7.263) and ISI (aOR 20.324; 95% CI 2.265 to 182.329) questionnaires and the highest perceived impact of PSUI (aOR 4.449; 95% CI 1.591 to 12.442). Conclusion: Our data indicate that obesity and hyperglycemia during pregnancy significantly increase the odds of severe forms and perceived impact of PSUI. Therefore, further effective preventive and curative treatments are greatly needed.
Introduction
Urinary incontinence (UI) may be a very common experience during a woman's lifetime, 1 with a robust influence on wellbeing and quality of life, as well as an immense economic burden for health services. 2 Estimates of the prevalence and incidence of UI depend on the definitions of the study type and population. Previous epidemiological data showed that the prevalence of UI in women older than 20 years was 23.4-26.4% in the United States. 3 In Brazil, it is considered a common health problem, with an estimated prevalence rate of 27%. 4 Therefore, UI is an important public health concern.
Pregnancy appears to be a major risk factor, particularly during late gestation. 5 In the general population, the risk of UI during pregnancy is 18-75%. 6 The term pregnancy-specific UI (PSUI) is used to define any urinary leakage with onset during pregnancy. 7 The risk of UI increases as pregnancy progresses due to anatomical and hormonal changes. 6,8 Although certain risk factors for PSUI are established, others, such as gestational diabetes mellitus (GDM), are still under consideration. Although some perinatal morbidities related to GDM are associated with UI, GDM alone is considered an independent risk factor for all UI types postpartum. 9 Taken together, these studies provide compelling evidence for an association between GDM and postpartum UI. Likewise, women with a previous diagnosis of GDM have a well-known increased risk (20-50%) of developing type 2 diabetes mellitus by 10 years postpartum. 10 Obesity (body mass index [BMI] > 30 kg/m²) and weight gain during pregnancy are among the main modifiable risk factors for the development of postpartum diabetes. 11 In the United States, from 1999 to 2010, obesity increased from 28.4% to 34% in women aged 20-39 years. 12 Moreover, 15-20% of mothers have pre-pregnancy obesity 13 and 20-40% experience excessive weight gain during pregnancy. 14 Increased BMI has consistently been reported to play a role in the occurrence of clinical UI. 15 Given that the prevalence of obesity has increased in recent decades and that it is one of the most common medical conditions in women of reproductive age, 16 the premise that obesity and diabetes are linked and constitute a prominent risk factor for developing UI is concerning. Despite compelling epidemiologic data supporting the association of GDM with postpartum UI, 9 as well as of obesity with UI, 17 little is known about how hyperglycemia and concurrent obesity might affect the severity of PSUI. Furthermore, current international clinical practice guidelines for UI management fail to present specific recommendations for pregnant women with comorbid conditions, including GDM and obesity, and the treatment of such patients remains a neglected aspect of care. 18,19 Therefore, we hypothesized that GDM and obesity are associated with higher odds of PSUI severity.
Methods
This cross-sectional study focuses on the relationship between UI, obesity, and GDM. All pregnant women were recruited during prenatal care follow-up at the University Hospital's Perinatal Diabetes Research Centre (PDRC), Botucatu Medical School/UNESP/Brazil, between 2015 and 2018 and were screened for GDM.
We identified four groups of patients, categorized as normoglycemic lean (NL), normoglycemic obese (NO), hyperglycemic lean (HL), and hyperglycemic obese (HO). The diagnosis of GDM was established between the 24th and 28th gestational weeks using the 75-g oral glucose tolerance test (OGTT), according to the American Diabetes Association criteria 20 and the glycemic profile. 21,22 All women with positive screening results for GDM or altered glycemic profiles were classified as hyperglycemic. Glycemic control of women following a diagnosis of hyperglycemia followed the PDRC protocol, which includes a team of healthcare professionals who encourage adequate nutrition, exercise, and insulin administration. 21 The cut-off for obesity was a BMI of > 30 kg/m² (calculated using the participant's height and weight). 23 The inclusion criteria were restricted to women with singleton pregnancies who underwent an OGTT between 24 and 28 weeks of pregnancy, with a new onset of urinary leakage during pregnancy. Pre-pregnancy UI, known type 1 or type 2 diabetes mellitus, preterm delivery (< 37 weeks of gestation), multiple pregnancies, known fetal anomaly, or any clinical condition that may have jeopardized the health status of the woman were considered exclusion criteria.
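As a concrete illustration, the sketch below assigns a participant to one of the four study groups. The 75-g OGTT cut-offs shown are the commonly cited ADA one-step values and are stated here as an assumption, since the text does not list them; the protocol's additional glycemic-profile criterion is omitted for brevity.

```python
# Hedged sketch of the four-group assignment. OGTT thresholds (mg/dL)
# are assumed ADA one-step values: fasting >= 92, 1-h >= 180, 2-h >= 153;
# any single value at or above threshold counts as a positive screen.

def is_hyperglycemic(fasting: float, one_hour: float, two_hour: float) -> bool:
    """Positive 75-g OGTT screen (glycemic-profile criterion omitted)."""
    return fasting >= 92 or one_hour >= 180 or two_hour >= 153

def classify_group(fasting, one_hour, two_hour, weight_kg, height_m):
    bmi = weight_kg / height_m ** 2   # BMI in kg/m^2
    obese = bmi > 30                  # cut-off stated in the text
    hyper = is_hyperglycemic(fasting, one_hour, two_hour)
    return {
        (False, False): "normoglycemic lean (NL)",
        (False, True): "normoglycemic obese (NO)",
        (True, False): "hyperglycemic lean (HL)",
        (True, True): "hyperglycemic obese (HO)",
    }[(hyper, obese)]

print(classify_group(95, 170, 140, 88.0, 1.62))  # hyperglycemic obese (HO)
```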
Data on baseline information (age, parity, pre-pregnancy and current BMI, weight gain during pregnancy, educational level, marital status, fasting glucose, and glycosylated hemoglobin) were collected during the interview at 34 weeks of gestation and from medical records. The Brazilian version of the Incontinence Severity Index (ISI) was used to categorize incontinence severity. 24 Its multiplicative score is based on two questions assessing the frequency and volume of incontinence. 25 Women were also asked to complete the Brazilian version of the International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (ICIQ-UI SF). 26 The ICIQ-UI SF comprises three scored items and one non-scored item, making it possible to assess the prevalence, severity, interference in daily life, and type of UI. 26 The ICIQ-UI SF score ranges from 0 to 21. For those reporting UI, the perceived impact score ranges from 0 (not at all) to 10 (a great deal). The non-scored item of the ICIQ-UI SF includes eight answers and is a self-diagnostic item to capture the participant's perception of the cause and type of leakage. A form completed immediately after birth was used to record the labor process, mode of delivery, and neonatal birth profile.
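The two scores reduce to simple arithmetic, sketched below. The ICIQ-UI SF item ranges follow the text (three scored items summing to 0-21); the ISI item ranges and severity bands are the commonly used Sandvik values and should be read as assumptions, since the text does not spell them out.

```python
# Hedged sketch of the ISI and ICIQ-UI SF scoring described above.

def isi_score(frequency: int, amount: int) -> int:
    """Multiplicative ISI: frequency (1-4) x amount (1-3) -> 1-12."""
    return frequency * amount

def isi_category(score: int) -> str:
    """Assumed Sandvik bands: slight, moderate, severe, very severe."""
    if score <= 2:
        return "slight"
    if score <= 6:
        return "moderate"
    if score <= 9:
        return "severe"
    return "very severe"

def iciq_score(frequency: int, amount: int, impact: int) -> int:
    """ICIQ-UI SF total: frequency (0-5) + amount (0-6) + impact (0-10)."""
    return frequency + amount + impact

print(isi_category(isi_score(4, 3)))  # 12 -> "very severe"
print(iciq_score(4, 4, 8))            # 16 (of a possible 21)
```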
The primary outcome was the PSUI prevalence among the groups. UI was classified according to the International Continence Society guidelines for stress UI (SUI) (involuntary leakage on effort or exertion, sneezing, or coughing), urge UI (UUI) (involuntary leakage accompanied by or immediately preceded by urgency), and mixed UI (MUI) (involuntary leakage associated with urgency and exertion, effort, sneezing, or coughing). 27 Secondary outcomes were the prevalence of SUI, UUI, and MUI, as well as the frequency of UI, amount of leakage, the ISI score, the ICIQ-UI score, and perceived impact of UI.
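Under these definitions, typing a case reduces to a rule over two self-reported symptoms, as the simplified sketch below shows (real classification would rely on the full questionnaire responses).

```python
# Simplified sketch of ICS-based UI typing from two symptom flags.

def classify_ui(stress_leakage: bool, urgency_leakage: bool) -> str:
    if stress_leakage and urgency_leakage:
        return "MUI"   # mixed urinary incontinence
    if stress_leakage:
        return "SUI"   # stress urinary incontinence
    if urgency_leakage:
        return "UUI"   # urge urinary incontinence
    return "no UI reported"

print(classify_ui(True, False))  # SUI
```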
SAS version 9.4 for Windows (Statistical Analysis System Institute Inc., USA) was used for statistical analyses. Clinical features are presented as frequencies and percentages or as means with standard deviations. Differences between groups were tested using chi-square tests or analysis of variance followed by Tukey-Kramer analysis. A logistic regression model was used to assess the association of GDM and obesity with UI. Only clinical features with a p-value < 0.05 were included in the adjusted logistic regression analysis (age, gestational age, parity, previous newborn weight, hypertension, newborn weight, and classification).
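Although the analysis was run in SAS, the sketch below reproduces the unadjusted/adjusted odds-ratio logic in Python with statsmodels for readers who want to replicate it; the file name and column names are hypothetical.

```python
# Hedged sketch of the logistic regression analysis; column names are
# hypothetical, with 'group' coded against the NL reference group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("psui_cohort.csv")  # hypothetical analysis file

# Unadjusted model: severe UI (0/1) regressed on study group only.
unadj = smf.logit("severe_ui ~ C(group, Treatment(reference='NL'))",
                  data=df).fit(disp=False)

# Adjusted model: add the covariates with baseline p < 0.05.
adj = smf.logit(
    "severe_ui ~ C(group, Treatment(reference='NL')) + age "
    "+ gestational_age + parity + hypertension + newborn_weight",
    data=df).fit(disp=False)

# Exponentiate coefficients to report odds ratios with 95% CIs.
or_table = np.exp(adj.conf_int())
or_table["OR"] = np.exp(adj.params)
print(or_table.rename(columns={0: "2.5%", 1: "97.5%"}))
```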
This study was approved by the Research Ethics Committee of the institution (CAAE: 41570815.0.0000.5411). All patients were informed about the purpose of the study, and those who agreed to participate signed a consent form before recruitment.
Results
Among the 563 women eligible for recruitment, 539 (95.7%) agreed to participate in the present study. Among these patients, 172 participants were included in the NL group (31.91%), 113 in the NO group (20.97%), 109 in the HL group (20.22%), and 145 in the HO group (26.90%). Baseline characteristics differed between groups, including clinical features such as age, gestational age, parity, previous newborn weight, hypertension, newborn weight, and classification. The background variables of the study population are shown in ►Table 1.
The overall prevalence of PSUI was 70.87% (n = 382), with no difference in the prevalence or type of UI between groups (►Table 2). However, the HO group had more frequent (p < 0.0001) and more abundant (p = 0.0009) leakage, and higher scores for the perceived impact of UI (p < 0.0001) and on the ICIQ-UI SF (p < 0.0001) and ISI (p < 0.0001) questionnaires (►Table 3).
►Table 4 shows the logistic regression analysis with unadjusted and adjusted odds ratios for UI. Surprisingly, when adjusted for age, gestational age, parity, previous newborn weight, hypertension, newborn weight, and classification, the hyperglycemic groups had significantly higher odds of UI severity than the other groups in the study. Furthermore, these groups presented a higher perceived impact of UI and severe ISI and ICIQ-UI SF scores.
Discussion
To the best of our knowledge, this is the first study to assess the influence of obesity and hyperglycemia on the odds of PSUI severity. This cross-sectional study assessed these associations using validated questionnaires and clinical data. 28 With respect to the baseline characteristics of the present study, this cohort represented the underlying population characteristics of women with hyperglycemia during pregnancy. Advancing maternal age has been recognized as a major risk factor for the development of hyperglycemia during pregnancy. 29 Other risk factors include greater parity, increased BMI, and hypertension. 30,31 Our data confirm these risk factors in the hyperglycemic groups of the present cohort. Such risk factors are also associated with an increased risk of developing UI. 6,32 In our study, although women in the HO group presented lower weight gain during pregnancy, which may be related to the treatment they received at the PDRC, the symptoms related to UI appeared to be more severe than those in the other groups.
According to Daly et al., 33 21.7% of the population studied consisted of women with new-onset leakage who had been continent in the 12 months before pregnancy. Brown et al. 34 found that the most common type of PSUI is SUI, characterized by unintentional loss of urine during physical movement or activity (e.g., sneezing, coughing, running, or heavy lifting). The pathophysiology of PSUI is multifactorial and yet to be fully understood. Hormonal and mechanical changes have been implicated as playing an important role. 35 In our sample, there was no difference in the prevalence of the UI types between the groups. Studies have shown that, irrespective of type, UI has detrimental effects on the quality of life of approximately 54.3% of all pregnant women, 36 and the quality of life of pregnant women with incontinence worsens with increasing gestational age to term. 37 Our sample presented a higher prevalence of PSUI (70.87%) than generally reported in the literature. However, this is consistent with a similar study with a smaller sample size covering the same gestational period (i.e., 34-38 weeks of gestation), in which the prevalence rate was 60.5%. 38 Further research is needed to explore the differences in the prevalence of PSUI in multicentric and multi-ethnic groups.
Our findings show that women with a BMI of ≥ 30 kg/m2 are significantly more likely to report less frequent incontinence episodes and smaller amounts of leakage, a moderately perceived impact of UI, and slight to moderate UI severity. A large longitudinal study that enrolled 10,098 women who were followed up from 28 weeks of gestation found that high prenatal BMI increased the risk of SUI in late pregnancy (OR: 1.037; 95% CI: 1.020-1.054). 39 Overweight and obesity are considered major modifiable risk factors for UI in young and middle-aged women. 40 Previous studies have shown that middle-aged women with obesity are 3.1 times more likely to have severe UI than women with a BMI in the normal range. 41 These differences might be related to the different types of inquiries used to address UI symptoms and to the study designs. Anatomical assessment by ultrasonography showed that bladder neck descent was more evident in women with obesity than in women with normal weight. 42 A high BMI increases intra-abdominal pressure, resulting in an imbalance between vesical pressure and urethral closure, triggering urine leakage. 15,43 The first study to report the prevalence of UI in women with GDM was conducted by Kim et al. 44 They recruited 228 women with GDM; 49% reported weekly or more frequent episodes of incontinence during pregnancy and 50% after delivery. 44 Another cross-sectional study found that GDM was an independent risk factor for PSUI (OR: 2.26; 95% CI: 1.116-4.579), and PSUI was a risk factor for UI 2 years after cesarean section (OR: 4.992; 95% CI: 1.383-18.023). 45 A large study 9 recruited 6653 women who were followed up for 2 years postpartum to investigate the association between GDM and postpartum UI. It demonstrated that women with GDM were more likely to report SUI (OR: 1.97; 95% CI: 1.56-2.51), UUI (OR: 3.11; 95% CI: 2.18-4.43), and MUI (OR: 2.73; 95% CI: 1.70-4.40). 9 Furthermore, another study showed that the occurrence of PSUI, the severity of UI, and the negative impact of UI on quality of life are increased in women with hyperglycemia during pregnancy. 38 Recent studies 46,47 conducted in animal models and pregnant women have aimed to identify and quantify the morphological changes in the rectus abdominis muscles due to hyperglycemia during pregnancy. Changes in fiber type, fiber area, and collagen content have been reported and may be related to diabetic myopathy.
The strengths of this study include the use of validated questionnaires that enable the identification of the type, frequency, severity, and perceived impact of UI. The International Consultation on Incontinence recognizes the ICIQ questionnaires as grade A (high-quality) measurement instruments for assessing UI. 48 Another strength of our study is the use of a database with the glycemic values of the participants and the established diagnostic criteria for GDM and obesity. An important limitation is the limited number of participants, which may have left our results underpowered, as well as the lack of an objective measure of UI, such as bladder diaries, a pad test, and/or urodynamic testing, to compare with our subjective measures.
Conclusion
The results of the present study show that hyperglycemia during pregnancy is an independent risk factor for PSUI. The logistic regression models showed that, when compared with normoglycemic lean women, women who are obese and have hyperglycemia during pregnancy are more likely to experience severe and very severe PSUI with an important perceived impact on daily life. The findings of our study provide information on PSUI in volunteers in the third trimester of pregnancy screened for hyperglycemia, and such findings are directly relevant to clinical practice. Such risk factors are preventable, manageable, and even curable, and healthcare professionals should provide evidence-based treatment.
Contributors
All authors were involved in the design and interpretation of the analyses, contributed to the writing of the manuscript, and read and approved the final manuscript.
Conflicts of Interest
The authors have no conflicts of interest to declare. | 2023-07-27T13:04:32.565Z | 2022-09-15T00:00:00.000 | {
"year": 2023,
"sha1": "b3d631979bdcef9bb8cd4ef26fcf4ad623b52469",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d58cefba8f88d797e62d743ea93542d6a30dad62",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
63124742 | pes2o/s2orc | v3-fos-license | Data intensive ATLAS workflows in the Cloud
This contribution reports on the feasibility of executing data intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests, ATLAS reconstruction jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the Cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal of finding the maximum ETC value.
Introduction
In the context of this paper, Cloud computing only covers the usage of Infrastructure as a Service (IaaS) from public/commercial providers. The potential benefits are difficult to quantify, since the Cloud's impact on a workflow's performance is not well understood. An additional difficulty is that there are many different workflows that are run simultaneously on each of the Worldwide LHC Computing Grid (WLCG) sites and therefore on the Cloud (as a site extension, or as a site of its own). The different workflows include Monte-Carlo simulations, which are not data intensive, and running them on the Cloud is mostly understood. Another type of workflow is physics analysis; since these consist of user-generated code, their performance is difficult to evaluate. This paper therefore focuses on raw data reconstruction workflows. The findings can be easily translated to any other data intensive workflow, as long as its resource usage and requirements are known. Section 2 takes a look into a reconstruction job, highlighting the required information. Once the workflow is fully understood, there might be optimisations that can be applied in order to increase the performance. One such optimisation is CPU overcommitment, which is detailed in section 3. Overcommitment means having more processes running simultaneously than there are CPU cores available. In order to predict how a Cloud site will perform, the model in section 4 is created. The idea is that the model will represent a graspable output metric, answering many questions in an understandable fashion. This can provide answers about choosing the best Cloud configuration, how to apply optimisations, which Cloud provider to choose, or which workflows to run on an infrastructure. Combining the concepts of the previous sections, section 5 gives some example applications of the Model. The paper concludes with sections 6 and 7.
ATLAS Job profile
As already mentioned, this paper focuses on ATLAS experiment [1] raw data reconstruction. In this case Athena(MP) version 20.7.6.7 was used (the exact command can be found in the supplementary data). The job was run on a Virtual Machine (VM) as would be the case on a Cloud site. The VM was on a hypervisor that was not used by other users, so there was no interference from other jobs. The VM had 8 virtual cores, 32 GB of RAM and a spinning disk. The input data was downloaded from a remote location before the job was executed. Figure 1 shows the utilisation of the network, CPU, disk and memory. The plotted data stem from the sar command (part of the sysstat package), which was executed with its output recorded every five seconds. The plot can be split into several parts, according to different processes happening within the job. The first separation is into transformations (red letters below plot). Each transformation can be split into several substeps according to their resource usage. The (black) numbered areas show distinct resource usages.
The first area depicts the input data download; it shows high network activity (as the job has not started, the other values are not plotted here). The very short second area shows the setup period, where the code and conditions are loaded -not much CPU is used, so mainly disk and network activity can be observed. In the third area the first data processing takes place and all eight available CPU cores are fully used. Some disk writing activity is going on throughout this step. The RAWtoESD transformation consists of areas one, two and three. The memory footprint is shown and the 32 GB are more than enough to accommodate the job. In the fourth area, the ESDtoAOD transformation is set up, which is then processed in the fifth area. These results are merged in the sixth area. Another output is produced in the ESDtoDPD transformation. In the end all the final mergings take place. Merging has a special significance, because it uses only a single CPU core yet takes a large portion of the overall time. In this case, merging means only 12.5 % of the computing power of the machine is used.
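As a minimal sketch of how such a profile can be collected, the snippet below samples CPU utilisation with sar at five-second intervals; the column layout of sar output differs between sysstat versions, so the parsing is an assumption and not the actual instrumentation behind Figure 1.

```python
# Sample CPU utilisation via `sar -u <interval> <count>` and plot it.
import subprocess
import matplotlib.pyplot as plt

def sample_cpu_idle(interval=5, count=720):
    """Return the %idle column of each sar sample."""
    out = subprocess.run(
        ["sar", "-u", str(interval), str(count)],
        capture_output=True, text=True, check=True,
    ).stdout
    idle = []
    for line in out.splitlines():
        parts = line.split()
        # data lines start with a timestamp; header rows fail the float cast
        if parts and parts[0][0].isdigit():
            try:
                idle.append(float(parts[-1]))
            except ValueError:
                pass
    return idle

idle = sample_cpu_idle()
plt.plot([100 - i for i in idle])  # utilisation = 100 - %idle
plt.xlabel("sample (5 s each)")
plt.ylabel("CPU utilisation [%]")
plt.savefig("job_profile_cpu.png")
```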
The AthenaMP (MultiProcess) framework that is used to run on multicore VMs, was introduced to save a substantial amount of memory compared to running multiple Athena jobs in parallel. The memory requirement of the processes is well below the total available memory. The processes need even less RAM than can be seen in the plot, which can be easily demonstrated by reducing the available RAM below the peak memory usage from the plot. It can be observed that the job starts to swap out pages, meaning the system transfers some data which is stored in RAM to the disk (disks are much slower than RAM). These pages are not read back in (light swapping) so the job is barely slowed down. A significant slowdown can be observed only after lowering the available RAM even further. Then the swapped out pages are actually needed and heavy swapping in and out of pages slows down the job significantly. This threshold is generally reached at around 1.3 GB RAM per core.
CPU overcommitment
CPU overcommitment is a resource optimisation technique. It means sending more work (parallel running jobs) to a computing resource than there are cores. Currently, CPUs are not used 100% during a workflow execution, due to IO-wait (especially for a Cloud site without local storage) and the ATLAS multicore workflow concept AthenaMP (serial merging steps, see job profile). Putting additional workload onto the VMs could make use of the time the CPUs are idle, while at the same time increasing the memory footprint. This trade-off between higher CPU utilisation and lower RAM requirements can be hard to manage on a static infrastructure. In the context of Cloud computing, the option to add or remove RAM exists by design, but affects the cost. Figure 2 shows tabular results from tests performed on the same VM as in section 2. The VM has eight cores and an LHCONE [3] network connection. The available RAM is reduced by having a background application lock 16 GB. The different scenarios that have been tested against each other are the variation of processes (not overcommitted vs. overcommitted), the available amount of RAM (16 or 32 GB) and the different locations of the data (BNL or local). The overcommitting factor of two simplifies the comparison due to AthenaMP restrictions. The data from BNL was read event by event during job execution (events are independent snapshots from the detector containing the results of particle collisions). Local data means it is already in the storage of the VM. Since it takes very little time to download the data from the local site to the storage of the VM (with respect to the job's duration), there is no differentiation between data on the VM or on local storage. Therefore this test depicts a best vs. worst case comparison. The results in Figure 2 show that overcommitting is very effective at hiding latency and/or serialisation, meaning it reduces the overhead when reading data on the fly. Since more processes need more RAM, overcommitting is RAM dependent. Given enough RAM, even the local data scenario benefits from overcommitting (due to the profile of the job). After demonstrating that there are benefits, the ideal RAM-to-core ratio is of interest. It can be obtained by testing many possible scenarios, or by applying the Model from section 4. The Model automatically gives the cost/benefit ratio, whereas it would be tedious to include e.g. hardware cost or Cloud market prices by hand.
Workflow and Infrastructure Model
The Model was created in order to answer questions like: For this next Cloud procurement, how much bandwidth is required between the Cloud provider and the data centre?
In order to answer this, many parameters have to be considered, for example the type of workflow (data intensive?), the speed of the CPU and many more. These parameters were either related to the infrastructure or to the workflow and were kept independent of each other. Soon more questions came up that could be answered by the model, or an improved version thereof. In particular the overall performance impact of Cloud site configurations, as well as workflow modifications, was of interest. The Model takes the plethora of workflow and infrastructure parameters as input and generates one graspable output metric. The metric that best describes the performance is ETC = Events/Time/Cost, where Cost depends on the Cloud provider. In Figure 3, some of the different in- and output parameters are depicted. On the top left the workflow input parameters are depicted. The field "HEPSPEC*s" makes it evident that they were chosen infrastructure-independent, as HEPSPEC06 (HS) [2] is a universal benchmark rating particular to High Energy Physics (HEP). In Figure 1, the job was split into several transformations, according to the different areas. The Model splits each workflow the same way, whereas multiple jobs in a workflow are seen as a series of their constituent transformations. These transformations can differ significantly from one another, as can be seen from the profile. An additional benefit of this modular approach is that the workflow can consist of multiple jobs and the Model accommodates them in the same way. Even switching between different job/workflow compositions is straightforward by adding and removing the necessary transformations.
For different use cases, the overall result can be other than Events/s/CHF. When searching for optimisations on an existing infrastructure, cost does not play a role and the result can be Events/s, which is a measure of the physics throughput (favours fast, usually more expensive hardware). If there is no time pressure, the infrastructure should be optimised for Events/CHF, producing physics as cheaply as possible (favours cheap, usually slower hardware). In addition, any input metric can become the result, which is useful for cases where infrastructure requirements are unknown, e.g. bandwidth (see section 5.2). This provides answers to questions like: There is a fixed budget, which Cloud infrastructure should be acquired in order to maximise the processed events? Which combination of bandwidth, caches and Cloud storage is the most beneficial? Similarly: What combination of workflows (combination of Simulation, Reconstruction and Analysis) is the best to run on this Cloud?
The model has been designed in a very general way, so that all ATLAS workflows, all the other experiments' workflows and even non-physics workflows can be described by it.
Model description
In Figure 3 the Model concept is sketched. The Model takes and combines the different workflow and infrastructure parameters in order to calculate the desired output metric. A better understanding of the Model can be gained by looking at the more detailed view in Figure 4. The Model consists of a five-layer structure, where each layer is combined mathematically into the next layer. In the end all results have to be determined from the workflow duration Time_OneWorkflow (layer 4), which is a linear combination of the substeps (layer 2) of each transformation (layer 3): Time_OneWorkflow = Σ_transformations Σ_substeps Duration. The substeps correspond to the splitting that has been done in Figure 1. Each substep duration, such as the stage-in duration StageIn_Time, is added to its transformation and therefore to the workflow duration. Additional considerations: Instead of downloading the data, it could be read event by event during job execution, which is negatively affected by high latencies (add "event access time"). Hereby, the same bandwidth constraints apply, but the network usage should look relatively flat, whereas the download scenario could look more spiky. This means the prediction is less precise for the download scenario (especially in the beginning, when all jobs download data at the same time -until the downloads are spread apart).
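To make the stage-in term concrete, the sketch below assumes the external link is shared evenly among concurrently downloading jobs; this even-sharing assumption and all names are ours, not taken from the paper's implementation.

```python
def stage_in_time(input_size_mb, link_bandwidth_mbit_s, parallel_downloads):
    """Seconds to download one job's input over an evenly shared link."""
    per_job_bandwidth = link_bandwidth_mbit_s / parallel_downloads  # Mbit/s
    return input_size_mb * 8 / per_job_bandwidth                    # seconds

# e.g. a 2300 MB input over a 10 Gbit/s link shared by 1000 jobs: ~1840 s
print(stage_in_time(2300, 10_000, 1000))
```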
In the same manner as for the StageIn_Time, all values of the second layer are determined; the detailed description of each item follows.
A similar calculation as for the stage-in duration is done for the stage-out duration (StageOut_Time), replacing input with output and download with upload.
The startup duration (Startup_Time) consists mainly of loading the code into memory, some small checks on the input data and retrieving the ATLAS metadata. This substep has a short duration compared to the other parts and is not influenced heavily by the infrastructure; therefore it is considered to be constant.
The generally most time-consuming substep is the processing: Processing_Time = (CPU_time_overall + CPU_wait_time + Idle_Time) / Nr_Cores, where CPU_wait_time denotes the time the CPU spends waiting, for instance on IO. Swapping happens when a transformation requires more RAM than is currently available. Additional complexity enters because a differentiation between light and heavy swapping has to be made (see section 2). The RAM discrepancy increases the transformation's duration (not necessarily in a linear fashion). Another transformation might not be affected by the same RAM limitations, because of a smaller memory footprint. There has been some effort in describing the exact behaviour mathematically, but since the goal is to find the maximum event throughput, which will never lie in this region, it is not included in the Model. The heavy kind of swapping is penalised severely by setting the Swap_Time to a high value; it is implemented as a constant so far. Since the major difference in speed is not between different disks of the same type, but between SSD and HDD, the implementation will probably choose between two scenarios (fast, slow). Validation and clean-up are not infrastructure dependent and are even shorter than the start-up substep; they are also considered constant. All the durations of the substeps (layer 2) of all transformations (layer 3) sum up to the overall duration (layer 4). Before continuing to the final result (layer 5): as mentioned in section 3, one purpose of the Model is to investigate and test optimisation scenarios. In order to get further results for the overcommitment scenario, the Model has to be modified. The memory limitation discussed for the processing is a crucial part, because overcommitting increases the overall memory footprint.
In case overcommitment happens, part of the additional workload can make use of the time the CPUs are idle, resulting in a higher CPU utilisation. The major change is the aggregation of all CPU utilisation and idle times, considering them for the whole workflow (instead of for each substep). The overall processing time for overcommitment, Processing_Time_OC, is determined by: Processing_Time_OC = Σ_transformations (CPU_time_overall + CPU_wait_time + Idle_Time − OC_Factor * (nr_processes − Nr_Cores)/nr_processes * (CPU_time_overall + CPU_wait_time)) / Nr_Cores. The overcommitment factor OC_Factor is a measure of which fraction of the overcommitted processes can be computed in parallel with the non-overcommitted processes (making use of the idle and CPU wait time). It is percentage based and ranges from 0 to 1. A value of one means it is possible to use all of the CPU wait and idle time to compute the additional processes. If there is no overcommitment (nr_processes ≤ Nr_Cores), the overcommitment factor OC_Factor = 0. The overall CPU time (CPU_time_overall) as well as the CPU wait time (CPU_wait_time) have to include the overcommitted processes. The first part of the equation is the same as before. What changes is that the time fraction gained from overcommitting through parallel processing is subtracted. The difficulty is to determine the overcommitment factor. Work is under way to describe it as a function of the input parameters.
In order to get to the overall duration of the workflow (layer 4), the processing time is added to the sum of all the other substeps (layer 2). The ETC result (layer 5) is calculated in the following way: ETC = (Nr_Evts * Nr_machines) / (Time_OneWorkflow * Cost_machine_sec). This metric is especially useful when considering Cloud providers. Some Cloud providers charge for data transfer. Including this cost reduces the events/cost ratio for all infrastructure configurations by the same amount.
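A condensed sketch of layers 2-5 could look as follows; the per-transformation inputs and the externally supplied oc_factor are placeholders, since the derivation of the overcommitment factor is still open (see above).

```python
# Sketch of the Model: per-transformation times aggregate into the workflow
# duration, from which the ETC metric is derived. All time inputs in seconds.
def processing_time_oc(transformations, nr_cores, nr_processes, oc_factor):
    if nr_processes <= nr_cores:   # no overcommitment -> OC_Factor = 0
        oc_factor = 0.0
    total = 0.0
    for t in transformations:      # each t: {"cpu": ..., "wait": ..., "idle": ...}
        busy = t["cpu"] + t["wait"]
        gain = oc_factor * (nr_processes - nr_cores) / nr_processes * busy
        total += (busy + t["idle"] - gain) / nr_cores
    return total

def etc(nr_evts, nr_machines, time_one_workflow, cost_machine_sec):
    return nr_evts * nr_machines / (time_one_workflow * cost_machine_sec)
```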
The Model is kept as simple as possible. For most parameters it is possible to go further into detail. This may be done once the Model is validated and has error estimations. There is a trade-off between keeping it simple and making it accurate. The benefits of a simpler model are that it is applicable without expert knowledge of the workflow and infrastructure and that it is more accessible to other experiments/users. Especially for Cloud computing, where some infrastructure aspects are unknown, a less complex model may be the only option. Disadvantages are that it might not be applicable to some special cases/configurations.
Overcommitment
One possible Model application is to find the best optimisation parameters. In the case of overcommitment, the parameter space that has to be explored is the amount of RAM against the number of processes. The variation of RAM is necessary, because the memory footprint changes according to the number of processes. Additional RAM comes at a cost, which is modelled the following way: There is a flat budget and a fixed amount of cores per VM. Additional RAM therefore means less budget for other parts of the infrastructure (CPUs), which means fewer VMs. The Model has been adapted accordingly, with the RAM-to-CPU pricing ratio taken from the pricing scheme of a Cloud provider. This can be adapted to a particular provider, or be exercised over several providers (hardware types) to get a comparison. Figure 5 shows the result of the Model. The maximum ETC value represents the configuration of processes/RAM that should be chosen in order to maximise throughput and minimise duration and cost. This plot gives a continuous depiction of all possible scenarios, something that would have taken a long time to fill with hundreds of measurements. In the estimation (of the unfinished Model) in Figure 5, the current 2-to-1 ratio of RAM [GB] to CPU [core] would not be the optimum. The maximum of ∼ 5863 Events/s/CHF lies at 14 GB RAM per machine with an overcommitment of 11 processes/machine.
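A grid scan in the spirit of Figure 5 might look like the sketch below; model_etc() stands in for the full Model and is hypothetical, as are the parameter ranges.

```python
# Hypothetical scan over the RAM x processes plane to locate the ETC maximum.
def scan(model_etc):
    grid = (
        (ram, procs, model_etc(ram_gb=ram, processes=procs))
        for ram in range(8, 33, 2)   # GB of RAM per machine
        for procs in range(8, 17)    # processes per 8-core machine
    )
    ram, procs, best = max(grid, key=lambda x: x[2])
    print(f"optimum: {ram} GB RAM, {procs} processes -> {best:.0f} Events/s/CHF")
```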
Bandwidth estimation
Another application is to predict the overall bandwidth requirement of a Cloud site. In principle any of the input parameters can be modelled. In this example, the concern was that the bandwidth between the Cloud provider and the external storage would not be enough, meaning CPUs would be constantly idle because they are waiting for slowly downloading input data. The question to be answered was whether a Cloud site consisting of 4000 CPU cores can run ATLAS raw data reconstruction efficiently while being connected by a 10 Gb WAN link. The Model showed that the link would be enough for the expected specification (1000 x 4 core VMs, 116 HS*s/evt CPU time, 850 kB/evt input, 2701 Evts per Job and 1417 kB/s instantaneous network read). However, it was not a hundred percent clear what kind of infrastructure the Cloud provider would supply. In addition, it has happened before that the ATLAS experiment changed its software or job configuration, or that the data itself changed. In order to understand these scenarios, some parameters were varied and thereby their impact evaluated. This could help to prepare for future or worst-case scenarios. Figure 6 shows the result.
The black horizontal line depicts the 10 Gb/s bandwidth limit (Bandwidth Limit). Various parameters of the Raw data reconstruction workflow were varied in order to include future changes to the workflow or the input data. The plot shows that within the chosen range, the size of the events (Size Event), the number of events per job (Number Events) and the instantaneous bandwidth (Network InstRead) do not have such a high impact on the bandwidth requirement (Network Traffic) as to make it exceed its limit. The instantaneous bandwidth is important when reading the input file event by event.
The variation of the CPU time per event (CPU Time Event), on the other hand, could become a problem if it goes below ∼ 30 HS*s. This could happen if the events become less complex, which is unlikely as the complexity is rising along with the pileup, or if CPUs become faster. Judging from the technological evolution of the last years, the scale of this progression is too small to have such a high impact.
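For intuition, the bandwidth estimate can be reproduced with back-of-the-envelope arithmetic; the figure of 10 HS per core is our assumption for illustration, while the remaining numbers come from the specification above.

```python
cores = 4000
hs_per_core = 10                # assumed CPU speed in HS per core
cpu_time_hs_s_per_evt = 116     # from the site specification
event_size_kb = 850

events_per_s = cores * hs_per_core / cpu_time_hs_s_per_evt
gbit_per_s = events_per_s * event_size_kb * 8 / 1e6
print(f"{gbit_per_s:.1f} Gb/s")  # ~2.3 Gb/s, well below the 10 Gb/s link

# The same arithmetic reproduces the ~30 HS*s threshold mentioned above:
# 4000 * 10 / 30 * 850 * 8 / 1e6 = ~9.1 Gb/s, i.e. close to the limit.
```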
Future Work
The Model is in the process of being validated. This will be achieved by modelling and then comparing the data gained from recent CERN Cloud procurements with data from the CERN computing centre and personal VMs (controlled environment) at CERN and Göttingen. Furthermore there will be error estimations, which will point to the largest error sources; these may be eliminated by a more in-depth description. In addition more optimisations will be investigated, especially scheduling and caching.
Conclusion
The concept of Cloud computing brings challenges but also opportunities. Flexible hardware allows adapting the hardware to the workflows, for example by overcommitting. Understanding the possible gains of using Cloud computing or different optimisation techniques can be difficult. Therefore it is important to have a deeper knowledge of the workflow itself. The Model can help to describe and choose different workflow and infrastructure combinations, as well as the "best" commercial Cloud provider. The Model depicts correlations between parameters and finds the impact they have on each other. It assists when planning for future changes or worst-case scenarios. | 2019-02-16T14:32:38.042Z | 2017-01-25T00:00:00.000 | {
"year": 2017,
"sha1": "6a5f3f0d64b8252e6949fd91e0fd9a701b14adff",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/898/6/062008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9285f45774620996edfd3dc0b191da8c5a5e0b03",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221938029 | pes2o/s2orc | v3-fos-license | A Sporadic and Lethal Lassa Fever Case in Forest Guinea, 2019
Lassa fever is a rodent-borne disease caused by Lassa virus (LASV). It causes fever, dizziness, vertigo, fatigue, coughing, diarrhea, internal bleeding and facial edema. The disease has been known in Guinea since 1960, but only anecdotal acute cases have been reported to date. In January 2019, a 35-year-old man, a wood merchant from Kissidougou, Forest Guinea, presented himself at several health centers with persistent fever, frequent vomiting and joint pain. He was repeatedly treated for severe malaria and died three weeks later in Mamou regional hospital. Differential diagnosis identified LASV as the cause of death. No secondary cases were reported. The complete LASV genome was obtained using next-generation sequencing. Phylogenetic analysis showed that this strain, namely the Kissidougou strain, belongs to clade IV circulating in Guinea and Sierra Leone, and is thought to have emerged some 150 years ago. Due to the similarity of its symptoms with malaria, Lassa fever is still a disease that is difficult to recognize and that may remain undiagnosed in health centers in Guinea.
Introduction
Lassa fever is a haemorrhagic fever due to Lassa virus (LASV) that was discovered in 1969 in Nigeria [1,2]. The disease is endemic in West Africa, particularly in Guinea, Sierra Leone, Liberia, Mali, Côte d'Ivoire, Togo, Benin and Nigeria. The majority of cases are registered in Sierra Leone and Nigeria, while the other countries have registered sporadic cases [3][4][5]. In Guinea, the disease is widespread in the south, as evidenced by seroprevalence rates in humans ranging from 20% to 40% in contrast to the 3% to 8% observed in the north [6]. Ecological studies on rodent reservoirs show a similar distribution of the Lassa virus [7][8][9][10][11].
However, Guinea has very few reported acute human cases, and only a few studies have a posteriori described acute Lassa fever cases in the prefectures of Kindia, Faranah, Kissidougou, Guekedou, Macenta and N'Zérékoré in 1992 and 1996-1999 [12][13][14]. Therefore, the emergence of one acute case in 2018 in Macenta [15] and another one in 2019 in Kissidougou [16] has prompted great interest, necessitating immediate investigation. The epidemic potential of Lassa fever was recently reassessed by the World Health Organization, which has listed Lassa fever among the top five zoonoses to be monitored, along with Ebola, MERS-CoV, Nipah and Zika (https://www.who.int/teams/blueprint). In this article, we describe both the epidemiological and molecular investigation of the Kissidougou case and his contacts.
Field Investigation
Field investigations were managed by the Agence Nationale de Sécurité Sanitaire (ANSS) in consultation with members of the Ministry of Health, the Gamal-Nasser University of Conakry, the World Health Organization (WHO), the NGO Médecins sans Frontières (MSF) and the Centers for Disease Control and Prevention (CDC). A first investigation took place 6-7 February 2019 in the prefecture of Kissidougou, where the patient lived. A second investigation took place 6-7 February 2019 in the Mamou prefecture, where the patient died (Figure 1). Contacts in health centres and hospitals were identified and their blood was collected for PCR testing. The tests were carried out in the laboratory on 31 January 2019 for the patient and from 8 to 9 February 2019 for the contacts (Table S1). All subjects and the family of the deceased patient gave their informed consent. The protocol was approved by the Ethics Committee for Health Research in Guinea (permit n° 129/CNERS/16, approval 11 October 2016).
Molecular and Serological Diagnosis
The presence of antibodies against HIV-1/HIV-2 and yellow fever virus (YFV) was tested using the INSTI kit (Biolytical, Richmond, Canada) and ELISA (Pasteur Institute, Dakar, Senegal), respectively. The presence of the hepatitis B virus (HBV) was tested using a rapid screening kit (CapitalBio Technology, Beijing, China). The presence of the Ebola virus (EBOV) and LASV was investigated by reverse transcription-polymerase chain reaction (RT-PCR). Viral RNA was extracted from serum samples using the QIAmp viral RNA kit (Qiagen, Hilden, Germany). In Conakry, the EBOV test was carried out by real-time RT-PCR, using the RealStar® Filovirus Screen RT-PCR Kit 1.0 (altona Diagnostics, Hamburg, Germany) on a Smart Cycler II system (Cepheid, Sunnyvale, CA, USA). The LASV test was carried out using a conventional RT-PCR targeting the glycoprotein (GP) with the primers LVS36+ (5′-ACCGGGGATCCTAGGCATTT-3′) and LVS339- (5′-GTTCTTTGTGCAGGAMAGGGGCATKGTCAT-3′) [18]. The EBOV and LASV tests were performed on both the patient and contacts in Conakry (Table S1). A confirmatory real-time RT-PCR LASV test using the RealStar® Lassavirus RT-PCR kit 2.0 (altona Diagnostics) on the Rotor-Gene Q platform (Qiagen, Hilden, Germany) was performed on the serum sample of the patient in our satellite laboratory in Gueckedou. The LASV-specific RT-PCR kit targets both the S- and L-segments of LASV.
LASV Sequencing
The serum sample from the patient was dried on a filter paper and sent to the Bernhard Nocht Institute for Tropical Medicine in Hamburg, Germany. Extraction and metagenomic library preparation for next generation sequencing (NGS) on the Illumina platform were performed as described previously [5]. Briefly, viral RNA was extracted directly from dried serum on filter paper using the QIAmp viral RNA kit (Qiagen), further digested with DNAse (TURBO DNase, Thermo Fisher Scientific, Carlsbad, USA), randomly reverse-transcribed and amplified using a Sequence-Independent Single-Primer Amplification (SISPA) approach. The Illumina sequencing library was prepared using the Nextera XT v2 Kit (Illumina, San Diego, USA) with 1 ng of SISPA-amplified cDNA, according to the manufacturer's instructions, with a total of 12 cycles in the library amplification PCR, and further sequenced on a 2 × 300 bp Illumina MiSeq run. Majority consensus was obtained with bases called at a minimum depth of 20x and a support fraction of 70%. Any base location that did not fulfil the depth and support fraction was assigned an "N" IUPAC ambiguity notation. The complete sequences were submitted to GenBank under the names and accession numbers LASV/GUI/KIS-2019-L-segment #MT861993 and LASV/GUI/KIS-2019-S-segment #MT861994.
Phylogenetic Analysis
The nucleotide sequences of the full-length glycoprotein precursor (GPC), nucleoprotein (NP) and polymerase (L) were aligned separately in three data sets including 41 sequences each. The sequences were chosen to be representative of their clusters after a preliminary analysis done with 64 NP sequences only and elimination of similar or clone sequences. The alignment of nucleotides was realized according to the position of amino acids in the protein alignment. The phylogeny was inferred by the Bayesian Markov Chain Monte Carlo (MCMC) method implemented in the BEAST software, version 1.10.1 [19]. With a view to performing a time-calibrated phylogeny, the parameters were set in BEAUTI as follows: the tip dates at the year level, the substitution model as GTR + gamma with codon partition into positions 1, 2, 3, and the clock model as strict or uncorrelated relaxed. A coalescent tree with a constant-size population was set as prior. The length of the chain was 10 million, with echo states and log parameters every 10,000 steps. The xml files issued from BEAUTI were run in BEAST and checked in TRACER. After checking that the effective sample size was above 200 for all the parameters, the consensus trees were obtained in TreeAnnotator, and then visualized through FigTree (BEAST packages, https://beast.community/programs).
The Case Description
On 7 January 2019, a 35-year-old man consulted a clinic in Kissidougou because of fever accompanied by chills, dizziness, very frequent bilious vomiting and joint pains. Malaria was diagnosed and the patient was treated with paracetamol and quinine. Despite complying with the malaria treatment, his symptoms remained unchanged for one week. He returned to the same clinic, which referred him to the regional hospital located in Kissidougou on 21 January 2019. At the end of the consultation, a diagnosis of malaria and typhoid fever was made. He received an infusion and hospitalization was recommended. However, the patient refused to be hospitalized and went back home. After three days of treatment, and feeling that there was no improvement, he went to another health centre. Three days later, he decided to travel to Mamou, where he stayed two days with his family. Noting that there was no improvement in his status, his parents decided to bring him to the regional hospital of Mamou on 28 January 2019, where he was admitted at 2 p.m. to the emergency room at the Epidemic Treatment Centre (CT-EPI). At the time of admission, the symptoms were as follows: fever and chills, dizziness, vomiting, joint pain, diarrhoea and prostration. A diagnosis of severe malaria was given and he received an intensive supportive antimalarial treatment (rehydration, artesunate, ampicillin, paracetamol, dicynone). At 3 p.m., the patient started bleeding, with blood-tinged cough and epistaxis. The physician in charge of the intensive care unit raised the suspicion of a viral haemorrhagic fever (VHF) such as Ebola virus disease. A blood sample was taken and packed for dispatch to the reference laboratory in Conakry. The patient died at 10 p.m. with a picture of toxic-infectious shock and diffuse bleeding. The body was transported to the morgue and transferred to the Red Cross for a dignified and secure burial.
Laboratory Diagnosis, Field Investigation and Contact Tracing
On 30 January 2019, the laboratory received the blood sample of the case and proceeded with various tests. The differential diagnosis included malaria, serology of HIV and YFV, and acute infection by HBV, EBOV and LASV. Only the presence of LASV was confirmed, on 30 January 2019, by conventional LASV RT-PCR. It was further re-tested by real-time LASV RT-PCR, and LASV Ct values of 36.0 for the S-segment and 24.5 for the L-segment were found. According to the equation of the standard curve for the GPC assay (y = -0.293x + 14.324, where y = log10 (RNA copies/mL plasma) and x = Ct), we can estimate the number of copies/mL to be 5.97E+03. The confirmation of a Lassa fever case launched field investigations and contact tracing activities. The investigation in Kissidougou revealed that the patient was an entrepreneur who traded in timber. Before falling ill, he had spent a few days in Dandou, a village 32 km from Kissidougou, where he frequently went to do business. A total of 41 individuals in contact with the case in Kissidougou (n = 7), Mamou (n = 32) and Conakry (n = 2) were identified by the field teams, and further sampled for LASV RT-PCR as per Ministry of Health National Strategy guidelines in the case of VHF suspicion, even in the absence of symptoms. None of them were found positive for EBOV or LASV by RT-PCR (Table S1).
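The viral-load arithmetic can be checked in two lines; this snippet merely re-evaluates the published standard curve at the patient's Ct value and is not part of the study's laboratory pipeline.

```python
# GPC standard curve: log10(copies/mL) = -0.293 * Ct + 14.324
ct = 36.0
log10_copies = -0.293 * ct + 14.324           # = 3.776
print(f"{10 ** log10_copies:.2e} copies/mL")  # ~5.97e+03
```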
Phylogeny
Phylogenetic analysis shows that the LASV strain from the case, further named the Kissidougou strain, belongs to clade IV of the Lassa mammarenavirus genus. This clade includes LASV known to circulate in Faranah (Upper Guinea) and in Macenta (Forest Guinea). Trees based on the combined complete glycoprotein (GP) and nucleoprotein (NP) (Figure 2A) and on the polymerase (Figure 2B) show that the Kissidougou strain is very closely related to a LASV strain identified in Liberia in 1981 (LIB-807987). Nucleotide similarities between these two strains are 88.9% for GP, 87.8% for NP and 86.7% for the polymerase. The amino acid translation gives similarities of 96.9% for GP, 96.0% for NP and 93.5% for the polymerase. The time of the most recent common ancestor (TMRCA) of the Kissidougou and LIB-807987 cluster is estimated at 139 (95% Highest Posterior Density (HPD) interval: 119-160) years using the GPC and NP, and at 155 (95% HPD interval: 139-170) years using the polymerase. In addition, the analysis of the partial NP, including four more sequences published in Bowen et al. 2000 [14], further supports that the Kissidougou cluster is different from that of Faranah, Macenta and N'zérékoré (Figure S1).
Identification of the Kissidougou Sub-Lineage
In Guinea, numerous LASV sequences have been described in rodents, notably in the regions of Faranah (221 sequences), Kindia (6 sequences) and Guékedou (9 sequences) [9][10][11][20][21]. However, only 5 sequences are derived from humans: 2 complete ones from Faranah and Macenta [22,23], and 3 partial ones from Kissidougou and Nzérékoré [14]. Our report thus represents the sixth description of a LASV strain isolated from humans in Guinea. It demonstrates the circulation of LASV in the surroundings of Kissidougou, which was already observed in 1960 among missionaries living in Telekoro near Kissidougou. Indeed, that historical study revealed that three individuals who had fever with long convalescence, with or without deafness, harboured LASV-neutralizing antibodies [17]. The child of one of the missionaries also had LASV-neutralizing antibodies without signs of a febrile episode. From 1996 to 1999, an investigation among patients hospitalized in Kissidougou revealed that six of them were Lassa seropositive [12]. They came from villages around the town (Figure 1). This indicates that LASV has been circulating for 60 years in the Kissidougou area. Phylogenetic analysis of the polymerase indicates an older origin, dating back 155 (95% HPD interval: 139-170) years.
The strain described here is clearly related to the sequence LIB-807987 observed in 1981 in Liberia. It may thus be speculated that trade, including the timber trade between Forest Guinea and Liberia, and commercial business favour LASV circulation between Kissidougou and Liberia. The case was, indeed, a timber merchant who used to sell wood in Forest Guinea, without prior travel history to Liberia. Yet, a clear geographical origin (i.e., Liberia or Guinea-Kissidougou) for this sub-cluster remains speculative due to a lack of information about LIB-807987, as well as a lack of additional sequences. Thus, the name "Kissidougou cluster" highlights that it is different from the Macenta cluster while still being part of the LASV known to circulate in Guinea. The Macenta cluster also includes three sequences, which, although identified in Liberia, originate from Macenta, Guinea, a town bordering Liberia. For example, the case described in 2018, the sequence of which is referenced under number LIB-LF18040 (MH215285 in Wiley et al. 2019), was a woman working at SOGUIPAH, near Macenta. She was diagnosed in Ganta, the neighbouring town in Liberia. The other large "Liberia" cluster, composed of sequences identified in, and originating from, Lofa, Bong and Nimba counties in Liberia, forms a cluster independent from that of Guinea. Altogether, the description of this new strain allows us to speculate about a "Macenta" and a "Kissidougou" cluster circulating in Forest Guinea. These clusters appear to be more recent than the ones observed in Faranah, Upper Guinea, and in Liberia. The latter country is probably the entry point of the virus into the Mano River region [24].
Strengths and Weaknesses of the Guinean Health System
In three weeks, the patient went through two health centres, one in Kissidougou and one in Mamou. Moreover, despite returning to the same health care facility in Kissidougou at a one-week interval with worsened symptoms, no suspicion of VHF was raised. Even following his admission in a critical state at the CT-EPI of Mamou, the diagnosis of severe malaria continued to be considered. This indicates that the accurate and rapid clinical diagnosis of VHF in endemic areas is challenging, despite the 2014-2016 Ebola virus disease epidemic in the country. This could be improved by providing regular awareness training on VHF case definitions and access to adequate communication tools (e.g., posters, flyers, drawings etc.).
On the other hand, the turnaround time for differential diagnosis was short, around two days (Table S1). This included sample collection, transportation to the laboratories in Conakry and Guéckédou, analysis, results interpretation and reporting to the ANSS and WHO. Since the Ebola outbreak, the diagnostic chain has greatly improved and two days are now required to provide a diagnostic result of samples collected in remote areas. Similarly, field investigations and contact tracing activities, which have been in place since the creation of the ANSS in July 2014, four months after the start of the Ebola crisis in Guinea, have been rapidly launched following the alert and no secondary cases have been identified. Finally, the absence of secondary infections at the health centres visited by the case also indicates adequate infection prevention and control practices.
Conclusions
This case report shows that early identification of VHF cases remains challenging in remote areas. Regular awareness training may facilitate the implementation of improved field surveillance and early detection. The molecular description supports a new sub-lineage of LASV in Forest Guinea. | 2020-09-27T13:05:33.147Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "fcda78a697b1f04c9ce4f1a0d0e6b3488d3b652f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/12/10/1062/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9255b1ad69539eb025d948074a55cce50911bf5b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34282498 | pes2o/s2orc | v3-fos-license | Commentary triggered by the Individual Participant Data Meta-Analysis Consortium study of job strain and myocardial infarction risk
Mika Kivimäki initiated the Individual Participant Data (IPD) Meta-Analysis Consortium, which currently has 50 members. The Consortium recently published several research reports on the relationship between job strain (high psychological demands and low decision latitude at work), on the one hand, and cardiovascular disease and its risk factors, on the other hand. Since IPD represents a novel way to conduct epidemiological research collaboration and as some of the findings from the IPD Consortium have been criticized, this commentary aims to address the rationales behind the approach and discuss some of the main criticisms of the Consortium.
Researchers must tackle many problems when interpreting associations between a psychosocial work environment factor and a health outcome. First of all, work environment factors belong to a "distal" rather than a "proximal" group. In other words, the closer one gets to a biological mechanism relevant for disease development, the more likely it is that a relevant association will be strong. For instance, small samples are needed to establish an individual "brain" factor associated with depression or emotional exhaustion -simply because the brain factor is more or less depression. Factors related to work organization, on the other hand, are more "distal" since there are many factors that influence the relationship between the environment and the body's organs. Accordingly, it is sometimes difficult to obtain sufficient statistical power for the establishment of an undisputable association. For instance, the long "distance" between job strain and the outcome, myocardial infarction (MI), explains why we should expect a weaker association than in the study of "proximal" factors, for instance myocardial metabolism in relation to MI. Nevertheless, on a societal level, job strain is very important since it affects many working people, with a prevalence in the working population (in the IPD Consortium study's operational definition) of around 15%. Accordingly, if an unequivocal association is established, it is of major importance to those responsible for work organization and interventions designed to improve working conditions. However, large samples are needed to establish unequivocal proof.
Since Karasek introduced his demand-control model (1), there have been many studies of the association between job strain and risk of MI. These studies have become increasingly sophisticated over the years. In addition, there is accumulated indirect evidence from longitudinal studies of the relationship between job strain, on the one hand, and blood pressure variations and endocrine, metabolic, and immunological parameters, on the other hand. The results from these studies give us a plausible physiological explanation of the assumed relationship between job strain and MI risk.
A reason why this research field has attracted strong attention is that MI is an undisputable illness outcome. The study of MI risk, therefore, serves as a good scientific model for studying the relationship between job strain and adverse health outcomes in general.
Establishment of and rationale behind the IPD Consortium
There have been divided opinions about the relationship between job strain and risk of MI. The main reason for the controversy has been that, despite the relatively large size of several of the published cohort studies with number of observation years often in the range of 50 000, the statistical power has been too small for an unequivocal establishment of an association. As a result, Mika Kivimäki invited a number researchers, who had included psychosocial job factors in their study protocols and had or had not published results on the relationship between job strain and MI risk, to establish the IPD Consortium. Including unpublished cohorts was important as this provided a possibility to address the problem of publication bias -the tendency of researchers and journals to publish only positive findings that can lead to inflated associations.
The IPD Consortium received financial support from research foundations in the UK, Sweden, Denmark, France, and Finland. The process started with the goal of testing the hypothesis that job strain is associated with MI risk. Research was divided into several stages, the first of which was to scrutinize questions regarding job strain in the participating cohorts. As expected, there were differences in formulations and response categories and also differences in the number of questions related to psychological demands and decision latitude. However, methodological work overseen by "judges" assessing the similarity between items in the different cohort questionnaires made it possible to establish which cohort questionnaires should be included, as well as the minimum number of questions for each dimension, in the total IPD study with acceptable precision for the two variables (ie, psychological demands and decision latitude). This was done and the results published (2) before the next phase started. Since the job strain variables in the different studies had been homogenized, it was possible to establish the median values for both demands and decision latitude within each cohort. According to the literature's most common operationalization of job strain, the group with demands above the median and, concomitantly, decision latitude below the median was defined as the exposed group, and the remaining population as the non-exposed. This enabled all individuals (almost 200 000) to be combined in a large cohort study. The average follow-up time was 7.5 years. The assessment of standard risk factors was treated in the same way -criteria were established before the analyses of the association between job strain and MI risk began. Two published articles (3,4) described the scientific process in the IPD Consortium.
Published findings
The findings of the IPD Consortium have been published in The Lancet (5) and the Canadian Medical Association Journal (CMAJ) (6). They show a consistent and statistically significant age- and sex-adjusted relationship between job strain and MI risk, which remains even after further adjustment for country, socioeconomic position, and standard risk factors. The findings also show that excluding subjects with a short time lag between job strain assessment and onset of MI does not affect the statistical significance of the association, addressing the problem described above of vague pre-heart-disease illness symptoms possibly influencing the job description. The findings also addressed a second question that had been raised in previous research (7): the risk associated with a combination of high psychological demands and low decision latitude was greater than the risks associated with each of these two exposures alone. Finally, the IPD papers provided a response to the question regarding publication bias: yes, there seemed to be some effect of publication bias, since the odds ratio was lower in the unpublished than in the published studies. However, even in the unpublished studies, there was a statistically significant relationship between job strain and subsequent MI risk.
Criticism of the IPD findings
Accordingly, the IPD Consortium has delivered clear responses to several of the questions that have been debated ever since Karasek introduced the model. The problem created by a "distal" relationship was solved by means of strict homogenization of the assessments in many cohorts so they could be used as one study. The findings were positive despite many conservative measures safeguarding against factors that could give rise to inflated relationships. Nevertheless, the IPD Consortium has been criticized for several reasons (8-10). The most critical voices have come from researchers involved in work-intervention research. The critical points can be divided into two groups.
Standard risk factors' role in the relationship between job strain and MI. The first and most severe criticism relates to the way in which the role of the standard risk factors for heart disease (diabetes, high blood pressure, smoking, body mass index) has been presented in The Lancet and CMAJ. The latter article was based only on those cohorts that had full information about relevant lifestyle factors (102 000 participants), and the empirical conclusion has not been debated: job strain adds independently to illness risk even when standard risk factors have been taken into account. And, conversely, standard risk factors add independently to risk regardless of job strain. However, the independent association between job strain and MI risk is much weaker than the corresponding association between the standard risk factors and MI. No one could criticize the IPD Consortium for publishing that finding in itself. However, the practical interpretation formulated in The Lancet, "Our findings suggest that prevention of workplace stress might decrease disease incidence; however, this strategy would have a much smaller effect than would tackling of standard risk factors, such as smoking" (5, p1491), has not been endorsed by all members of the IPD. In my mind, the conclusion goes beyond the analyses of the IPD study. I tried to change these formulations, but of course in a large group of researchers (46 authors for The Lancet article) one has to compromise, and it becomes practically difficult to discuss changes, particularly in the final phase before publication. Critics of the IPD Consortium argue that this statement could be used as a justification among clinicians (cardiologists, occupational physicians, and general practitioners) for disregarding the patient's psychosocial working conditions entirely. So what is the role of standard risk factors in the relationship between job strain and MI risk?
In its studies (11,12), the Consortium has shown that job strain is significantly associated with diabetes, physical inactivity, and obesity, as well as with a summarized Framingham risk factor score. According to a recent systematic review (13), job strain is associated with elevated blood pressure, one of the classical standard risk factors. Opinions are divided regarding this relationship for blood pressure assessed in the classical way (at rest in the "doctor's office"), but the relationship is better established for blood pressure that has been automatically monitored during normal daily activities (14). As several authors have pointed out, long-lasting excessive stress may increase the risk of metabolic syndrome, which in itself increases heart disease risk. In addition, excessive intake of carbohydrates and cigarette smoking could be regarded as "external" efforts that an individual may resort to when the body's own energy mobilization is failing. Accordingly, part of the relationship between job strain and lifestyle factors may mediate the relationship between job strain and MI risk. Thus, the relationships between standard risk factors and job strain make it very difficult to interpret analyses of job strain effects on risk after adjustment for such factors, since some of the effect of job strain goes through them.
Adjusting for lifestyle factors is valuable, but, given this complexity, one should avoid far-reaching conclusions from the results. There is also a difference between the effect of job strain on different parts of the "ladder of lifestyle risk factors". A closer look at the CMAJ table describing the relationship between the number of lifestyle risk factors in groups with and without job strain reveals that there is indeed very little added risk going from "no job strain" to "job strain" (from 2.62 to 2.69) among the 14 000 participants with at least two of the lifestyle risk factors. But for those with no lifestyle risk factors (N=55 000) and those with only one such risk factor (N=33 000), job strain increases the risk from 1.00 (reference group) to 1.27 and from 1.47 to 1.87, respectively. From a population perspective, we should pay attention to these risk increases at the lower end of the lifestyle risk ladder. The prevalence of cigarette smoking varies vastly between countries. Smoking, for example, is therefore a risk factor with less potential payoff for cardiovascular prevention in Sweden than in many other countries, since there are already few smokers in the at-risk ages in Sweden. Accordingly, it is difficult to make generalized statements regarding the possible pay-off of potential lifestyle versus work environment interventions.
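As a back-of-the-envelope illustration of this point, the following sketch recomputes the relative increases from the figures quoted above; the risk values and approximate stratum sizes come from the text, and treating them as simple ratios is a simplification for illustration only:

```python
# Stratum: (approx. N participants, risk without job strain, risk with job strain)
strata = {
    "0 lifestyle risk factors":  (55_000, 1.00, 1.27),
    "1 lifestyle risk factor":   (33_000, 1.47, 1.87),
    "2+ lifestyle risk factors": (14_000, 2.62, 2.69),
}
for label, (n, without, with_strain) in strata.items():
    rel_increase = (with_strain - without) / without * 100
    print(f"{label} (N={n:,}): {without:.2f} -> {with_strain:.2f} "
          f"(+{rel_increase:.0f}% with job strain)")
# Prints roughly +27%, +27%, and +3%: the added risk from job strain is
# concentrated at the lower end of the lifestyle risk ladder.
```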
Working conditions and lifestyle. The most important argument concerning working conditions and lifestyle, however, is based upon clinical experience: employees with a poor work environment may be unwilling to follow lifestyle health promotion advice. In contrast, when subjects feel that their working conditions improve, they may feel more motivated to follow individual health promotion advice. Admittedly, there is little hard scientific evidence for this. However, of relevance is my research group's observation in a controlled job-intervention study (a year-long program for improving psychosocial know-how among managers) showing that, after a year's intervention, decision authority had developed statistically significantly more favorably among the intervention group's employees (those reporting to managers in the experimental group). At the same time, the morning plasma cortisol concentration (related to the metabolic syndrome), the plasma concentration of the liver enzyme gamma-glutamyl transferase, and (among women only) the serum concentration of triglycerides had improved among employees in the intervention but not the control group. These findings indicate that an improved psychosocial work environment may contribute to an improved risk factor profile among employees (15). Sorensen has discussed the effect of occupational class on smoking habits and its potential role for anti-smoking propaganda (16). It should also be pointed out that, with the use of the terminology introduced above, lifestyle factors should be regarded as "more proximal" (thus likely to show stronger direct associations with heart disease risk) in relation to the illness processes than psychosocial work environment factors.
Studies showing that subjects with job strain who have suffered an MI are more likely than others to experience a new cardiac event during follow-up (17-19) illustrate that job strain is of importance to heart disease.
The interrelationships between work environment and lifestyle risk factors should preferably be studied with more sophisticated statistical models. Karasek has more recently expanded his demand-control theory and discussed the complex nature of these relationships (20). He points out that the key factor in relationships that give rise to long-term stress reactions at work is the control of storage and release of energy in the task of maintaining stable physiological self-regulation. He notes that, in order to resolve current scientific dilemmas, one should ideally study multiple and linked levels of both proximal (even more proximal than we currently study) and more distal (for instance, work organization and the global economy) factors. Karasek and collaborators (21) have recently described the design of a possible large-scale study that would take these intermediary mechanisms into account.
According to the IPD Consortium's article in the CMAJ, "For many people, avoidance of stress at work is unrealistic. The absence of strong evidence for effective interventions to reduce job strain therefore raises the challenge of identifying additional approaches for dealing with the health impact of stress in the workplace" (6, p764). This has caused tension among the readership. In mass media and in comments from cardiologists following the IPD publications, this seems to have given the impression that organizational job interventions have no scientific basis (although this was emphatically not stated in the articles). It is true that no published large-scale controlled intervention study has aimed at reducing job strain on a collective organizational level, with subsequent follow-up of health effects, and provided strong evidence for an effect on heart disease risk. Still, there are publications indicating that this could be worth a closer look. For a review, see La Montagne et al (22). Bourbonnais et al's study (23) showed beneficial three-year effects of a work organization improvement program on the mental health of employees, with no similar change in the control group. Another study, by Bond and Bunce (24), randomizing six offices to intervention and control groups, showed that mental health, sick leave, and self-rated productivity improved after one year in the intervention but not the control group; this study also showed that an increased perception of control was the mediating factor. In the study that my own group (15) performed on the effects of improved management on employee health, parameters related to the metabolic syndrome as well as reported decision authority improved for the employees in the intervention but not the control group.
Strength of the association between job strain and MI.
The second area of criticism leveled at the IPD findings relates to statements about the strength of the association between job strain and MI risk. It is true that in some of the IPD texts (for instance, in the abstract of The Lancet article), the formulations regarding the studied associations were expanded to work stress in general. This is clearly a mistake, since the analysis presented concerned the effect of job strain, which is only one aspect of workplace stress. The IPD researchers are involved in research on other kinds of workplace stress, such as effort-reward imbalance (25), job insecurity (26), organizational justice (27), and leadership (28,29), and the IPD Consortium plans to add exposures (effort-reward imbalance, job insecurity, and long working hours) to its study of job strain. This will enable the group to examine whether different kinds of adverse psychosocial factors compound risk. There are indications of this in the previous literature (30,31).
Another question arising in the critique is whether adding health outcomes to one another would provide a more useful analysis of the total impact of the psychosocial work environment. The IPD Consortium has already published a study showing that job strain is associated with increased diabetes risk (11), while an ongoing study examines whether job strain is prospectively related to the risk of stroke. A study of job strain in relation to depression is also being planned. Therefore, in the future, we will see studies from the IPD group addressing the extent to which an adverse work environment (with several exposures) adds to the risk of developing any of these illnesses that are metabolically related to one another.
Critical methodological questions have also been raised regarding the job strain assessment in the IPD study. A careful analysis of the questionnaires in the different cohort studies was performed, and decisions were made regarding "sufficient homogeneity". The statistical analyses showed that little information was lost with the applied criteria. However, it could still be that the standard Job Content Questionnaire, which has been the international standard for job strain assessment, is a better instrument than the methodological compromise that we achieved. Precision may have been lost as a result and, according to my understanding, this is more likely to have given rise to random error than to systematic overestimation of risk.
Another important point is that there was only one assessment of job strain (at the start of follow-up) in the IPD study. In a previous study, Kivimäki et al (32) showed that stable job strain, lasting through at least two measurements with an interval of two years, is a stronger predictor than one measurement alone. People change jobs and stop working. Accordingly, the precision of predictions will improve when there are several longitudinal measurements.
An additional problem is that subjects with the worst job conditions may not participate in this type of study. This methodological problem is not confined to the IPD Consortium studies. "Overstressed" subjects may have difficulties participating in our studies for practical reasons (irregular work exposure, for instance). Also, those who have already developed a disease or illness caused by adverse working conditions will not be included. We know too little about the effects of this kind of non-participation. There is reason to believe that it may give rise to an underestimation of true risks, as discussed by Collins et al (33). Another limitation is that our study represents a Western European perspective. In the European countries participating in our study, work democracy has been an important topic in societal discussions for several decades. It is possible that the relationships would be different in, for instance, South Asian, South American, and African countries.
The IPD Consortium also examines health outcomes other than cardiovascular disease. The first publication in this group (34) focused on colorectal, lung, breast, and prostate cancers. No significant associations were found between job strain and the incidence of these forms of cancer.
Concluding remarks
The IPD Consortium has great potential to provide solid knowledge in the field of psychosocial factors and health. IPD represents a new collaborative approach. Participating researchers have to find ways of homogenizing their exposure and outcome assessments in order to make it possible to form very big cohorts in which every participant contributes equally. This is an improvement on the usual meta-analysis procedure. A possible drawback of the need to compromise is that the final number of questions that can be used is more limited than in the original study cohorts, possibly decreasing precision. Another issue is the complexity of managing collaboration between so many researchers. The IPD Consortium has presented good opportunities for everyone to provide input during the production of articles, but under time constraints (eg, the final phase of The Lancet article) it is difficult, if not impossible, to include opinions from 46 coauthors. Perhaps a formalized structure would have helped (eg, a small group could be democratically elected by all the participating researchers to act on their behalf in "critical situations").
The fact that there is an independent relationship between job strain and MI risk already provides an important rationale for employers to deal with psychosocial stress, regardless of the size of the association, simply because MI is a serious illness which gives rise to suffering, risk of death, and productivity loss for both the company and society as a whole. It should be pointed out that there are a number of other valid reasons for the employer to deal with employee stress, not least financial ones (see, for instance, 35). Psychosocial stress at work may cause much more financial damage than employers know. The IPD Consortium will, in the near future, examine the total effect of job strain, job insecurity, and effort-reward imbalance on important health outcomes that to some extent have a shared etiology (depression, diabetes, stroke, and MI). One critique of the IPD Consortium is that our strategy may lead to over-interpretation of "bits and pieces" of findings. From that point of view, a joint analysis of all the exposures and outcomes from the start would have been better. However, the IPD Consortium has worked very rapidly and effectively in its scientific process. Transparency has been key in its empirical analysis. A requirement that publications include all the possible exposures and outcomes in one analysis from the start would have delayed the process considerably (possibly by several years) and may not have been feasible given the financing terms. However, the current intense debates put even more pressure on the scientific community to join forces to unravel the "proximal/distal" controversy. The IPD Consortium has shown an interesting example of the strong potential of scientific collaboration.
"year": 2014,
"sha1": "cdb638009d7349c180f8a84dbeeb708ba391fe27",
"oa_license": "CCBY",
"oa_url": "https://www.sjweh.fi/download.php?abstract_id=3406&file_nro=1",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "854d307ac7873c7119eb8bc9a1eb7b987865419b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Application of Online and Offline Blended Teaching Mode in the Course of "Book Design"
This paper takes the book design course as an example to analyze the application of an online and offline blended teaching mode. According to the characteristics of the book design curriculum, the researchers adjust the curriculum design, refine and expand the content of book design, and optimize the ratio of in-class and extracurricular hours and the distribution of content. Offline teaching is based on practical teaching, and online teaching is based on knowledge points, micro-class videos, and resource links. At the same time, a problem feedback mechanism and an interactive learning discussion link are set up, which retains the traditional offline face-to-face teaching mode while developing an online digital teaching mode, effectively maximizing the advantages of each. These adjustments to the teaching mode address students' insufficient understanding of knowledge points in offline classes. Through online resources, students can review repeatedly until they learn and understand, thereby integrating knowledge points, improving learning efficiency, strengthening independent learning ability, and realizing curriculum optimization.
I. INTRODUCTION
With the continuous development of technology and the expanding demand for talent, online education, as a way of learning, has emerged as the times require and plays an increasingly important role in education. Online education platforms serve both students and teachers without regional limitations: as long as teachers are suitably equipped, they can teach from anywhere. The mode of online education plus offline education can effectively maximize the advantages of each.
II. CLASS HOURS AND CONTENT DISTRIBUTION
The book design course comprises 84 class hours, including 56 class hours for theory and 28 class hours for practice.
Chapter 1 is the overview of book design, with 8 class hours of study. Through the study of this chapter, students can master the brief history of the art of book design, understand the origin, development, and evolution of books, know the basic concepts and basic knowledge of books, and gain an overall understanding of book design. The key and difficult point is the brief history of the development of Chinese book art. The teaching contents include a brief history of the development of Chinese book art and the preparations for book design.
Chapter 2 is the overall design of the book, and there are 18 class hours for the study. Through the study of this chapter, students can understand the concept transformation from book binding to book design, know the binding form, basic structure and the artistic law of format design of books, and master the basic functions, design characteristics and design law of the structures of each part of books. The focus is on the binding style and structure of books. The difficulty is the structure of books. The teaching contents include the role of the whole book design, the requirements of the whole book design, the binding style of the book, the structure of the book, the artistic law of the format design and the artistic law of the text design.
Chapter 3 is the visual language of books, with 18 class hours of study. Through the study of this chapter, students become familiar with the visual elements of books and understand the principles of color application in books, the rules of text arrangement, graphic design language, the characteristics of printing materials, and the design methods for the catalog, chapters, sections, and page numbers. The key points are font selection and chart design in books. The difficulty is the design of charts in books. The teaching contents include illustrations, words, colors, and materials.
Chapter 4 is the form design of books, and there are 10 class hours for the study. Through the study of this chapter, students can understand the forms of paper and the production and printing processes of books, master the effects of using different printing materials, printing processes, and binding methods, know the formal characteristics of modern books, and reasonably select printing processes to improve the quality of books. The focus, and also the difficulty, is the form of paper and printing. The teaching contents include the form of paper, the form of printing, and the form of modern books.
Chapter 5 is the innovation of book design, with 6 class hours. Through the study of this chapter, students can master the design principles of concept books and e-books, temporarily set aside the functional requirements of design, excavate new expressive forms of book design, and take a real concern as the starting point of book design, so that book design has unlimited possibilities. The key points and difficulties both concern the design of concept books. The teaching contents include the production and development trend of concept books and e-books.
Chapter 6 is the practice of book design, and there are 24 class hours for it. Through the study of this chapter, students can independently complete the data collection and research analysis in the early stage, review and select design schemes under the guidance of teachers, understand the "character" of different types of books, and master the design methods and spiritual connotation of different books. The key points are the determination of the book design style and the expression of the book design connotation. The difficulty is to express the connotation of book design. The teaching contents include case analysis of excellent books, practical operation of proposition book design and course summary.
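The hour allocation above can be summarized, and its internal consistency checked, with a minimal sketch; the chapter titles are abbreviated, and the structure is taken directly from the text:

```python
# Hours per chapter as stated in the course description above.
chapter_hours = {
    "1 Overview of book design": 8,
    "2 Overall design of the book": 18,
    "3 Visual language of books": 18,
    "4 Form design of books": 10,
    "5 Innovation of book design": 6,
    "6 Practice of book design": 24,
}
total = sum(chapter_hours.values())
# The chapter hours should add up to the stated 84 course hours
# (56 hours of theory plus 28 hours of practice).
assert total == 84
print(f"total: {total} h (56 h theory + 28 h practice)")
```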
III. KNOWLEDGE BEFORE CLASS
At present, there are three microlecture videos covering knowledge points of this course: the format design principles of book design, the overview of book design, and the basis of book design. According to the characteristics and teaching tasks of this course, the plan is to complete microlecture videos covering the remaining knowledge points during the project period, each taking about 6-10 minutes.
The microlecture plan is organized into five chapters. The first chapter consists of two sections: the overview and definition of book design, and the history of book design. The second chapter covers the basis of book design in two sections: the market positioning methods and steps of book design, and the format and design points of book design. The third chapter has a single section on the elements of book design. The fourth chapter covers the format design of books in two sections: the definition of format design, format paper, and grid and column settings; and the format design principles of book design. The fifth chapter covers the font design of book design in two sections: the development history of fonts in book design, and the font design principles of book design. For every section, the required teaching resources are the same: teaching materials, teaching plans, courseware, and microlecture videos.
IV. LEARNING ACTIVITIES
In this part, students design a task list for self-regulated learning, and teachers provide the learning resources and tasks (see Table I and Table II). The content of Table I, the self-regulated learning task list, can be summarized as follows.

Learning objectives: through the teaching, students can master the design methods and skills of book design and the design characteristics, text arrangement, graphic expression, printing technology, and other knowledge of different kinds of books; focus on the basic laws of book design; and use the knowledge and skills learned to carry out book binding design.

Suggestions on learning methods: independent exploration; group cooperation; comment; screening and induction; training students' practical ability of innovative design.

Advance notice of classroom learning form: Module 1, learning in groups and solving problems (students communicate in pairs to resolve doubts from self-regulated learning); Module 2, cooperative exploration (students are divided into groups according to the design tasks assigned by teachers to discuss and form a design plan); Module 3, comments (students present cooperative learning results in groups and teachers comment).

Confusion and suggestions: this item is filled in by students after self-regulated learning.

Table II lists the learning tasks to be completed by watching the microlecture videos: the definition of book design; the development history of book design; the market positioning principle of book design; the format and page size setting of book design; the elements of book design structure; the format design principle of book design; the font design principle and application characteristics of book design; and the color application skills of book design. It likewise ends with a confusion-and-suggestions item filled in by students after self-regulated learning.
A question-answering area is set up on the course website, where students can report the problems they encounter in the learning process by leaving messages; the project team answers them in a timely manner.
V. CLASSROOM LEARNING ACTIVITIES

A. The mode of interactive classroom activity with students as the main body
At the beginning of the course, new knowledge is introduced and the scope of knowledge is expanded to raise students' interest in learning. In class, the topic of "the most beautiful book in the world" is introduced, and the question of the evaluation standard for "the most beautiful book in the world" leads students into book design. The design intention is obvious: a good beginning is half done, and this opening can effectively stimulate students' interest in learning and enable them to enter the new exploration of knowledge with enthusiasm.
B. The mode of situational classroom learning activity
It is necessary to set a theme scenario. After the teacher explains the development of book design and market positioning, students carry out a business simulation in the given scenario: making the preliminary preparations for book design, reading the manuscript in order, understanding the target audience, going deep into the market, experiencing the survival of the fittest, consulting materials, drawing inspiration for the creation, and communicating fully, all contributing to the success of the design. In this way, students participate in the whole process of book design.
The design intention is as follows. Effective communication and exchange is an essential way for students majoring in advertising to express their design ideas. Therefore, teachers should create realistic interactive scenarios when designing activities, so that students can easily integrate into the activities, personally experience the market positioning of book design, and improve their innovation and practical ability.
C. The mode of cooperative classroom learning activity
At the end of class, teachers assign the teaching project of "designing a book of your own". The students are divided into groups to discuss in class, prepare the preliminary design, draw a sketch, and determine a preliminary design scheme, which exercises the students' ability to cooperate in groups and helps to improve their practical ability.
The design intention can be seen clearly. On the basis of changing educational ideas and actively promoting the reform and innovation of curriculum content, teachers have boldly experimented and explored, adopted project-based modular teaching, and continually promoted the innovation of teaching methods and means, forming a curriculum chain of "theory learning - case analysis - practice simulation - ability improvement".
D. The mode of classroom learning activity using students' PPT

Through the demonstration and explanation of the contents of their design schemes, students can not only improve their ability to design independently but also exercise their ability of expression. After the demonstration, the teachers point out the advantages and disadvantages of each scheme and give suggestions for revision. After repeated revision, students determine the final design scheme and move into the design stage.
The design intention is obvious. Based on the completion of the project and the PPT presented by the students, teachers publicly comment on each student's work using the large-screen projector, promoting an effective exchange about the state of learning. After the course, teachers summarize accordingly and rearrange the course content in preparation for subsequent courses.
VI. EXTRA-CURRICULAR WORK
This course takes six weeks and covers the contents of five chapters, with teachers explaining one chapter each week. At the same time, teachers assign extracurricular homework. First, a topic is set up: in the first two weeks, teachers arrange the scheme design of the book, including the preliminary market survey, target audience positioning, price positioning, and design style positioning, and students complete the preliminary scheme design according to the topic. In the third and fourth weeks, teachers arrange the PPT demonstration of the scheme: each student presents the design scheme, design planning, and sketch conception to the teacher and classmates in turn, and the teacher gives suggestions according to each student's actual situation. In the fifth and sixth weeks, students perfect the design scheme, enter the design stage, and complete the homework.
VII. AUXILIARY LEARNING RESOURCES
The auxiliary learning resources include one teaching outline, one teaching schedule, one teaching plan, one copy of the textbook Book Design (editors-in-chief: Xiao Wei and Zhang Li; publisher: Hefei University of Technology Press), PPT courseware for all classes, Focusky animation courseware, microlecture videos, an exhibition of excellent student works, and an exhibition of award-winning entries.
VIII. EVALUATION CRITERIA AND METHODS
The usual performance accounts for 40% of the total score, and the final examination accounts for 60%. The usual performance is mainly based on attendance, learning attitude, and the weekly classroom exercises and homework on what has been taught. The final grade tests practical operation, i.e., book making.
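The weighting scheme amounts to a simple linear combination; the following sketch illustrates it with hypothetical component scores (only the 40/60 split comes from the text):

```python
def final_course_score(usual_performance: float, final_exam: float) -> float:
    """Combine usual performance (40%) with the final practical score (60%)."""
    return 0.4 * usual_performance + 0.6 * final_exam

# Hypothetical example: 85 for usual performance, 90 for the final book-making task.
print(final_course_score(usual_performance=85, final_exam=90))  # -> 88.0
```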
IX. CONCLUSION
Online education is a product of the era of scientific and technological development. Pure online education refers to teaching conducted entirely online, so that people can learn anytime and anywhere without leaving home. It can be said that online education has had a great impact on traditional education, breaking the traditional mode, with the network becoming the main transmission tool. In such a climate, online education develops easily. Online education overcomes the main disadvantages of offline education: it is no longer limited by time, space, the age of the learner, or the educational environment. In addition, the cost of online education is significantly lower than that of offline education, without expensive rent, water, electricity, and other expenses, as can be seen from the increasing number of education websites. Online education breaks the barriers of space and time, facilitates learning for the public, and enables more busy people to enjoy the power of knowledge.
The teaching formats of this special period have brought reform, requiring changes in the way lessons are prepared, in the teaching mode, and in the state of mind. The torrent of change is unstoppable, and teachers must be willing to change and adapt to the needs of educational development in the new era. At the same time, teachers can collect data, extract knowledge points, design cases and refine materials, write new teaching plans, make new courseware, record microlecture videos, and use the teaching platform efficiently. Online teaching has a long way to go, and it is an inevitable teaching method of the future. Adopting the online and offline blended teaching mode is both a continuation of the traditional teaching mode and an innovation of Internet teaching for the future. It is necessary to start from the actual situation of students, continue to expand effective teaching modes, and change single "injection"-style teaching into "online and offline interactive" teaching with two-way communication between teachers and students. Considering the characteristics of art students, it is better to advocate heuristic, participatory, and discussion-based teaching methods and to fully mobilize students' enthusiasm and initiative, so that students achieve self-education and improvement in the process of active participation.
"year": 2021,
"sha1": "2bb88fc80e06519aee36852ea825413abcd927bc",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125950784.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "76e8614f84a01e442291b136ffb37dee44daf47f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Role of G Protein-Coupled Receptors (GPCRs) and Calcium Signaling in Schizophrenia. Focus on GPCRs Activated by Neurotransmitters and Chemokines
Schizophrenia is a common debilitating disease characterized by continuous or relapsing episodes of psychosis. Although the molecular mechanisms underlying this psychiatric illness remain incompletely understood, a growing body of clinical, pharmacological, and genetic evidence suggests that G protein-coupled receptors (GPCRs) play a critical role in disease development, progression, and treatment. This pivotal role is further highlighted by the fact that GPCRs are the most common targets for antipsychotic drugs. GPCR activation evokes slow synaptic transmission through several downstream pathways, many of them engaging intracellular Ca2+ mobilization. Dysfunctions of the neurotransmitter systems involving the action of GPCRs in the frontal and limbic-related regions are likely to underlie the complex picture that includes the whole spectrum of positive and negative schizophrenia symptoms. Therefore, progress in our understanding of GPCR function in the control of brain cognitive functions is expected to open new avenues for selective drug development. In this paper, we review and synthesize the recent data regarding the contribution of neurotransmitter-GPCR signaling to schizophrenia symptomatology.
Introduction
Schizophrenia is one of the most severe psychiatric disorders, with onset typically observed in late adolescence or early adulthood. While the lifetime prevalence is approximately 1%, regardless of sex, race, or country, first-degree relatives are ten times more susceptible to developing schizophrenia symptoms than individuals in the general population [1]. The disease tends to present three main clusters of symptoms: cognitive, positive, and negative. One cluster usually dominates over the others, although the pattern may change over time. The cognitive deficits are often manifested earliest, long before the onset of the disease in the prodromal stage, and may be visible in childhood or early adolescence. They can be classified as nonsocial (deficits in verbal fluency, memory, problem solving, speed of processing, and visual and auditory perception) or social, the latter associated with facial emotion perception and understanding the self and others [2]. The spectrum of positive symptoms includes hallucinations, delusions, suspiciousness, abnormal excitement, and hostility. Among negative symptoms, the most frequently observed are paucity of speech, blunting of affect, loss of motivation, inability to focus on relevant issues, social isolation, apathy, and anhedonia [3].

Figure 1. The classical G protein signaling pathways. GDP - guanosine diphosphate, GTP - guanosine triphosphate, AC - adenylyl cyclase, cAMP - cyclic adenosine monophosphate, PKA - protein kinase A, PLC - phospholipase C, PIP2 - phosphatidylinositol 4,5-bisphosphate, IP3 - inositol-1,4,5-trisphosphate, PKC - protein kinase C, DAG - diacylglycerol, ER - endoplasmic reticulum.
Upon ligand binding, the receptor undergoes conformational changes and facilitates the exchange of GDP for GTP in the Gα subunit. The activated Gα-GTP subunit dissociates from the heterodimeric Gβγ complex and triggers the activation of key effectors responsible for the generation of second messengers. Depending on the nature of the Gα subunit, activation of GPCRs may result in changes in intracellular cAMP, Ca2+, diacylglycerol (DAG), or inositol 1,4,5-trisphosphate (IP3) levels that regulate distinct downstream signaling cascades. DAG may bind to and activate protein kinase C (PKC). Gαs and Gαi exert their effect on protein kinase A (PKA) through the modulation of adenylyl cyclase (AC) activity, thus regulating the rate of cAMP production. The Gβγ dimer has regulatory and signaling functions, serving as a modulator for a variety of ion channels and protein kinases, for instance, protein kinase D and phosphatidylinositol-3-kinase [15,18,19]. IP3 diffuses from the plasma membrane compartment to the ER, where it binds IP3 receptors, ultimately triggering Ca2+ release from the intracellular stores.

Figure 2. The dopamine system. Binding of dopamine to D1 or D5 receptors activates the PLC signaling pathway that triggers Ca2+ release from the cisterns of the endoplasmic reticulum and has a stimulatory effect on adenylyl cyclase that increases cAMP. When acting through D2, D3, or D4 receptors, dopamine exerts an inhibitory effect on adenylyl cyclase, leading to a decrease in intracellular cAMP.
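The canonical couplings described above, together with the dopamine receptor assignments in Figure 2, can be summarized schematically as a simple lookup; this is a mnemonic encoding of the text, not a biochemical model:

```python
# Downstream consequences of the main G-alpha families, as described above.
G_ALPHA_EFFECTS = {
    "Gs": "stimulates adenylyl cyclase -> cAMP up -> PKA activation",
    "Gi": "inhibits adenylyl cyclase -> cAMP down",
    "Gq": "activates PLC -> IP3 + DAG -> ER Ca2+ release, PKC activation",
}

# Dopamine receptor coupling per Figure 2: D1-like receptors stimulate AC
# (and also engage PLC signaling); D2-like receptors inhibit AC.
DOPAMINE_RECEPTOR_COUPLING = {
    "D1": "Gs", "D5": "Gs",
    "D2": "Gi", "D3": "Gi", "D4": "Gi",
}

for receptor, g in DOPAMINE_RECEPTOR_COUPLING.items():
    print(f"{receptor}: {g} -> {G_ALPHA_EFFECTS[g]}")
```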
Abnormal activity of the DA system has been widely implicated in schizophrenia. In schizophrenic patients, the expression of D1 receptors was reduced in the prefrontal cortex, as determined using PET imaging, and this reduction has been linked to the development of working memory dysfunction [45,46]. On the other hand, mRNA levels of D1 receptors were elevated in the temporal and parietal cortex of schizophrenic patients, which may be correlated with auditory hallucinations [47]. Over 40 years of research on the D1 receptor have thoroughly validated its utility as a promising drug target. Recent advancement of new ligands, such as drug-like non-catechol D1R agonists and positive allosteric modulators, has demonstrated that selective modulation of D1 receptor activity may be effective in the treatment of neuropsychiatric disorders, including schizophrenia [48].
However, some of the most convincing evidence of disturbances in dopaminergic transmission in schizophrenia comes from the clinical efficacy of first-generation and atypical antipsychotics, all of which are antagonists or partial agonists of D2 receptors [49,50]. The use of these medications is frequently a trade-off between alleviating psychotic symptoms and the risk of sometimes severe adverse effects. The atypical antipsychotics such as clozapine and olanzapine tend to cause metabolic syndrome, whereas first-generation antipsychotics, especially those bound to dopaminergic neuroreceptors, are associated with movement disorders [51]. This indicates the need to search for novel antidopaminergic agents. Brexpiprazole, for instance, exhibits a low risk of D2 receptor sensitization, is well tolerated, and has few side effects in patients with schizophrenia. Moreover, it may have a lower risk of producing rebound symptoms associated with D2 receptor and 5-HT2A receptor sensitization when switching from other antipsychotics such as risperidone [52,53]. The most recently approved first-in-class antipsychotic, lumateperone, combines the synergy of the drug's affinity for 5-HT2A receptors at low doses, dose-dependent presynaptic D2 receptor agonism, postsynaptic D2 antagonism, and selectivity for mesolimbic and mesocortical areas to address a wide range of symptoms associated with schizophrenia [54].
The involvement of D2 receptors in the pathogenesis of schizophrenia is further supported by data from transgenic mouse models. It has been shown that overexpression of this receptor in the striatum leads to deficits in inhibitory neurotransmission and dopamine sensitivity in the prefrontal cortex of mice [55]. Administration of a genetic construct encoding enzymes related to the synthesis of dopamine (tyrosine hydroxylase and guanosine triphosphate cyclohydrolase) into the substantia nigra pars compacta of adolescent animals resulted in enhanced dopamine production and the appearance of schizophrenia-like behavior [56]. Similarly, administration of dopamine-like drugs such as amphetamine or methylphenidate evoked a hyperlocomotion state in animals and exacerbated psychotic symptoms in schizophrenic patients [57]. It has also been suggested that dopamine D3 receptors may be involved in the regulation of cognitive functions and motor coordination [58]. In line with this, selective antagonists of these receptors, but not of D2 receptors, enhanced social novelty discrimination and novel object recognition in rats while having overall pro-cognitive effects [59].
Several studies have investigated a possible link between dopaminergic receptor polymorphisms and schizophrenia. A positive correlation was demonstrated between the S311C polymorphism of the D2 receptor and the response to atypical antipsychotic agents such as risperidone [60]. Other reports have investigated an association between the D3 receptor S9G polymorphism and the occurrence of schizophrenia; however, the results were not consistent [61,62].
The disturbance of the dopamine system may also be associated with several mechanisms that involve signaling by other neurotransmitters. For example, Kapur and Seeman demonstrated that the pharmacological antagonist of the N-methyl-D-aspartate (NMDA) receptor, ketamine, has a strong affinity for D2 receptors [63]. Several studies showed that a single dose of ketamine (25 mg/kg, i.p.) increased dopamine release in the prefrontal cortex of rats and that repeated administration increased basal dopamine concentration [64,65]. Similarly, MK-801 increased extracellular levels of dopamine and dopamine turnover in the prefrontal cortex and striatum, whereas phencyclidine (PCP) did so in the nucleus accumbens, amygdala, and prefrontal cortex [66]. Although these observations indicate NMDA receptor hypofunction-induced changes in the dopaminergic system, they do not explain whether these arise from direct effects on dopamine receptors or indirect action of the drugs via glutamatergic signaling. It has been demonstrated that NMDA dysregulation may provoke psychotic effects at least partially by impacting dopamine receptors [67]. There are also indications that dysfunctional dopaminergic signaling in schizophrenia may lie in altered expression or function of dopamine receptor-interacting proteins (DRIPs) [68]. DRIPs play a crucial role in the regulation of the intracellular activity of individual dopaminergic receptors in the brain, e.g., their biosynthesis, membrane localization, and signaling [68]. It was reported that one of the DRIPs, neuronal calcium sensor 1 (NCS-1), was upregulated in the DLPFC of the schizophrenic brain [69]. The effect of the interaction between the D2 receptor and NCS-1 is control of receptor desensitization and of its half-life in the plasma membrane after ligand binding [69]. Such a specific relationship between the D2 receptor and NCS-1 indicates the crucial role of DRIPs in regulating dopamine receptor density and provides a link between abnormalities in the brain dopamine system and defects in Ca2+ homeostasis in schizophrenia.
Nonetheless, the studies done in preclinical models and in humans collectively suggest that the dysregulation of neurotransmitter systems in the pathophysiology of this disorder is significantly more complex and not limited to abnormalities in the expression and functioning of dopamine receptors alone.
Adrenergic Receptors
It is commonly known that norepinephrine (NE), also called noradrenaline (NA), as a widespread neuromodulator of all cell types in the CNS, orchestrates brain functions, including arousal, stress responses, anxiety, executive control, and memory consolidation, by transmitting its biological signals via α- and β-adrenergic receptors (ARs) [70]. ARs are classified into three groups: α1 (α1A, α1B, α1D), α2 (α2A, α2B, α2C), and β (β1, β2, β3) receptors, all of which are members of the G protein-coupled receptor family but exhibit distinct physiological and pharmacological profiles (Figure 3). The α1 receptors, through the Gq signaling pathway, increase PLC activity and generate IP3 and DAG to amplify intracellular calcium mobilization [71]. All three β-AR subtypes are prototypic Gs-coupled receptors, and their stimulation affects intracellular cAMP accumulation and PKA activation [72,73]. In addition, β2 and β3 receptors may couple to Gi protein and influence the ERK/MAPK pathway [74], whereas stimulation of Gi/o-coupled α2-ARs suppresses intracellular cAMP signaling and attenuates calcium release, thus inhibiting signal transduction [75]. The ARs are mainly found postsynaptically, but α2- and β2 receptors can also exert autoreceptor functions at presynaptic terminals of noradrenergic neurons [76,77]. The signal transduction of the NE system in neurons has been extensively reviewed elsewhere [78].
In general terms, the positive symptoms of schizophrenia are exacerbated by selective and indirect noradrenaline receptor agonists such as ephedrine, clonidine, and desipramine, while antagonists such as yohimbine, propranolol, and oxypertine may ameliorate these symptoms [79]. Although no specific mechanism has yet been confirmed, a growing body of evidence indicates that NE signaling through α-ARs can contribute to the cognitive deficits observed in schizophrenia [80].
It is believed that moderate levels of NE engage high-affinity postsynaptic α2-ARs, whereas increased concentrations of this catecholamine, probably released from the locus coeruleus (LC) during stress, impair PFC cognitive function via α1-adrenoceptors [81]. Birnbaum and colleagues observed that administration of a potent activator of PKC, or indirect stimulation of PKC with an α1-AR agonist, can result in a loss of prefrontal cortical regulation involving disrupted cognitive performance and spatial working memory in rats and monkeys [82]. From a pharmacological perspective, specific α2-AR agonists, administered alone or in combination with antipsychotics, may enhance neurocognitive functions but also reduce positive and even negative schizophrenia symptoms, giving them potentially high clinical relevance for the treatment of this disorder. For instance, administration of clonidine to patients with schizophrenia improved stimulus filtering by normalizing both their sensory gating (P50) and sensorimotor gating (PPI) deficits to levels not significantly different from those of healthy controls [83,84]. Interestingly, the NE system can modulate PPI independently of 5-HT2A neurotransmission and even compensate for deficiency of the serotonergic system, which seems to be evolutionarily advantageous for maintaining enhanced protection against sensorimotor gating impairments [85]. Likewise, the manipulation of noradrenergic activity by guanfacine, another α2 receptor agonist, ameliorated cognitive impairments of schizophrenic patients when used as an adjunctive treatment with neuroleptics [86].
Both NE and DA are important components of the arousal systems, and their complementary action is needed for proper PFC function [87]. High levels of D1 stimulation have been demonstrated to increase the production of cAMP, thereby opening hyperpolarization-activated cyclic nucleotide-gated (HCN) cation channels near the synapse and detuning spatial information processing [88]. In schizophrenia, disturbed stimulation of α2-ARs located on the apical dendrites of cortical pyramidal cells may affect the dynamics of the HCN channels in these cells, leading to increased hyperpolarization-activated currents and reduced apical amplification [89,90]. As a result, Gs-mediated excessive cAMP upregulation, which has also been observed in hippocampal CA1 pyramidal cells via noradrenergic suppression, may reduce neuronal firing in the PFC, impairing cognitive operations [91]. In an animal study, α2A-adrenoceptor inhibition of cAMP signaling via guanfacine blocked the opening of HCN channels, strengthening the connectivity of the PFC networks related to working memory [92,93]. Numerous reports have highlighted the potential involvement of the β-adrenergic receptor in memory consolidation, in particular in modulating hippocampal long-term potentiation (LTP) [94] and the behavioral memory of mammals through cAMP-PKA signaling [94,95]. However, there is currently insufficient evidence for the effectiveness of beta blockers as an adjuvant therapy for the treatment of schizophrenia, as concluded in a Cochrane review [96].
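The cAMP-HCN logic described above can be illustrated with a Hill-type dose-response sketch; the functional form and parameters are arbitrary assumptions chosen only to convey the qualitative relationship (more cAMP means more open HCN channels, while α2A-mediated suppression of cAMP shifts channels back toward the closed state):

```python
def hcn_open_fraction(camp: float, k_half: float = 1.0, n: float = 2.0) -> float:
    """Illustrative fraction of HCN channels opened as a saturating
    (Hill-type) function of cAMP concentration, in arbitrary units."""
    return camp**n / (camp**n + k_half**n)

# Lowering cAMP (as guanfacine does via alpha-2A signaling) moves the
# open fraction down this curve; raising it (excess D1/Gs drive) moves it up.
for camp in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"cAMP = {camp:4.2f} a.u. -> open fraction = {hcn_open_fraction(camp):.2f}")
```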
Treatment of patients with adjunctive antidepressants that act on NE activity, for instance duloxetine or mirtazapine, enhanced the beneficial effects of atypical antipsychotics (clozapine, risperidone) and relieved negative symptoms of schizophrenia, supporting the role of this neurotransmitter in disease development [97-99]. However, recent work has uncovered that haloperidol, risperidone, olanzapine, and clozapine may potently regulate peripheral NE, which may be relevant to drug metabolism-related side effects, e.g., hyperglycemia [100].
Finally, single nucleotide polymorphisms (SNPs) may also be implicated in the etiology of schizophrenia: two SNPs in the promoter region of the α1A-adrenergic receptor (ADRA1A) gene [101], or the interactive effect of an α2A-adrenergic receptor (ADRA2A) gene polymorphism and a methylenetetrahydrofolate reductase (MTHFR) gene polymorphism [102], which may additionally aggravate the low-dopamine state [103].
Cholinergic Receptors
Muscarinic acetylcholine receptors (mAChRs) are metabotropic receptors that become activated upon binding of the neurotransmitter acetylcholine (ACh). Upon activation of the neuron, ACh is released from the synaptic vesicles into the synaptic cleft, where it binds to presynaptic and postsynaptic receptors or is inactivated by the enzyme cholinesterase [104]. There are five subtypes of muscarinic receptors, designated M1-M5, which can be further subdivided into two groups depending on their functional properties [105]. Stimulation of M1, M3, and M5 receptors, which are expressed postsynaptically across many brain regions and coupled to Gq/11-type G proteins, initiates the cascade of PLC-dependent reactions related to the formation of DAG and IP3 (Figure 4). The M1 receptor is the predominant subtype, detected mainly in cortical and hippocampal neurons, whereas the neuronal M3 and M5 subtypes are present at low levels and their roles are relatively little known. By contrast, M2 and M4 muscarinic receptors interact with Gi- and Go-type G proteins and negatively influence adenylyl cyclase, thus inhibiting the formation of cAMP [106,107]. In the cerebral cortex and hippocampus, M2 receptors have been reported to localize at both cholinergic and non-cholinergic presynaptic terminals [108,109]. The M4 receptors are found at the presynaptic terminals of cholinergic interneurons within the striatum [110], and they also seem to be present in the medium spiny neurons of the direct pathway [111]. In contrast to the nicotinic cholinergic receptors, mAChRs act more slowly but exert a potentially more sustained synaptic response by acting through second messengers.
Stimulation of M1, M3, and M5 receptors, that are expressed postsynaptically across many brain regions and coupled to Gi/o G-type proteins, initiates the cascade of PLC-dependent reactions related to formation of DAG and IP3 ( Figure 4). The M1 receptor is a predominant subtype detected mainly in cortical and hippocampal neurons whereas neuronal M3 and M5 subtypes are present at low levels and their role is relatively little known. By contrast, M2 and M4 muscarinic receptors interact with Gi and Go-type G proteins and negatively influence adenylyl cyclase, thus inhibiting formation of cAMP [106,107]. In the cerebral cortex and hippocampus, M2 receptors have been reported to localize at both cholinergic and non-cholinergic presynaptic terminals [108,109]. The M4 receptors are found at the presynaptic terminals of cholinergic interneurons within the striatum [110] and they also seem to be present in the medium spiny neurons of the direct pathway [111]. In contrast to the nicotinic cholinergic receptors, mAChRs act slower but exert potentially more sustained synaptic response acting through second messengers. In schizophrenia, altered cholinergic neurotransmission is intimately linked to the defective cognitive functions associated primarily with cortical and hippocampal regions. Post-mortem studies consistently reported transcriptional and proteomic alterations in M 1 and M 4 receptors in the hippocampus [112,113] prefrontal and frontal cortices [112,[114][115][116], and also cingulate cortex [117,118] of schizophrenic patients. Conversely, potentiation of the central muscarinic system by M 1 mAChR's positive allosteric modulator (PAM), completely restored defective long-term depression as well as impairments in the cognitive function and social interaction in PCP-treated mouse model of schizophrenia [119]. Interestingly, no significant differences in the density of M 2 and M 3 receptors between cortical regions of schizophrenic and control subjects have been observed [120]. It has been recently demonstrated that acetylcholinesterase inhibitors (AChEIs) or similar agents increasing ACh level may be effective in the treatment of visual hallucinations in individual clinical cases [121,122]. On the other side, the results of many clinical studies [123][124][125] did not show any improvement of schizophrenia symptoms by AChEIs or similar agents increasing ACh level. These results could suggest that contribution of the central muscarinic receptor system to schizophrenia deficits may not arise from disturbances in ACh level but rather involves far more complex changes underlying neuropathology of this disorder [126]. In non-psychotic individuals, administration of anti-muscarinic agents such as atropine or scopolamine evoked dose-dependent impairments in cognitive and psychomotor function including attention, learning process, working, and declarative memory [127][128][129].
Novel drugs targeting the allosteric binding site in mAChRs have helped to extend our knowledge about the role of these receptors in Ca 2+ -dependent signal transduction in the brain, and they have turned out to be promising in the treatment of psychotic symptoms commonly observed in patients with schizophrenia. One of the modulators with procognitive action is AC-260584, a potent agonist at the M 1 receptor, which may mediate calcium responses and ERK1/2 activation in specific brain areas involved in learning and memory formation, such as the hippocampus, prefrontal and perirhinal cortex [130]. A previous report suggested that ACh may control LTP induction in CA1 hippocampal pyramidal neurons by stimulating the M 1 receptor and leading to Ca 2+ release from IP 3 -sensitive stores [131]. Moreover, the regulation of synaptic plasticity and cognitive function by the muscarinic system can result from tuning the activity of non-glutamatergic postsynaptic ion channels, including voltage- or Ca 2+ -gated channels [132,133]. Consistent with these findings, administration of 77-LH-28-1, another allosteric agonist of the M 1 receptor, led to M 1 receptor-dependent inhibition of calcium-activated potassium (SK) channels, promoting the induction of NMDAR-dependent LTP [134]. The M 1 receptors, via a signaling cascade linking cAMP-PKA and PI3K-Akt-mTOR, may also be critical for the activation of postsynaptic AMPA receptors needed for LTP [135,136].
The synaptic AMPA receptors and mTOR signaling pathways have been demonstrated to be significantly disrupted in schizophrenia [137,138]. The function of the muscarinic system in the modulation of altered synaptic transmission may precipitate or exacerbate certain symptoms of psychiatric disorders. Interestingly, Jeon et al. revealed that muscarinic blockade of D 1 receptor-induced cAMP production was abolished in striatal neurons of the D1-M4-KO mouse model, underlining the physiological relevance of M 4 receptors in dopamine-dependent behaviors and representing another potential therapeutic target in the treatment of schizophrenia [139].
Serotonergic Receptors
Serotonin (5-hydroxytryptamine, 5-HT) is one of the most extensively studied neurotransmitters, acting through distinct G protein coupled receptors (GPCRs) and ligand-gated ion channels [140]. The last two decades of research have described at least fifteen 5-HT receptor subtypes, which are grouped into seven families (5-HT 1 -5-HT 7 ) [141] based on their specific biochemical signaling pathways, as presented in Table 1 [142]. All subtypes have a distinct expression pattern across the central nervous system (Table 1). In the human brain, almost all serotonin receptor subtypes are found, except for 5-HT 5b , and they play an important role in the modulation of cognitive and behavioral functions [140].
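The family-level signaling assignments that Table 1 summarizes follow a canonical pattern that can be captured in a small lookup. The Python sketch below assumes the textbook family-to-G-protein assignments; the `coupling` helper and its naming are ours, not part of the cited table.

```python
import re

# Canonical family-level transduction of the 5-HT receptor families.
# 5-HT3 is included for completeness although it is a ligand-gated ion
# channel rather than a GPCR.
SEROTONIN_FAMILIES = {
    "5-HT1": "Gi/o: inhibits adenylyl cyclase, lowers cAMP",
    "5-HT2": "Gq/11: activates PLC, raises IP3/DAG and Ca2+",
    "5-HT3": "ligand-gated cation channel (no G protein)",
    "5-HT4": "Gs: stimulates adenylyl cyclase, raises cAMP",
    "5-HT5": "Gi/o: inhibits adenylyl cyclase",
    "5-HT6": "Gs: stimulates adenylyl cyclase",
    "5-HT7": "Gs: stimulates adenylyl cyclase",
}

def coupling(receptor: str) -> str:
    """Map a subtype such as '5-HT2A' to its family-level transduction."""
    match = re.match(r"5-HT\d", receptor)
    return SEROTONIN_FAMILIES.get(match.group(0), "unknown") if match else "unknown"

print(coupling("5-HT2A"))  # Gq/11: activates PLC, ...
print(coupling("5-HT7"))   # Gs: stimulates adenylyl cyclase
```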
The considerable evidence for alterations in serotonin level in schizophrenia comes from pharmacological data. D-lysergic acid diethylamide (LSD), which is structurally similar to serotonin, induces psychotomimetic effects in non-psychiatric controls [143]. Further investigations demonstrated that LSD causes hallucinations through its agonistic effect on the 5-HT 2A receptor subtype [144]. In support of this, the group of González-Maeso demonstrated that 5-HT 2A knock-out mice were insensitive to the neuropsychological effects of serotonergic psychedelics [145,146].
The 5-HT 2A receptors are present in high density in brain regions that are implicated in the pathophysiology of schizophrenia and play a key role in cognition, perception, and emotion regulation [147]. A large number of studies point to alterations in frontal cortical 5-HT 2A receptor binding in schizophrenic patients and to a reduction in receptor density in schizophrenic brains compared to healthy individuals [148]. Furthermore, a new generation of antipsychotic drugs acts through a serotonin receptor-based mechanism [149]. These drugs exhibit a low prevalence of side effects and effectiveness against both positive and negative symptoms. Despite intensive studies, the molecular and neurochemical bases of atypical drug action have long been a matter of debate. It has been postulated that a high 5-HT 2A vs. dopamine D2 receptor occupancy is characteristic for atypical drugs, and the majority of them, including clozapine, olanzapine, risperidone, or ziprasidone, are characterized by high affinity for 5-HT 2A receptors [150]. However, not 5-HT 2A receptor antagonism per se but a combined blockade of D2 and 5-HT 2A receptors is believed to confer the efficacy of second-generation antipsychotics [151,152]. Indeed, the atypical antipsychotics are frequently characterized by their combined antagonism of 5-HT 2A and D2 receptors [153]. Studies have shown that this treatment strategy can efficiently reduce the negative and cognitive symptoms as well as minimize the side effects [154]. Additionally, equilibrium between 5-HT 2A and D2 receptor occupancy is crucial for minimizing extrapyramidal symptoms and improving efficacy in treatment-resistant schizophrenia [155,156]. These effects have been assessed by several studies showing the beneficial effects of antagonism of 5-HT 2A and D2 receptors, notably using single or saturating doses of haloperidol [152,157,158], and recently in rats chronically treated with haloperidol alone or in combination with MDL-100,907, a selective antagonist of the 5-HT 2A receptor [159]. Several findings have also pointed to the biological significance of the serotonin receptor 2A gene in schizophrenia, but the results are inconclusive. For example, Sern-Yih Cheah and coworkers showed three potential risk factors for schizophrenia: down-regulated 5HT 2A mRNA levels in the PFC, hypermethylation of 5HT 2A promoter CpG sites (cg5, cg7 and cg10), and genetic correlation with 5HT 2A genotypes for rs6314 and rs6313 [147]. On the other hand, a postmortem study on untreated schizophrenic patients demonstrated up-regulation of 5HT 2A receptor density in the PFC [160]. In addition to genetic variations in 5HT 2A , environmental factors can also be associated with 5HT 2A gene expression. There are multiple lines of evidence demonstrating that 5-HT 2A receptors and metabotropic glutamate type 2 (mGlu2) receptors interact with each other and form functional complexes in the brain cortex [160-162]. It has been demonstrated that the density of the 5-HT 2A /mGluR2 complex in the cortex of schizophrenic individuals is dysregulated [154]. The functional role of these complexes has also been studied in animals. For instance, stimulation of cells expressing functional 5-HT 2A /mGluR2 heterocomplexes with an mGluR2 agonist activated Gq/11 proteins via the 5-HT 2A receptors, and this activation was abolished in 5-HT 2A knockout mice [161].
The mGluR2 knockout mice were resistant to the behavioral effects of hallucinogenic drugs [163], which suggests that the 5-HT 2A /mGluR2 complex may be obligatory for neuropsychological responses to hallucinogens. Postmortem studies demonstrated upregulation of the 5-HT 2A receptor and downregulation of the mGluR2 receptor [160], a pattern that may predispose to psychosis.
Moreover, postmortem and neuroimaging studies also support a role of the serotonergic system in the pathophysiology of schizophrenia [164]. Yasuno and coworkers showed decreased 5-HT 1A receptor binding in the amygdala, which may underlie the affective components of schizophrenia symptoms [165]. Moreover, it has been demonstrated that atypical antipsychotic drugs enhance dopamine release in the prefrontal cortex through postsynaptic 5-HT 1A activity [166]. This observation may be essential for choosing an optimal treatment strategy, given that negative symptoms and cognitive deficits in schizophrenia have been linked to decreased function of dopaminoceptive neurons.
The 5-HT 2C , 5-HT 6 , and 5-HT 7 receptors are also considered pharmacological targets in the treatment of psychosis and cognitive deficits in schizophrenia [167]. For instance, the interaction of clozapine with 5-HT 6 receptors improves cholinergic signaling and may be helpful in the treatment of neurocognitive defects [168]. The anatomical distribution of the 5-HT 7 receptor subtype in the human brain, together with the reduction of mRNA levels of this receptor in the prefrontal cortex of schizophrenic individuals as well as the genetic correlation between 5-HT 7 receptors and schizophrenia, emphasizes their role in the development of this disorder [169]. A growing body of evidence indicates that schizophrenia has a strong neurodevelopmental component [170,171]. Therefore, it is highly plausible that the disease can be influenced by 5-HT 6 and 5-HT 7 receptors or other GPCRs controlling key neurodevelopmental processes.
Furthermore, the results of multiple studies demonstrated an association between serotonin receptor polymorphisms and disease susceptibility for schizophrenia. The T102C polymorphism of the 5-HT 2A receptor and the C759T polymorphism of the 5-HT 2C receptor have been positively associated with positive and negative symptom response [172,173]. All these findings highlight a crucial role of serotonergic neurotransmission in the pathophysiology of schizophrenia. However, further studies are needed to improve the efficacy of antipsychotic drugs that modulate the activity of serotonin receptors.
Glutamate Metabotropic Receptors
Metabotropic glutamate receptors are encoded by the GRM1 to GRM8 genes and have a modulatory function in the release of neurotransmitters, the regulation of neuroplasticity, and synaptic excitability [174]. Based on receptor structure, ligand selectivity, and the physiological effect caused by activation of the receptor, mGluRs are classified into three groups: Group I, Group II, and Group III ( Figure 5). Activation of Group I (mGluR1 and mGluR5) receptors causes a phospholipase C-mediated effect, while Group II (mGluR2 and mGluR3) and Group III (mGluR4, mGluR6, mGluR7, and mGluR8) receptors are associated with inhibition of cAMP signaling through G i /G o proteins [175]. All mGluRs are present in neurons and glial cells; the only exception is mGluR6, which is primarily located in the retina [176].
The mGluR1 and mGluR5 receptors belonging to Group I are located mainly at the postsynaptic site and act through phospholipase C-dependent Ca 2+ mobilization and stimulation of adenylyl cyclase, albeit the contribution of other signaling pathways has been demonstrated as well [177,178]. In general terms, activation of these receptors leads to neuronal depolarization. However, mGluR1 and mGluR5 can also modulate the pre- and postsynaptic current of the NMDA receptor in a Ca 2+ -dependent manner. An increase in Ca 2+ level causes activation of mGluR1 and mGluR5, which results in decreased activity of the NMDA receptor and protection from detrimental consequences of Ca 2+ overload [179]. So far, 12 rare mutations in the GRM1 gene have been discovered and described as being correlated with disease etiology [180]. Moreover, postmortem studies demonstrated increased expression of mGluR1 in the prefrontal cortex of patients with schizophrenia [181]. A growing body of evidence indicates that both mGluR1 and mGluR5 should be considered new molecular targets for schizophrenia treatment. Preclinical studies using PCP-, amphetamine (AMPH)-, or MK-801-induced animal models indicated that mGluR1 or mGluR5 positive or negative allosteric modulators (PAMs or NAMs) can effectively reduce hyperlocomotion and ameliorate deficits in prepulse inhibition and social interactions [176,182,183]. For instance, the mGluR5 agonist VU0409551 produced rapid antipsychotic-like and cognition-enhancing activity in rodent models of schizophrenia and turned out to be effective in reversing the deficits in serine racemase knockout mice, a model that mimics many behavioral and neurochemical abnormalities observed in this disease [184].
Figure 5. The metabotropic glutamate receptors. The Group I mGluRs couple to G q , which stimulates PLC activity and the production of inositol 1,4,5-trisphosphate (IP 3 ) and diacylglycerol (DAG). The IP 3 diffuses to the endoplasmic reticulum and activates the IP 3 receptors to release Ca 2+ into the cytosol. Group I can also couple to adenylyl cyclase to stimulate cAMP production. By contrast, Groups II and III couple to G i/o proteins and inhibit adenylyl cyclase.
The receptors from Group II are expressed only in a few brain regions. mGlu2 is found in the cerebellar and cerebral cortex, hippocampus, and olfactory bulbs, and is located at presynaptic, postsynaptic, or glial sites, whereas mGlu3 is predominantly expressed in the dentate gyrus, nucleus accumbens, lateral septal nucleus, cerebral cortex, cerebellar cortex, striatum, substantia nigra pars reticulata, and amygdaloid nuclei, and is located only in the preterminal region of neurons away from synaptic sites [176,185]. Group II receptors act by inhibiting adenylyl cyclase and voltage-dependent Ca 2+ channels while activating voltage-dependent K + channels [186]. Research on animal models of schizophrenia showed that pharmacological activation of mGluR2/3 decreased behavioral and cellular deficits of NMDA receptor hypofunction and improved motor activity [187]. Numerous Group II mGluR agonists have been tested for therapeutic efficacy in schizophrenia. In preclinical research, LY354740 improved working memory and stabilized glutamatergic signaling in the PCP-induced model of NMDA receptor hypofunction [188]. In the same model, LY379268 decreased the deficits in prepulse inhibition and reduced the expression of falling, turning, and back pedaling in rats in a dose-dependent manner [189]. Studies with healthy volunteers showed that LY354740 produced significant dose-dependent improvement in working memory during ketamine challenge, suggesting that mGluR2/3 may play a role in memory impairments related to NMDA receptor hypofunction [190]. A clinical trial with LY2140023, an oral prodrug of LY404039, demonstrated improvement in both positive and negative symptoms of schizophrenia compared to placebo. LY2140023 was safe and well tolerated, and patients did not experience extrapyramidal symptoms or weight gain beyond placebo levels [191]. As reviewed by Moreno and colleagues, mGluR2, but not mGluR3, is the receptor responsible for the antipsychotic-like effects of mGluR2/3 agonists, at least in preclinical models. This is supported by concurrent studies with LY404039 and LY379268 showing that the effects of mGluR2/3 agonists are abolished in mGluR2, but not in mGluR3, knockout mice [192]. Interestingly, mGluR2 PAMs have effects comparable with mGluR2/3 orthosteric agonists, as was shown for LY379268 and biphenyl-indanone A (BINA) in PCP- and AMPH-induced animal models [193,194].
The drugs targeting mGluR2/3 have also been tested in clinical trials. In the first randomized phase II run, LY-2140023 initially improved the positive and negative, but not cognitive, symptoms of schizophrenia when compared to placebo, but no differences were seen between the tested group and the olanzapine-positive control group. The second trial showed no significant differences between LY-2140023 and olanzapine, risperidone, or aripiprazole groups over 6-8 weeks of treatment, and further clinical investigations were ceased by the Eli Lilly company [191,195,196]. The mGluR2 PAM ADX71149 showed safety, tolerability, and efficacy toward negative symptoms of schizophrenia in phase IIa clinical trials. In a dose-dependent manner, it significantly ameliorated smoking withdrawal-evoked deficits in attention and episodic memory and reduced ketamine-evoked negative symptoms [197,198]. However, to date, no results of phase III have been released. In 2016, AstraZeneca disclosed the results of phase II of AZD8529, a selective mGluR2 PAM, but no significant improvement in negative and positive symptoms of schizophrenia was demonstrated [199].
The receptors of Group III (mGluR4, mGluR6, mGluR7, and mGluR8) are the least explored among all metabotropic glutamate receptors. They are located mainly at the presynaptic site of neurons, with the exception of mGluR6, which is located at the postsynaptic site of bipolar retinal cells. Group III receptors are similar to Group II in terms of mechanism of action: they signal via Gα i/o to inhibit adenylyl cyclase and modulate the activity of other downstream effectors such as cGMP phosphodiesterase, MAPK, or PI3 kinase pathways [182,186,200]. It has been demonstrated that mGluR4 activation decreases glutamatergic transmission in the hippocampus [201], while mGluR4 knockout resulted in altered prepulse inhibition and a lower acoustic startle response [202]. A variety of mGluR4 agonists have been tested in preclinical studies. LSP1-2111 was effective in reducing MK-801- and AMPH-induced hyperlocomotion and DOI (2,5-dimethoxy-4-iodoamphetamine)-induced head twitches [203]. LSP4-2022 lowered the neurotransmitter release caused by MK-801 and had an antipsychotic effect [204]. LuAF21934 and LuAF32615 attenuated hyperactivity induced by MK-801 and amphetamine and decreased head twitches caused by DOI [205]. Administration of ADX88178 resulted in a reduction of hyperlocomotion caused by MK-801 and head twitches caused by DOI [206].
Knockout of mGluR7 in a mouse model worsened short-term neural plasticity in the hippocampus compared to the wild type and produced deficits in memory and anxiety responses [207]. The mGluR7 NAMs tested in preclinical studies, MMPIP and ADX71743, were successful in normalizing deficits caused by MK-801 and DOI-induced head twitches; however, ADX71743 required lower doses than MMPIP to cause a therapeutic effect [208,209]. Both drugs were also active when tested in models of cognition, attentional deficits, and social interactions [210]. Several other drugs targeting mGluR7 have been synthesized recently, for instance VU6010608 (2017) or VU6027459 (2020), but their utility in schizophrenia treatment has not been investigated yet.
Research on the role of mGluR8 in schizophrenia has provided inconsistent results: some groups demonstrated that knockout of this receptor resulted in subtle behavioral alterations including novelty-induced hyperactivity, delayed stimulus response [211], and anxiety [212]. However, these findings were not confirmed by others [213,214]. Similarly, some preclinical studies showed that the selective mGluR8 agonist (S)-3,4-dicarboxyphenylglycine (DCPG) decreased hyperactivity induced by pharmacological blockade of the NMDA receptor, while others did not confirm normalization of locomotor activity by the drug [215,216]. Despite these discrepancies, mGluR8 should still be considered a potential molecular target in schizophrenia treatment.
GABAB Receptors
Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the brain. Many studies have demonstrated dysfunctions in GABA transmission in schizophrenia pathophysiology [217,218]. GABA activates fast synaptic inhibition via ionotropic GABAA receptors and slow synaptic inhibition via metabotropic GABAB receptors (GBRs) [219]. GBRs are G protein-coupled to K + /Ca 2+ channels and consist of two closely related seven-transmembrane subunits, GABAB receptor 1 (GBR1) and GABAB receptor 2 (GBR2), both of which are required to assemble into a functional receptor. The GBR1 subtype exists in two splice variants, GABABR1a (130 kDa) and GABABR1b (100 kDa) [217]. GBR1 binds orthosteric ligands, while GBR2 couples with the G protein [7], releasing Gα i/o and G βγ subunits when activated [219]. In addition to GABA signaling, GBR activity can also modulate the release of dopamine and serotonin [220].
The abundant expression of GBRs in the cortex and their significant role in learning and memory formation indicate the importance of these receptors in the CNS, but the understanding of GBR function is still limited [217,221].
A series of studies have reported abnormalities in GBRs in schizophrenia [218], and immunohistochemical experiments found decreased GBR1a immunolabeling in the hippocampus, prefrontal cortex, inferior temporal cortex, and entorhinal cortex of schizophrenia patients [217,222]. In addition, the loci for both the GABBR1 (6p21.3) and GABBR2 (5q34) genes have been recognized as susceptibility loci for schizophrenia [218]. Fatemi and coworkers detected a significant reduction in GABBR1 and GABBR2 protein levels in the lateral cerebella and superior frontal cortex of patients with schizophrenia, bipolar disorder, and major depression when compared to healthy controls [218,220]. Though one report showed a weak correlation between the GABBR1 gene and schizophrenia [223], two others found no connection [224,225]. In two microarray studies, increased expression of GABBR1 and GABBR2 mRNA was observed in brain tissue from suicides [226]. Alterations in GBR subunit expression may disturb affinity, transmission, and receptor insertion into the plasma membrane, possibly promoting emotional and cognitive deficits in schizophrenia [220].
Despite the contribution of GBRs to schizophrenic symptoms and extensive drug discovery efforts, to date only two GABAB receptor agonists, baclofen and gamma-hydroxybutyric acid (GHB), have been introduced into clinical use [227]. Baclofen has poor liposolubility and does not cross the blood-brain barrier (BBB) efficiently [227], but its systemic administration reduced behavioral hyperactivity and/or prepulse inhibition deficits in animal models of schizophrenic psychoses induced by methamphetamine [228], MK-801 [229], or phencyclidine [230]. Likewise, baclofen administered intraperitoneally reversed dizocilpine-induced prepulse inhibition disruption and spontaneous gating deficits in juvenile DBA/2 mice, and these effects were blocked by pretreatment with a GBR antagonist [227]. In the prefrontal cortex and hippocampus of DBA/2 mice, decreased GBR expression was found, suggesting that the schizophrenia-like phenotype may be connected to disturbances in the GABAergic system [227]. However, despite promising preclinical data, trials with baclofen in schizophrenic patients turned out to be disappointing. Other studies additionally demonstrated that baclofen could be responsible for hallucinations during severe withdrawal psychosis [217,220].
The second GBR agonist, GHB, has an advantage over baclofen in reaching significant CNS concentrations, owing to evidence for carrier-mediated transport across the BBB [227]. GHB may act directly as a neurotransmitter but may also modulate dopamine transmission via the GHB receptor and GBRs after conversion to extracellular GABA [227]. Dopamine modulation seems to be regulated mainly by the GBR [231], since GABAB1 knockout mice do not display the same behavioral response to GHB administration as the wild type [227].
The GBR antagonists and positive allosteric modulators (PAMs) are under extensive study because they lack the undesirable side effects caused by baclofen [232]. Several preclinical investigations have demonstrated the effectiveness of GBR antagonists in the treatment of cognitive dysfunctions in a rat model of absence epilepsy, or improvement of cognitive task performance through activation of hippocampal θ and γ rhythms in behaving rats [227,233]. Other researchers demonstrated that the GB receptor antagonist SGS742 improved spatial memory, possibly due to weaker binding to the cyclic adenosine monophosphate response element in the hippocampus [233]. Additionally, infusion of the GBR antagonists CGP56999A and CGP35348 into the rat hippocampus produced deficits in prepulse inhibition and affected hippocampal sensory and sensorimotor gating [234]. Another report on an animal model of schizophrenia, the apomorphine-susceptible (APO-SUS) rat and its phenotypic counterpart, the apomorphine-unsusceptible (APO-UNSUS) rat, at postnatal day 20-22, showed that CGP55845 abolished the prepulse inhibition reduction, suggesting that the diminished paired-pulse ratio was caused by increased GBR signaling. The increased expression of the GB1 receptor subunit in APO-SUS rats seems to support this [235].
Research on schizophrenia animal models with positive allosteric modulators of GBRs showed that GS39783 blocked hyperlocomotion induced by MK-801 [236]. Similarly, CGP7930 co-administered with a low dose of baclofen reduced amphetamine-induced hyperlocomotion [237]. CGP7930 has also been described to antagonize psychosis-relevant behavior triggered by hippocampal kindling, including deficits of prepulse inhibition and gating of hippocampal auditory evoked potentials (AEPs) [234]. Furthermore, CGP7930 prevented the ketamine-induced deficit of prepulse inhibition, suppressed hyperlocomotion, and reduced heterosynaptically mediated paired-pulse depression in the rat hippocampus [232]. The most recent analysis of the X-ray crystal structure of the GBR suggests that clozapine, the gold-standard drug in the treatment of resistant schizophrenia, could directly bind to the GABAB receptor in a way similar to baclofen [238].
The signaling pathways downstream of GBRs involve one of three effector proteins: the GIRK family of G protein-activated inwardly rectifying K + channels, voltage-gated N-type Ca 2+ channels, and adenylyl cyclase [219,238]. The GB receptors interact with a variety of other signaling pathways, but these connections are not fully resolved yet. However, a recent study revealed a new functional relationship between the widely distributed GBRs and the densely expressed sodium-activated potassium channels in olfactory bulb neurons. Li and coworkers demonstrated a novel mechanism by which GBR activation inhibits two opposing currents, the persistent sodium current and the sodium-activated potassium current [221]. The broad colocalization of GBRs and sodium-activated potassium channels in the nervous system indicates an important mechanism for GBR neuromodulation. These results suggest a new possibility for controlling cell excitability through GBR modulators [221].
GBRs control synaptic transmission by either inhibiting neurotransmitter release or diminishing postsynaptic excitability. Presynaptic GBRs inhibit neurotransmitter release by modulating calcium channels or interacting with the downstream release machinery [219]. GBRs dampen postsynaptic excitability by releasing G βγ subunits to activate inwardly rectifying K + channels. Local shunting and the slow inhibitory postsynaptic potentials (IPSPs) generated by the opening of these channels enhance the magnesium blockade of NMDARs and indirectly inhibit synaptic responses [239]. This indirect blockade of NMDARs, together with inhibition of voltage-sensitive Ca 2+ channels, indicates a significant mechanism by which GBRs influence calcium signaling in dendrites and spines [240]. Notably, postsynaptic GBRs are frequently located in and near dendritic spines, making them well positioned to influence glutamate receptors [219,241].
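The voltage dependence of the NMDAR magnesium block makes this indirect inhibition easy to illustrate numerically. The Python sketch below uses a standard Jahr-Stevens-type phenomenological expression; the parameter values are the commonly used approximations, not measurements from the cited studies.

```python
# Sketch of why GBR-mediated hyperpolarization indirectly inhibits NMDARs:
# voltage-dependent Mg2+ block (Jahr-Stevens-type phenomenology).
import math

def nmdar_open_fraction(v_mv: float, mg_mm: float = 1.0) -> float:
    """Fraction of NMDAR conductance not blocked by Mg2+ at voltage v_mv."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

# A slow GABA-B IPSP moving the membrane from -60 mV to -75 mV deepens the
# Mg2+ block and thus suppresses NMDAR-mediated Ca2+ influx:
for v in (-60.0, -75.0):
    print(f"V = {v} mV -> unblocked fraction = {nmdar_open_fraction(v):.3f}")
```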
Consistently, two-photon optical quantal analysis revealed that presynaptic GBRs suppress multivesicular release at individual synapses of layer 2/3 pyramidal neurons in the mouse medial prefrontal cortex [219]. The same authors also showed that postsynaptic GBRs directly modulate NMDARs via the PKA pathway. These results demonstrate a new role for postsynaptic GBRs in directly suppressing NMDAR Ca 2+ signals, with little impact on AMPAR or NMDAR synaptic currents [219]. This potent GBR modulation depends on G protein signaling and involves the PKA pathway. The direct suppression of NMDAR calcium signals by GBRs suggests that GBRs are able to modulate not only the electrical properties of neurons, but also biochemical signaling cascades at the synapses. This is an important mechanism by which GABA signaling helps to control neuronal communication in the brain [219].
Classification of Chemokines and Their Receptors
Chemotactic cytokines (chemokines) are small basic peptides (7 to 15 kDa) known as important mediators of inflammatory processes. Based on the number of amino acids between the two cysteines at the amino-terminal end of the molecule, they are classified into four groups: XC, CC, CXC, and CX3C, where C is cysteine and X represents another amino acid [242]. Chemokines usually possess four conserved cysteines, and the formation of disulfide bridges determines their three-dimensional structure ( Figure 6). So far, about 50 chemokines have been identified, along with 10 receptors for the CC subtype, 7 for the CXC subtype, and single receptors for the XC and CX3C chemokines [243]. Although most chemokine receptors belong to the classic G protein receptors, there is also a group of so-called atypical chemokine receptors (ACKRs), with at least 6 representatives [244]. They bind chemokines with high affinity, but due to their structural inability to couple to G proteins, they do not induce cell migration and act mainly as "capturers" of chemokines, reducing inflammation or shaping chemokine gradients [245]. Signals from chemokine receptors are transmitted by two major routes: G proteins and β-arrestin; however, these processes are cell- and tissue-dependent, and can be modulated by the ligands or receptors involved [246,247].
A particularly important aspect of chemokine-induced signaling is that chemokine receptors can bind several chemokines and can act as multimeric forms, homo- or heterodimers [248-250]. Moreover, some chemokines can form complexes with more than one receptor, so that overlapping mechanisms may differentiate the final biological effect. Crucial factors are the affinity of a given chemokine for the receptor and the density of particular receptor types in the cell. Recently, the phenomenon of chemokine receptor oligomerization, which specifically modifies the response to chemokine binding, has become increasingly important. The composition of homo- or heterooligomeric complexes determines their affinity for chemokines, which can lead to the activation of different signaling pathways [251]. In addition, signaling biases have been documented for several chemokine GPCRs [243,252].
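The cysteine-spacing rule behind the XC/CC/CXC/CX3C nomenclature can be expressed directly in code. The Python sketch below classifies a chemokine from the positions of the first two cysteines in the mature N-terminal sequence; the input sequences are illustrative, not real chemokine sequences.

```python
def chemokine_class(nterm: str) -> str:
    """Classify by the spacing of the first two N-terminal cysteines.

    Returns 'CC' (adjacent cysteines), 'CXC' (one residue between),
    'CX3C' (three residues between), or 'XC' (only one cysteine of the
    N-terminal pair present).
    """
    positions = [i for i, aa in enumerate(nterm) if aa == "C"]
    if len(positions) < 2:
        return "XC"
    gap = positions[1] - positions[0] - 1  # residues between the cysteines
    return {0: "CC", 1: "CXC", 3: "CX3C"}.get(gap, "unclassified")

# Illustrative (made-up) N-terminal stretches:
for seq in ("APCCFS", "AKCLCV", "QACAAACP", "AGCDLK"):
    print(seq, "->", chemokine_class(seq))
```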
Functionally, chemokines and their receptors play an important role in the nervous system, acting as trophic and protective factors that increase neuronal survival and regulate neuronal migration and synaptic transmission. Chemokines can be classified as inflammatory or homeostatic, according to the context of their functioning [249]. They are constantly secreted and are responsible for proper cell migration, e.g., during development. In the brain, the level of chemokines increases due to their secretion by many different cells: microglia, astrocytes, oligodendrocytes, and the endothelial cells of blood vessels [253,254]. A particular role is played by the BBB, whose most important function here is the precise exchange of chemical compounds between the CNS and the circulatory system [255]. The integrity of the BBB structure sustains brain homeostasis and supports many neurological functions. Chemokines play a special role primarily in some CNS diseases, when damage to the BBB and the blood-spinal/cerebral fluid barrier causes leukocyte infiltration, triggering inflammatory processes [256].
Chemokines and Their Receptors in Schizophrenia
Although chemokines can trigger a number of downstream signaling pathways, we focus on those involving PLC activity, because of the significance of Ca 2+ released from the endoplasmic reticulum. Among all chemokine subtypes, only several are known to play a role in schizophrenia, and due to limited and sometimes conflicting data, their participation should be analyzed with caution. The discrepancies could result from the heterogeneity of the examined groups of schizophrenic patients, including duration of the disease, age, sex, and treatment response [257,258]. Chemokine levels are mainly determined in serum, but since the expression of their receptors may vary between cell types, the chemokine concentration does not always correlate with schizophrenic symptoms. This may complicate the explanation of the role played by particular chemokines in schizophrenic insults. However, based on many recent studies, at least a few chemokines with pro-inflammatory action appear to be strongly associated with the disease state. The activation of PLC-sensitive signaling pathways has been demonstrated for many chemokines, with the prevalence of those belonging to the CC and CXC classes and one representative of the CX3C type ( Table 2). Elevations of inflammatory chemokines in blood and cerebrospinal fluid, as well as altered function of immune cells in the central nervous system, deregulate the chemokine-mediated network and may contribute to the progression of schizophrenia [259,260]. The processes triggered by migration of immune cells to the brain may also impair neuron-microglia crosstalk by hyperactivation of astrocytes and microglial cells. The subsequent release of pro-inflammatory chemokines activates chemokine receptors, followed by a rise in cytosolic Ca 2+ , and affects chemotaxis, secretion, and gene expression. Chemokines can also be released by activated astrocytes, thus inducing production of reactive oxygen species (ROS) leading to excitotoxic neuronal death. Nowadays, inflammation constitutes an apparent risk factor for schizophrenia, and increased chemokine production during inflammatory conditions may play a role in the development of the disease. Of note, chemokines can be rapidly transported from the blood to the brain through the BBB and trigger a cascade of events contributing to alterations in BBB integrity and the development of BBB breakdown [256]. Accumulating evidence indicates that increased serum levels of the pro-inflammatory chemokines CCL2, CCL4, CCL11, CCL17, CCL22, and CCL24 strongly correlate with schizophrenic symptoms, including cognitive impairments in attention, working memory, episodic and semantic memory, and executive functions [11,261-268]. Several chemokines of the CXC type (CXCL8, CXCL11, CXCL12) were also shown to act through PLC/Ca 2+ downstream signaling [269-271]. An interesting observation is that several prenatal infections and inflammatory biomarkers may contribute to the etiology of schizophrenia, including fetal exposure to CXCL8, which could alter early stages of neurodevelopment [272,273].
Circulating chemokines detectable in serum may be produced by blood cells or endothelium, or may originate from the brain. Hence, their concentration determined in serum may not always reflect their tissue levels. Moreover, due to differences in chemokine half-lives and their higher concentration at the sites of release, the concentration determined in the blood may not correlate with the physiological response. The chemokine nature therefore appears to be ambivalent: they can be protective or contribute to neuronal damage. The obligatory element to initiate chemokine signal transmission is the presence of responsive receptors.
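To see why half-life matters here, consider simple first-order decay: two chemokines released at the same concentration but with different half-lives yield very different blood measurements at sampling time. The Python sketch below uses illustrative half-life values, not measured parameters from the cited studies.

```python
# Toy calculation: differing half-lives decouple the measured blood
# concentration from the amount originally released.
import math

def remaining(c0: float, half_life_min: float, t_min: float) -> float:
    """Concentration left after t_min, assuming first-order decay."""
    return c0 * math.exp(-math.log(2) * t_min / half_life_min)

# Two chemokines released at the same concentration (100 pg/ml),
# sampled 60 minutes later:
for name, t_half in [("chemokine A", 10.0), ("chemokine B", 120.0)]:
    print(name, f"{remaining(100.0, t_half, 60.0):.1f} pg/ml after 60 min")
```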
Whereas the analysis of chemokine levels in schizophrenic patients is quite complex, less information is available for chemokine receptors. As shown in Figure 7, a large number of chemokine receptors can bind more than one ligand. Moreover, the receptors can be differentially expressed in the CNS, making the separation of causes from effects even more complicated. For example, there is evidence that CCL11 at low concentrations can act as a partial agonist at CCR2 and antagonize CCL2 activity, whereas high concentrations are sufficient to activate CCR2 in chemotaxis assays [261,274]. Some receptors have been proposed to form putative heteroreceptor complexes with the NMDA receptor (NMDAR-CCR2, NMDAR-CXCR4) that may also contribute to schizophrenia-like symptoms in mild neuroinflammation [275]. The function of CX3CR1 seems to be the best characterized, since this receptor binds a single chemokine, CX3CL1, which is the only chemokine whose expression is higher in the CNS than in the periphery [276,277]. Communication of microglia with neurons via CX3CR1 signaling is involved in the formation of dendritic spines, facilitates neuron-microglia interactions, and influences microglial activation and synaptic function [278]. Moreover, CX3CL1/CX3CR1 signaling regulates the activation of microglia in response to brain injury or inflammation and induces responses that may have either beneficial or detrimental effects [279,280]. An additional issue is that some chemokines may bind to the receptors without inducing transmembrane signals. Thus, even if the concentration of chemokines in the blood of schizophrenic patients is increased, no evident changes at a physiological and behavioral level may be detected [11,267,281]. This phenomenon may, at least in part, explain the contradictory results reported in several studies.
Figure 7. Chemokines and their receptors in schizophrenia. Some chemokines (underlined here) that are altered in schizophrenia can bind to multiple receptors, and they all act through increasing calcium transients. Based on [10,11,261,264,267-269,275,281-293].
Numerous mechanisms have been discovered in terms of the activity regulation of chemokines and their receptors [250]. Available data demonstrate that around 40% of schizophrenic patients have some degree of inflammation engaging chemokine/receptor complexes [294,295]. In addition, the increased permeability of the BBB in a subset of patients with schizophrenia correlates with enhanced chemokine signaling [256]. Despite the progress that has been made regarding the role of chemokines and inflammatory processes in schizophrenia pathology, the available data are still sparse and mostly correlative. Moreover, inflammation has been detected in numerous neuropsychiatric diseases, thus limiting its relevance for the discovery of new therapeutic approaches to schizophrenia.
Concluding Remarks
An increasing number of reports on schizophrenia clearly indicates its multifactorial etiology, including genomic, epigenetic, endocrinological, and environmental components, which act synergistically to produce disease-specific symptomatology. As we reviewed here, there is also a considerable body of evidence to support abnormalities in neurotransmitter-GPCR signaling as an integral piece of schizophrenia neurobiology. The pathophysiology of this illness involves profound changes in motor function, mood, and cognition derived from a dysfunctional limbic system. The monoamine and neuropeptide pathways have been demonstrated to originate and project within the hippocampus, thalamus, and brainstem. Therefore, it is not surprising that abnormalities in neurotransmitter systems are at the center of both preclinical and clinical studies. However, the existence of potential alterations in GPCR signaling suggests that relief for patients resistant to current medications may be possible only by targeting post-receptor sites. A growing body of evidence indicates that the mechanisms underlying the synthesis and inactivation of second messengers may also offer promise for the rational design and development of efficient drugs for schizophrenia treatment. Moreover, as the signal transduction pathways downstream of GPCRs frequently display unique characteristics, they offer targets for relative specificity of action and hold much promise for novel drugs in long-term schizophrenia treatment. However, the difficulty of translating preclinical results into clinically efficient treatment strategies invariably remains the biggest challenge for the next era in neuropsychopharmacology.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this paper.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "4413b9419983721f44fabe2c3bf83b27eb5d0e6f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/10/5/1228/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4413b9419983721f44fabe2c3bf83b27eb5d0e6f",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nanobodies effectively modulate the enzymatic activity of CD38 and allow specific imaging of CD38+ tumors in mouse models in vivo
The cell surface ecto-enzyme CD38 is a promising target antigen for the treatment of hematological malignancies, as illustrated by the recent approval of daratumumab for the treatment of multiple myeloma. Our aim was to evaluate the potential of CD38-specific nanobodies as novel diagnostics for hematological malignancies. We successfully identified 22 CD38-specific nanobody families using phage display technology from immunized llamas. Cross-blockade analyses and in-tandem epitope binning revealed that the nanobodies recognize three different non-overlapping epitopes, with four nanobody families binding complementary to daratumumab. Three nanobody families inhibit the enzymatic activity of CD38 in vitro, while two others were found to act as enhancers. In vivo, fluorochrome-conjugated CD38 nanobodies efficiently reach CD38-expressing tumors in a rodent model within 2 hours after intravenous injection, thereby allowing for convenient same-day in vivo tumor imaging. These nanobodies represent highly specific tools for modulating the enzymatic activity of CD38 and for diagnostic monitoring of CD38-expressing tumors.
Results
Panning of VHH-phage display libraries from immunized llamas on CD38-transfected cells yields 22 distinct families of CD38-specific nanobodies. Two llamas were immunized with the recombinant non-glycosylated CD38 ecto-domain (aa 46-300) and two llamas were immunized with a cDNA expression vector encoding full-length CD38 ( Figure S1). Phage display libraries were generated by PCR-amplification of the VHH repertoire from blood lymphocytes obtained 4-10 days after the last boost immunization 18,20 . CD38-specific nanobodies were selected by binding of phages to CD38-transfected lymphoma cells. Selected clones were sequenced, and clones that were found more than once, or a plurality of clones with one or a few amino acid substitutions in the CDR regions, were defined as a family. The results revealed selection of clones derived from 22 distinct nanobody families, with CDR3 lengths ranging from 3 to 21 amino acid residues. FACS analyses performed with crude periplasmic lysates from E. coli to detect nanobodies that bound to CD38-transfected but not to untransfected cells confirmed the specificity of the selected nanobody families for CD38 ( Figure S2a). Table 1 provides an overview of the CD38-specific nanobodies. For each family, the number of isolates, the number of variants within a family (ranging from 1-6), and the variant amino acid positions within the CDR3 region are indicated. Some nanobodies showed only little if any intrafamily variation, while others contained members with highly divergent amino acid sequences. Families 5, 14, and 20 contain the three nanobodies (MU375, MU1053, MU551) described in our previous study reporting the 3D structures of these nanobodies in complex with CD38 21 .
Characterisation of monovalent CD38-specific nanobodies carrying a C-terminal His6-c-Myc tag.
For each nanobody family, we subcloned the member that had shown the highest staining intensity of CD38-transfected cells in the periplasmic screening assay ( Figure S2b). To circumvent the problem of endotoxin contamination of nanobodies inherent to the E. coli expression system, we recloned the nanobody encoding region into a eukaryotic expression vector (pCSE2.5) optimized for secretory protein production in suspension cultures of HEK-6E cells in serum free medium [22][23][24] . SDS-PAGE analyses of HEK cell culture supernatants harvested 6d after transfection revealed consistent production levels of ~50 µg nanobody per ml of HEK-6E supernatant ( Figure S3).
Specific binding of purified CD38 nanobodies was determined by off-rate analysis using real-time bio-layer interferometry (BLI) with the immobilized ectodomain of human CD38 (Table 1), revealing dissociation rates ranging from 7.8 × 10 −5 to 6.5 × 10 −3 s −1 . Several nanobodies had very slow off-rates below the detection limit of the instrument (WF121, WF139, MU1105 and WF124). As a reference, the single chain variable fragment (scFv) of daratumumab (see below) was included (kd of 4.4 × 10 −3 s −1 ). In addition, qualitative comparisons of the dissociation rates were performed by flow cytometry, using fluorochrome-conjugated CD38 nanobodies bound to CD38-transfected cells over a timeframe of 16 hours ( Figure S4). The results confirm the strong binding to, and slow dissociation from, native CD38 on the cell surface by monovalent CD38-specific nanobodies.
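For orientation, a dissociation rate of this kind is typically obtained by fitting a single-exponential decay to the dissociation phase of the sensorgram. The Python sketch below illustrates such a fit on synthetic data; it is not the instrument's own analysis software.

```python
# Sketch: estimate a dissociation rate constant (kd) from the dissociation
# phase of a BLI sensorgram via a single-exponential fit. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def dissociation(t, r0, kd):
    """Single-exponential decay of the binding signal."""
    return r0 * np.exp(-kd * t)

t = np.linspace(0, 600, 61)                      # seconds
signal = 1.0 * np.exp(-2.5e-3 * t)               # synthetic sensorgram
signal += np.random.normal(0, 0.005, t.size)     # measurement noise

(r0_fit, kd_fit), _ = curve_fit(dissociation, t, signal, p0=(1.0, 1e-3))
print(f"estimated kd = {kd_fit:.2e} s^-1")       # within the Table 1 range
```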
Three nanobody families inhibit and two nanobody families stimulate the enzymatic activity of CD38. Nanobodies directed to enzymes reportedly show a propensity to block enzymatic activity 25,26 . CD38 catalyzes the synthesis of cyclic ADP-ribose and ADP-ribose from NAD + and the synthesis of cyclic GDP-ribose (cGDPR) from nicotinamide guanine dinucleotide (NGD + ) 4 . Since the latter reaction can be monitored conveniently by fluorimetry, we used this GDPR-cyclase assay to analyze the capacity of CD38-specific nanobodies to modulate the enzymatic activity of CD38. CD38-specific nanobodies from 22 families were analysed for their capacity to modulate the GDPR-cyclase activity of CD38 (Fig. 1). Three nanobodies (JK2, MU1067, MU523; families 4, 20, and 19) inhibited the conversion of NGD + to cGDPR by recombinant CD38 in a dose-dependent manner. Two other nanobodies (WF14 and MU738, families 7 and 9) enhanced CD38-catalyzed synthesis of cGDPR.
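Dose-dependent inhibition of this sort is commonly summarized by fitting a four-parameter logistic (Hill) curve to the activity data to extract an IC50. The Python sketch below shows such a fit on synthetic values; the concentrations and activities are illustrative, not the measured data of Fig. 1.

```python
# Sketch: quantify nanobody modulation of CD38 GDPR-cyclase activity with a
# four-parameter logistic (Hill) fit. All numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])       # nanobody, M
activity = np.array([98.0, 90.0, 55.0, 15.0, 5.0])     # % cGDPR formation

params, _ = curve_fit(four_pl, conc, activity,
                      p0=(0.0, 100.0, 1e-7, 1.0), maxfev=10000)
print(f"IC50 = {params[2]:.2e} M (Hill slope {params[3]:.2f})")
```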
Cross-blockade analyses reveal binding of nanobodies to three non-overlapping epitopes.
Four nanobody families bind CD38 independently of daratumumab. In order to perform comparative binding analyses with monovalent CD38 nanobodies, we cloned the antigen-binding domain of daratumumab in a monovalent scFv format, designated Dara scFv ( Figure S6). We performed cross-blockade analyses with Alexa 647 -conjugated Dara scFv to determine which of the CD38-specific nanobodies could bind to CD38 independently of Dara scFv. Interestingly, binding of Dara scFv was blocked by preincubation of CD38-expressing cells with nanobodies from each of the three epitope groups, including all epitope 1 families (Table 2, Figure S5). Three nanobodies did not interfere with binding of Dara scFv: one of three epitope group 2 nanobodies (JK2, family 4) and two of eight epitope group 3 nanobodies (WF9 and JK36; families 1 and 2). Two additional epitope group 3 nanobodies, MU1105 and WF14 (families 7 and 22), partially blocked binding of Alexa 647 -Dara scFv.
In addition, the complete panel of nanobodies from the 22 families and Dara scFv was assessed for in-tandem binding by BLI analysis to allow further mapping of subgroups within the three main clusters. Nanobodies were non-conjugated, and simultaneous binding was assessed in both orders, i.e. injection as the first or second analyte (Fig. 2, Table S1). Within epitope group 1, several families competed with a selection of families within epitope group 3, which may indicate that these families recognize a bridging epitope located between epitopes 1 and 3. For most nanobodies binding was independent of the order of injection, but for some, like WF14 in family 7 and MU738 in family 9, the tandem binding profile differed depending on the order. Interestingly, families 7 and 9 were identified as potentiators of CD38 enzyme activity (Fig. 1), and hence it is conceivable that these sensitize CD38 by stabilizing a more active conformation.
These results of in-tandem epitope binning analyses of Dara scFv with CD38-specific nanobodies confirmed the independent binding of Dara scFv and nanobody families 1, 2, and 4. Moreover, three distinct members of family 22 were shown to bind CD38 in conjunction with Dara scFv, irrespective of the order of injection, while binding of family 7 was only observed when Dara scFv was allowed to bind first, in support of a conformational mechanism ( Figure S7, Table S1). Hence, within epitope group 3, families 1, 2 and 22, and within epitope group 2, family 4, represent subgroups that bind to an epitope that is non-overlapping with Dara scFv. Taken together, CD38 nanobodies from four distinct families and two non-overlapping epitope groups are capable of binding CD38 in conjunction with Dara scFv.
Nanobodies bind to human CD38 on lymphoma cell lines, peripheral blood NK and B cells, and primary myeloma cells.
Next, purified fluorochrome-conjugated monovalent anti-CD38 nanobodies were analyzed for binding to native CD38 on the cell surface of human tumor cells, NK cells, and B cells (Fig. 3). The results confirm high-level CD38 expression by established tumor cell lines derived from multiple myeloma (LP-1) and Burkitt's lymphoma (CA46, Daudi) (Fig. 3a). On peripheral blood leukocytes of normal donors, all nanobodies showed high-level staining of CD16 + NK cells and a subset of CD19 int B cells, and much lower staining of T cells and CD19 hi B cells, consistent with the known expression of CD38 by these cells (Fig. 3b). The same staining pattern was observed with the conventional mAb LS198-4-3 that is commonly used for routine diagnostics. We further analyzed the utility of the nanobodies to detect tumor cells in primary bone marrow samples from patients with multiple myeloma. The results show specific discrimination of myeloma cells (CD45 lo /CD56 hi ) with CD38-specific nanobodies (Fig. 3c). We next set out to determine whether the nanobodies that bind independently of Dara scFv to a non-overlapping epitope could also stain tumor cells in a therapeutic setting, i.e. when cells are saturated with intact daratumumab. To this end, we preincubated LP-1 myeloma cells with a large excess of Darzalex ® before incubation with fluorochrome-conjugated nanobodies (Fig. 4). The results show that nanobodies JK2 and JK36 effectively stain cell surface CD38 even after opsonization with daratumumab.
Specific detection of CD38+ tumors in vivo with nanobody MU1067 conjugated to the near-infrared dye Alexa680. Next, we determined whether CD38-specific nanobodies could be used as imaging agents to detect CD38-expressing tumors in vivo (Fig. 5). To this end, we used a two-sided tumor model in nude mice bearing untransfected and CD38-transfected lymphoma cells injected subcutaneously in the left and right flanks. To allow in vivo imaging with the IVIS200 system, nanobody MU1067 was conjugated to the near-infrared dye Alexa680, and specific binding of Alexa680-MU1067 to CD38-expressing cells was confirmed by flow cytometry. Seven days after injection of tumor cells, Alexa680-MU1067 (50 µg/mouse, 2.5 mg/kg) specifically detected CD38+ tumors in vivo already within 1 hour after nanobody injection (Fig. 5a,b). At this time point, very strong signals were also detected in the kidneys, consistent with renal excretion of excess unbound nanobody (15 kDa). At 2 hours after injection, signals from the CD38+ tumor exceeded those of the kidneys. At the time of sacrifice (48 h post injection), the CD38+ tumors continued to show high signals, while signals in other tissues returned to background levels (Fig. 5c), with low fluorescent signals still detectable in the kidneys. While the liver itself showed only background fluorescence, fluorescent signals in the gall bladder at the time of sacrifice likely reflect biliary excretion of fluorochromes. In conclusion, in the time window from 2 to 24 h post injection, high tumor-to-background ratios were observed in all animals (Fig. 5b).
Figure 2.
In-tandem binding analyses of nanobodies to immobilized recombinant CD38 by BLI. Sequential binding analyses of nanobodies to the glycosylated extracellular domain of CD38 immobilized on AR2G biosensors were performed using the Octet RED384. Hierarchical clustering was performed using Ward's method. The Y-axis refers to the first loaded analyte, the X-axis to the second analyte. White: no additional binding of the second agent was observed, e.g., the epitope was occupied or hindered by the loaded agent. Green: additional binding of the second agent was observed. Self-binning is indicated along the diagonal in white boxes.
Discussion
The goal of this study was to generate nanobodies directed against the cell surface ecto-enzyme CD38 as new diagnostic and potential therapeutic tools for hematological malignancies. We successfully identified 22 families of CD38-specific nanobodies from phage display libraries generated from immunized llamas. Our results show that some of these nanobodies modulate the enzymatic activity of CD38 and allow specific detection of CD38-expressing tumors in vivo.
Eleven of the 22 nanobody families were obtained from protein-immunized llamas after panning on the aglycosylated CD38 ectodomain; the other 11 nanobody families were obtained from cDNA-immunized llamas by panning on CD38-transfected cells in solution. Since the four llamas used were outbred and genetically diverse, it is not possible to conclude that one or the other strategy is better. However, it is perhaps noteworthy that three of the four families that bind CD38 independently of daratumumab (families 1, 2, and 4) were derived from genetic immunization, whereas the clone with the highest affinity (MU523, family 19) was derived from a llama immunized with protein.
All CD38 nanobodies bind to three independent, non-overlapping epitopes (Fig. 6a). Interestingly, all nanobodies from epitope binning group 1 and many nanobodies from epitope groups 2 and 3 interfered with binding of Dara scFv. The nanobody CDR3 loop can fold over a side of the variable domain to increase the interaction surface with the antigen, and the solvent-accessible surface area of a nanobody can be as large as that of a VH-VL pair 25. However, the paired variable domains of Dara scFv are roughly twice as large as the single variable domain of a nanobody. Consistently, the results of the tandem epitope binning analyses by Octet show that the binding site of Dara scFv is larger than that of the CD38-specific nanobodies. Although no structural data are available on the binding of daratumumab, it presumably uses both its VH and VL domains for binding to CD38, i.e., it can be expected to cover roughly twice as large a surface area of CD38 as the nanobodies. One nanobody family of epitope group 2 (JK2, family 4) and a subgroup of three nanobody families within epitope group 3 (WF9, JK36, and MU1105; families 1, 2, and 22) bound CD38 independently of daratumumab. Nanobodies JK2 and JK36 effectively recognize cell surface CD38 even after opsonization with saturating doses of daratumumab. These nanobodies could potentially be used to monitor expression of CD38 on the cell surface of lymphocytes and tumor cells in daratumumab-treated patients.
We have previously determined the precise epitopes of three different CD38 nanobodies within epitope groups 1 and 2 by co-crystallization with the CD38 ectodomain (Fig. 6b). Structural information for nanobodies MU375 (family 5, epitope 1), MU1053 (family 14, epitope 1), and MU551 (family 20, epitope 2) is available under PDB codes 5F21, 5F1O, and 5F1K, respectively 21. All three nanobodies compete with the single-chain variable fragment of daratumumab for binding to cells and to recombinant protein, suggesting overlapping epitopes. In line with this, Ser274 of CD38 (Fig. 6b), which is described to be important for daratumumab binding 13, is part of the footprint of each of the two epitope 1 nanobodies on CD38 21. Within epitope group 1, family 14 differs from family 5 in that it competes with a subset of epitope group 3 nanobodies. The structural data confirm that MU1053 (family 14) is directed more toward the back side of the protein, further away from the catalytic pocket, than MU375 (family 5).
Previous studies have uncovered a striking propensity of nanobodies from immunized llamas to bind to the active site of enzyme antigens 20,23,25-27. In our study, three of 22 nanobody families, i.e., all epitope 2 nanobodies, blocked CD38-catalyzed conversion of NGD+ to cyclic GDP-ribose (cGDPR) in a dose-dependent fashion, whereas two nanobody families (families 7 and 9 from epitope groups 3 and 1, respectively) potentiated CD38 enzyme activity. In this context, it is also of interest to note that only one of well over 100 monoclonal antibodies generated against CD38 has been shown to inhibit the enzyme activity of CD38 28. The crystal structure of this mAb, SAR650984, in complex with CD38 revealed binding far away from the active site crevice (Fig. 6b), implying an allosteric mode of action 28. It seems likely that the enzyme-inhibiting nanobodies similarly act in an allosteric fashion, considering that the binding site of the MU551 family 20 nanobody is also located away from the active site crevice (Fig. 6b). Similarly, the nanobodies that were found to potentiate the catalytic activity of CD38 may act in an allosteric manner, given the observation of conformational constraints for these nanobodies in the tandem binding studies. It has been suggested that metabolites of NAD+ generated by CD38 in the tumor microenvironment can promote tumor growth and immunosuppression 8. Thus, it is conceivable that blocking the enzymatic activity of CD38 may be of therapeutic benefit in cancer. If so, this could influence the choice of nanobodies as therapy candidates for pre-investigational new drug (pre-IND) experiments. In particular, use of the antagonistic nanobody family 4, which binds independently of daratumumab, would be feasible even in patients undergoing daratumumab treatment. It will thus be interesting to determine whether allosteric modulation of CD38 enzyme activity in vivo by CD38 nanobodies can counteract its purported immunosuppressive and tumor-promoting effects in the tumor microenvironment 8.
In a subcutaneous xenograft tumor model in nude mice, we examined the capacity of CD38-specific nanobodies to specifically target CD38-expressing tumor cells. In this model, the subcutaneous location and the nude skin minimized quenching of fluorescent signals from the NIRF-conjugated nanobodies by muscle, bone, or hair and thus facilitated in vivo imaging. Moreover, since the nanobodies do not cross-react with mouse CD38, the mouse model provided a clear background. The results of these experiments clearly demonstrate the capacity of nanobodies to specifically target CD38+ vs. CD38− tumors. In a clinical setting, radionuclide-labeled nanobodies can be expected to provide higher sensitivity at lower doses 29, but also higher background signals due to binding of nanobodies to endogenously expressed CD38 in healthy tissues. Besides their high specificity and affinity for CD38, the efficient imaging of CD38+ tumors can be attributed to the small size of the nanobody, which allows excellent tissue and tumor penetration 15,16,29 and fast clearance of excess unbound nanobodies from the circulation by renal excretion 30. Hence, the current panel of high-affinity monovalent CD38-specific nanobodies is attractive for use as companion diagnostics for anti-CD38 therapies.
For therapeutic applications, nanobodies can readily be humanized, e.g., by fusion to the hinge and Fc domains of human IgG1 31. Moreover, the VHH domain itself can be humanized by substituting framework residues to more closely resemble those of human VH domains 32, as is done routinely for llama-derived nanobodies in clinical development 33,34. In conclusion, our results underscore the potential of nanobodies for modulating the enzymatic activity of CD38 and for specific in vivo detection of CD38+ tumors. Importantly, we describe four nanobody families that bind independently of daratumumab and that could therefore be valuable for monitoring the efficacy of daratumumab therapy, since they can still detect CD38 after binding of daratumumab. The nanobodies reported here thus hold promise as new diagnostic and potential therapeutic tools for multiple myeloma and other CD38-expressing malignancies.
Methods
Protein production and llama immunizations. The extracellular domain (aa 46-300) of a CD38 variant in which the three potential N-linked glycosylation sites were inactivated was produced as a secretory protein in yeast cells and purified as described previously 3 . The extracellular domain of CD38 (aa 46-300) with intact glycosylation sites was produced as a secretory protein with a chimeric His6x-Myc epitope tag in the pCSE2.5 vector 22 (kindly provided by Dr. Thomas Schirrmann, Braunschweig). For cDNA immunization, the full-length open reading frame of CD38 was cloned into the pEF-DEST51 expression vector. Two llamas (Lama glama) (designated 10 and 25) were immunized subcutaneously with purified recombinant aglycosylated protein emulsified with Specol adjuvant (240 µg in 500 µl total volume) 20,27,35 . Two llamas (designated 538 and 539) were immunized by ballistic cDNA immunization 20,36 . The humoral immune response was monitored in serially diluted serum by ELISA on microtiter plates (Nunc MaxiSorp, Thermo Fisher Scientific, Waltham, MA) coated with recombinant CD38, using monoclonal antibodies directed against llama IgG2 and IgG3, kindly provided by Dr. Judith Appleton, Cornell University, NY 37 . Animals were bled 4-18 days after the 3rd or 4th boost.
Cells. The Yac-1 and DC27.10 mouse lymphoma cell lines were transfected with the linearized full-length human CD38 expression vector pEF-DEST51. Stable transfectants were selected in medium containing blasticidin and by fluorescence-activated cell sorting. Human multiple myeloma (RPMI-8226, U266, LP-1) and Burkitt's lymphoma (CA46, Daudi) cell lines were obtained from the Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures, Braunschweig, Germany. Bone marrow aspirates of patients MM123 and MM129 were obtained after written informed consent, as approved by the ethics committee (Ethikkommission der Ärztekammer Hamburg, PV4767).
Construction of phage display library and selection of CD38-specific nanobodies.
Mononuclear cells were isolated from 120 ml of blood by Ficoll-Paque™ (GE Healthcare, Chalfont St Giles, UK) gradient centrifugation. RNA purified from these cells with TRIZOL reagent (Invitrogen, Carlsbad, CA) was subjected to cDNA synthesis with random hexamer primers. The VHH coding region was amplified by PCR with degenerate VHH-specific primers 20,23 . PCR products were purified from agarose gels, digested sequentially with SfiI and NotI (NEB, Ipswich, MA), and cloned into the pHEN2 phagemid vector downstream of the PelB leader peptide and upstream of the chimeric His6x-Myc epitope tag 27,38 . Transformation into XL1-Blue E. coli (Stratagene, La Jolla, CA) yielded libraries with sizes of 4.0 × 10⁵ to 10⁷ clones. Phage particles were precipitated with polyethylene glycol from culture supernatants of E. coli transformants infected with a 10-fold excess of M13K07 helper phage (GE Healthcare, Chalfont St Giles, UK).
Panning of specific phage was performed using either the recombinant aglycosylated human CD38 ectodomain immobilized on microtiter plates (Nunc MaxiSorp, Thermo Fisher Scientific, Waltham, MA) or, in solution, CD38-transfected Yac-1 cells. Phage particles (1.6 × 10¹¹) were incubated with recombinant CD38 or CD38-transfected cells for 60 min with agitation at room temperature in PBS, 10% Carnation non-fat dry milk powder (Nestlé, Glendale, CA). Following extensive washing, bound phages were eluted from ELISA plates with 50 mM diethylamine and neutralized with 1 M Tris-HCl pH 8. Phages were eluted from transfected cells by trypsinization. Eluted phages were titrated and subjected to one or two more rounds of panning following the same procedure. Phage titers were determined at all steps by infection of TG1 E. coli cells (Stratagene, La Jolla, CA). Plasmid DNA was isolated from single colonies and subjected to sequence analyses using pHEN2-specific forward and reverse primers.
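For illustration only, titration by colony counting follows the standard dilution arithmetic sketched below; the numbers and the helper name are hypothetical and are not taken from the protocol above.

```python
def phage_titer_cfu_per_ml(colonies: int, dilution_factor: float,
                           plated_volume_ml: float) -> float:
    """Standard titer calculation: colonies corrected for dilution and volume."""
    return colonies * dilution_factor / plated_volume_ml

# e.g., 160 colonies from 10 µl of a 1e6-fold dilution -> 1.6e10 cfu/ml
print(f"{phage_titer_cfu_per_ml(160, 1e6, 0.010):.2e}")
```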
Production and reformatting of nanobodies. Monomeric nanobodies were expressed in HB2151 E. coli cells (GE Healthcare, Chalfont St Giles, UK) 20,23 . Protein expression was induced with IPTG (Roche, Rotkreuz, Switzerland) when bacterial cultures had reached an OD600 of 0.5, and cells were harvested after further cultivation for 3-4 h at 37 °C. Periplasmic lysates were generated by osmotic shock, and bacterial debris was removed by high-speed centrifugation. Nanobodies were readily purified from E. coli periplasmic lysates by immobilized metal affinity chromatography (IMAC).
The coding region of selected nanobodies was subcloned using NcoI/PciI and NotI upstream of a chimeric His6x-Myc epitope tag into the pCSE2.5 vector 22 (kindly provided by Thomas Schirrmann, Braunschweig). Daratumumab scFv was generated by gene synthesis using the published sequence (WO 2011/154453), fusing the VH domain to the VL domain via a 15-residue GS linker flanked by NcoI and NotI sites, and was cloned upstream of a chimeric His6x-Myc epitope tag into the pCSE2.5 vector.
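As a minimal sketch of the construct layout described above, the following assembles a VH-linker-VL protein sequence in silico; the (Gly4Ser)x3 composition assumed for the 15-residue GS linker is a common convention rather than a detail taken from the patent sequence, and the domain sequences are placeholders.

```python
# Assemble a VH-(G4S)3-VL scFv protein sequence (illustrative only).
VH = "EVQLVESGGGLVQPGG"  # placeholder heavy-chain variable domain fragment
VL = "DIQMTQSPSSLSASVG"  # placeholder light-chain variable domain fragment
LINKER = "GGGGS" * 3     # 15-residue flexible linker (assumed composition)

scfv = VH + LINKER + VL
print(len(LINKER))  # 15
```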
Off-rate determination. Off-rates of CD38 nanobodies were determined by BLI technology, using an Octet RED384 instrument (ForteBio). HBS-EP+ (0.01 M HEPES pH 7.4, 0.15 M NaCl, 3 mM EDTA, 0.05% v/v Surfactant P20) was used as running buffer. Assays were performed at 25 °C. The shake speed during biosensor preparation and off-rate determination was set at 1000 rpm. Amine Reactive 2nd Generation (AR2G) biosensors (ForteBio) were activated for 10 minutes with EDC (20 mM)/NHS (10 mM), and recombinant human CD38 was loaded at 10 µg/ml in 10 mM sodium acetate pH 6 for 15 min. After immobilization, surfaces were deactivated with 1 M ethanolamine (pH 8.5) for 10 min. During off-rate screening, 100 nM and 1 µM nanobody were allowed to associate for 5 min on immobilized human CD38, followed by a 10 min dissociation. After each cycle, the human CD38 surfaces were regenerated with 5 short pulses (5 s each) of 100 mM HCl followed by running buffer. Data processing and off-rate determination were performed with ForteBio Data Analysis Software Version 9.0.0.12. Sensorgrams were double-referenced by subtracting (1) running buffer on a reference biosensor containing only human CD38 and (2) nanobody interaction on parallel reference biosensors on which no human CD38 was immobilized. Processed curves were evaluated by fitting with a 1:1 model.
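As an illustrative sketch (the actual fitting was done by the vendor software above), the dissociation phase of a 1:1 model reduces to a single-exponential decay, R(t) = R0·exp(−koff·t); the synthetic data and SciPy-based fit below are assumptions, not the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def dissociation(t, r0, koff):
    """Dissociation phase of a 1:1 binding model: R(t) = R0 * exp(-koff * t)."""
    return r0 * np.exp(-koff * t)

# Synthetic double-referenced sensorgram: 10 min dissociation, koff = 1e-3 /s
t = np.linspace(0, 600, 301)
rng = np.random.default_rng(0)
signal = dissociation(t, 1.0, 1e-3) + rng.normal(0, 0.005, t.size)

(r0_fit, koff_fit), _ = curve_fit(dissociation, t, signal, p0=(1.0, 1e-2))
print(f"fitted koff = {koff_fit:.2e} 1/s")
```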
CD38 epitope binning.
In-tandem epitope binning of CD38-specific nanobodies and Dara scFv was performed on an Octet RED384 instrument (ForteBio). HBS-EP+ (0.01 M HEPES pH 7.4, 0.15 M NaCl, 3 mM EDTA, 0.05% v/v Surfactant P20) was used as running buffer. Experiments were performed at 20 °C. The shake speed during biosensor preparation and epitope binning was set at 1000 rpm. Amine Reactive 2nd Generation (AR2G) biosensors (ForteBio) were activated for 10 min with EDC (20 mM)/NHS (10 mM), and human CD38 protein was loaded at 10 µg/ml in 10 mM sodium acetate pH 6 for 15 min. After immobilization, surfaces were deactivated with 1 M ethanolamine (pH 8.5) for 10 min. For epitope binning, 100 nM of nanobody 1 was loaded for 3 min on immobilized CD38 to saturate all available epitopes. Nanobody 2 was then presented, after a 10 s dip in running buffer, for 3 min, followed by a 1 min dissociation. After each cycle, the human CD38 surfaces were regenerated with 5 short pulses (5 s each) of 100 mM HCl followed by running buffer. Data were processed with ForteBio Data Analysis Software Version 9.0.0.12. Binding levels of nanobody 2 were determined at the end of the 3 min association and compared to levels at baseline (beginning of association). Irrelevant nanobody controls were included. The binding level of nanobody 2 for each nanobody 2-nanobody 1 pair was divided by the binding response of nanobody 2 on a CD38 surface saturated with nanobody 2 (self-binning). Normalized data were hierarchically clustered using Ward's method (distance measure: half square Euclidean distance; scale: logarithmic) and visualized in Spotfire (TIBCO Software Inc.).
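A minimal sketch of the normalization and clustering steps described above is given below. The 3 × 3 response matrix is synthetic, and SciPy's Euclidean-based Ward criterion is used as a stand-in for the half-square-Euclidean variant in Spotfire (a constant rescaling of merge costs that should not change the merge order).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# raw[i, j]: binding level of nanobody j (second analyte) after the CD38
# surface was saturated with nanobody i (first analyte). Synthetic example.
raw = np.array([
    [0.05, 0.90, 0.10],
    [0.85, 0.04, 0.80],
    [0.12, 0.88, 0.06],
])

# Normalize each second-analyte response by its self-binning value (the
# diagonal), as described in the text; clip to avoid log of zero.
self_binning = np.diag(raw)
norm = np.clip(raw / self_binning[np.newaxis, :], 1e-3, None)

# Hierarchical clustering of the log-scaled binning profiles (Ward's method)
Z = linkage(np.log10(norm), method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))  # cluster labels per nanobody
```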
Fluorimetric enzyme assay. CD38 catalyzes both the synthesis of cADPR and nicotinamide from β-NAD+ and the fast hydrolysis of cADPR to ADPR. A fluorimetric enzyme assay with slower kinetics has been developed using nicotinamide guanine dinucleotide (NGD+) as substrate 4 . NGD+ is converted to cyclic GDP-ribose (cGDPR) and nicotinamide, followed by a very slow hydrolysis of cGDPR to GDPR, leading to accumulation of the fluorescent product cGDPR. Enzymatic production of cGDPR from NGD+ (80 µM, Sigma, St Louis, MO) was monitored continuously for 50 min at an emission wavelength of 410 nm with the excitation wavelength set at 300 nm, using a Hitachi F-2000 fluorimeter. Anti-CD38 nanobodies were pre-incubated at final concentrations of 400, 40, 4, and 0.4 nM with 5 nM recombinant glycosylated extracellular domain of CD38 for 15 min at RT before addition of NGD+ and further incubation in the dark at RT, in triplicate wells for each treatment. Readings (EX300/EM410) from wells without CD38 were subtracted from all sample readings and plotted for each nanobody concentration in Relative Fluorescence Units (RFU) vs. time. The rate of cGDPR production was calculated as the slope of these curves (RFU/s) during the linear phase of the reaction, i.e., between t = 10 min and t = 20 min. For nanobody dissociation analyses, two separate aliquots of CD38-transfected cells were incubated either with Cell Proliferation Dye eFluor 450 (eBioscience) or with Alexa647-conjugated nanobodies for 20 min at 4 °C. Cells were washed four times, mixed at a 1:1 ratio, and further incubated at 4 °C or at 37 °C for 0.5, 2, or 16 h before FACS analyses. The dissociation of nanobodies from the target cells and their association with the eFluor 450-labeled cells were analyzed using FlowJo software (Tree Star).
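The slope computation over the linear phase can be illustrated as follows; the trace is synthetic and the helper name is hypothetical.

```python
import numpy as np

def cgdpr_rate(t_s: np.ndarray, rfu: np.ndarray,
               t_start: float = 600.0, t_end: float = 1200.0) -> float:
    """Slope (RFU/s) over the linear phase between t = 10 min and t = 20 min."""
    mask = (t_s >= t_start) & (t_s <= t_end)
    slope, _intercept = np.polyfit(t_s[mask], rfu[mask], 1)
    return slope

# Synthetic background-subtracted trace: 50 min at one reading per 10 s
t = np.arange(0, 3001, 10.0)
rfu = 0.8 * t / (1 + t / 2500)  # saturating curve, roughly linear early on
print(f"rate = {cgdpr_rate(t, rfu):.3f} RFU/s")
```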
In vivo and ex vivo imaging.
Tumor graft experiments were conducted using athymic nude mice (NMRI-Foxn1nu) obtained from Charles River Laboratories (Sulzfeld, Germany). Experiments were performed in accordance with international guidelines on the ethical use of animals and were approved by the animal welfare commission (Amt für Verbraucherschutz, Lebensmittelsicherheit und Veterinärwesen Hamburg, Nr. 17/13). Prior to optical molecular imaging in vivo, 8-10-week-old mice were kept on an alfalfa-free diet for 7 d to minimize autofluorescence of the intestine. For generation of tumor grafts, mice were injected s.c. on the right side with 1 × 10⁶ CD38-transfected DC27.10 cells and on the left side with 1 × 10⁶ untransfected DC27.10 cells, each in 0.2 ml of a 50:50 mix of RPMI medium and Matrigel (BD Biosciences, Franklin Lakes, USA). After 7 d, i.e., when tumors reached ~8 mm in diameter, 50 µg of Alexa680-conjugated nanobody MU1067 was injected i.v. via the tail vein. Similar doses have been found to yield good tumor-to-background ratios in previous studies using nanobodies conjugated to near-infrared fluorochromes 30,39 . Optical molecular imaging was performed before injection and at the indicated time points after injection. For optical molecular imaging, mice were anesthetized with isoflurane and positioned in the imaging chamber of a small animal imaging system (IVIS-200, Caliper Life Sciences, Hopkinton, Massachusetts, USA). After qualitative imaging in vivo, quantitative analyses were performed by placing ROIs around the CD38-positive tumors, the CD38-negative tumors (negative control), and the hind limb (background signal). Total radiant efficiency was determined with Living Image 4.2 software (Caliper Life Sciences). The tumor-to-background ratio was calculated by dividing the tumor uptake value by the background value. For ex vivo validation of the in vivo measurements, animals were sacrificed 48 h post-injection. Tumors and organs (spleen, lungs, liver, kidneys, stomach, ileum, and muscle) were dissected and imaged with the IVIS-200. | 2018-04-03T02:04:59.654Z | 2017-10-30T00:00:00.000 | {
"year": 2017,
"sha1": "378a8a1342e6a06c7b45920dd91f6140a9db8c2a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-14112-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "378a8a1342e6a06c7b45920dd91f6140a9db8c2a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
235292251 | pes2o/s2orc | v3-fos-license | Research on the Utility and Efficiency of Public Management Supported by Computer Big Data
In recent years, China has invested substantial capital and talent development in public management. With the support of computer big data models, government services can reach people's lives faster and more effectively, and public awareness of public management can be strengthened. In the 21st century, against the background of the Internet, both citizens' awareness of public management and the government's emphasis on it have grown. The purpose of strengthening public management is to meet people's needs for a better and healthier life; it is also an inevitable requirement for achieving common prosperity. In the practice of public management, opportunities and challenges coexist: strengthening supervision of the talent market and raising citizens' awareness both serve to improve public management as China's economy develops, so that the reform and development of China's social economy move a step forward and the weight of public management in China's economy increases. Only when public management is handled well can society advance to a new stage. Public management is carried out by various departments cooperating with the Chinese government at the core, so that public administration can function better; it is diversified, encompassing social public organizations and other social organizations [1]. The purpose of public management is to promote the overall coordinated development of society and the realization of social public interests.
Introduction
Under China's new economic model, public management has reached a new level. Influenced by economic management and driven by the Internet, China's public management has shown an improved development model, and the government is investing to promote the public management system. Public management is a management model, approach, paradigm, and discipline framework distinct from traditional public administration.
What is public management
Public management is a strategic model proposed by the national administrative departments to support the rapid development of China's economy. Starting from society, it addresses management demands to the Chinese people and asks everyone to cooperate in jointly improving public management. It is a new and effective management mode proposed in response to defects in government management. On the one hand, it emphasizes the purpose of public management, strengthens the protection of the people, improves fairness, and makes the government's standing in the eyes of the Chinese people more prominent [3]. On the other hand, it increases supervision of the government, so that the use of public power is managed more scientifically and effectively.
Purpose of public management
Public management is a double-edged sword. It can strengthen supervision, close loopholes, address the reasonable demands of the masses more quickly and conveniently, further improve China's management efficiency, and, through greater supervision of government departments, also raise the rate of economic growth [4]. The rationalization of public management means integrating the various forces of the public sector, with the government at the core, and making extensive use of political, economic, managerial, and legal methods to strengthen the government's governance capacity and improve government performance and the quality of public services, thereby realizing public welfare and the public interest. In the field of public management, the Internet enables faster and more efficient administration.
The need to strengthen the efficiency of public management
Improving public management further strengthens the government guarantee system, underscores the necessity of public management, and raises public attention to it. Public management is a necessary issue in China's economic development, and strengthening it improves the efficiency of China's economic and public administration, a necessary route for China to become a powerful country in science and technology. Talent training should be increased: through talent selection in colleges and universities, outstanding students can put forward better suggestions, from which the essence is taken and the dross discarded, retaining what suits China [5]. Policies that increase funding and public supervision allow the people to promptly raise problems with government omissions so that they can be solved, strengthening China's management model; where public management falls short, government and citizens assist and cooperate with each other to achieve better and more convenient public management policies.
The content of public management
As a discipline, the content of public management includes government management, administrative management, urban management, public policy, development management, education and economic management, labor and social security, and so on. The rise of public management has benefited from the global new public management movement. The core of China's public management is the government, and strengthening the content of public management is key to further guaranteeing China's economic development. Public management is taken from the people and used for the people, which strengthens the management system faster and better and reflects China's pace of development. Its content is wide in scope: it is a branch of public administration that studies the activities, techniques, and methods of various public organizations, with government administrative organs at the core, in the management of public affairs.
Figure 2. Internet use
In the context of the Internet, the conditions of social life can be learned more quickly online, which further promotes the operation of the public management system. The Internet offers a model that can improve lifestyles more conveniently and speed up the operation of public management; with its help, the public management model can run better and serve society more efficiently, and the model itself can be further improved. Its main body is diversified. In the context of globalization, the Internet is the fastest and most convenient channel, and it has also improved people's quality of life [6]. Increasing investment in the Internet likewise facilitates the operation of the public management system and strengthens its main actors, allowing the people's social conditions and public opinion to be understood better so that final solutions and practical methods can be worked out. Under a computer-supported public management system, the two are combined, each highlighting the other's strengths; their joint use better demonstrates development under the public management model and can accelerate China's social and economic development, thus realizing public management in the Internet era.
The Internet improves the efficiency of public management
The biggest advantage of the Internet is that it can capture the needs of people's lives faster and better. This not only improves the efficiency of public management but also raises people's living standards [7]. In the Internet era, the big data model is the most efficient, fastest, and most convenient way to further improve the speed of the public management system. At this stage, the field of public management models is continuously expanding, which further reflects the importance of the Internet, strengthens China's economic management system, and highlights the central place of the Internet in the public management model.
How to strengthen the use of the Internet
While strengthening the Internet, China should also vigorously cultivate technical talent and increase Internet use, making people's use of time faster and more convenient and highlighting the practical value of public management in the Internet era. With the promotion of the Internet, public management has taken a step forward: Internet technology has been vigorously developed, and excellent talent has been selected to promote the country's development.
The extent of Internet usage in China
The Internet has become an indispensable part of people's lives. It can absorb the opinions of the people faster and more conveniently and quickly bring the big data model into China's public management system, serving the people better [8,9]. The rapid promotion of the big data model not only absorbs more university talent but also provides many jobs for the unemployed, which reduces social pressure in China and better promotes the new strategic model of China's economic development.
Conclusion
Under computer big data technology, many colleges and universities in China provide a good entrepreneurial platform for college students through innovation and entrepreneurship programs. In the environment of big data models, many difficult employment problems are addressed, including the problem that college students cannot find employment immediately after graduation [10]. Big data offers many solutions to the problems of college students' entrepreneurship. To facilitate faster employment of college students, innovation should be vigorously developed, employment channels expanded, the use of scientific and technological talent increased, and funding invested to provide the best platform for them. In the processing and analysis of big data, information technology integration is not merely a new application node; it has become an innovative way to speed up the pace of life and help people find jobs. | 2021-06-03T01:39:24.668Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3f1dd2243df15ee2791604fe83eade7c430c99ef",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1915/3/032016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3f1dd2243df15ee2791604fe83eade7c430c99ef",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
233935602 | pes2o/s2orc | v3-fos-license | Uncovering the Role of Biophysical Factors and Socioeconomic Forces Shaping Soil Sensitivity to Degradation: Insights from Italy
Following an operational framework derived from earlier research, our study estimates the specific contribution of biophysical and socioeconomic factors to soil sensitivity to degradation at two time points (Early-1990s and Early-2010s) in Italy, a Mediterranean hotspot for desertification risk. A total of 34 variables associated (directly or, at least, indirectly) with different processes of soil degradation (erosion, salinization, sealing, contamination, and compaction) and climate change were considered here, delineating the predominant (underlying) cause (i.e., biophysical or socioeconomic). This set of variables represented the largest (quantitative) information available from national and international data sources, including official statistics at both the national and European scales. The contribution of the biophysical and socioeconomic dimensions to soil sensitivity to degradation was heterogeneous in Italy, with the level of soil sensitivity to biophysical factors being highest in less accessible, natural areas mostly located in hilly and mountainous districts. The highest level of soil sensitivity to socioeconomic drivers was instead observed in more accessible locations around large cities and in flat rural districts with crop intensification and low (but increasing) population density. All these factors delineated an enlarged divide in environmental quality between (i) flat and upland districts and (ii) Northern and Southern Italian regions. These findings suggest the appropriateness of policy strategies protecting soils with strong place-specific knowledge, i.e., based on permanent monitoring of local (biophysical and socioeconomic) conditions.
Introduction
Soil is a particularly sensitive environmental matrix affected by both biophysical degradation and socioeconomic transformations [1][2][3]. While human activities shape territories to expand their limits and living places, enlarged anthropogenic pressure is generating a planetary ecological crisis [4]. Undoubtedly, humans need to exploit natural resources; however, it is necessary to identify specific thresholds to avoid the activation of irreparable processes of soil degradation [5][6][7]. Therefore, defining and characterizing research dimensions such as soil sensitivity to degradation [8,9], in turn associated with the notion of multi-hazard risk of desertification [10][11][12][13], is vital to performing integrated monitoring approaches. While soils are recognized as the most ignored part of the global ecosystem, they are likely the most affected by physical and economic deterioration [14][15][16].
Soil degradation processes have been increasingly observed in both advanced economies and emerging countries. Europe, and especially the Northern Mediterranean region, is widely regarded as a hotspot of soil degradation driven by climate change and increasing human pressure [47,54]. Surpassing soil degradation thresholds may lead to an inevitable process in which soils shift from an initial (reversible) status of sensitivity to degradation to (strictly irreversible) conditions of desertification [7,55,56]. In these regards, the Mediterranean belt has been identified as one of the areas most affected by soil degradation and early desertification processes [57,58]. Among its countries, Italy is considered a critical hotspot for many issues at once, including biodiversity loss, wildfires, habitat fragmentation, soil erosion, deforestation, water shortage, and reduction in soil organic matter content [59,60]. In this country, as likely everywhere in the Mediterranean basin, abuse or misuse of soil resources, as well as the allocation of land to unsustainable uses, has also caused regional disparities in many socio-environmental variables that may further enhance the intensity of soil degradation [8]. However, it is worth highlighting that soil degradation is not distributed with the same intensity throughout the Italian territory. Earlier studies demonstrate a spatial asymmetry in the level of soil sensitivity to degradation [61,62]. Together with the rather well-known issue of "territorial disparities" (namely, "socioeconomic inequalities" among regions in the same country), soil quality and, more generally, "environmental divides" across space (considering both ecological and socioeconomic conditions) are receiving increasing attention at both regional and national planning scales [63]. In Italy, soil sensitivity to degradation is recognized to have a strong connection with multiple socioeconomic dimensions, but detailed analyses at the regional scale are rather scarce. Investigating socioeconomic and biophysical drivers of soil degradation together, at the appropriate spatial scale, will consolidate empirical knowledge about the spatial distribution of soil degradation processes and the most effective policies required to combat them.
In these regards, soil degradation is defined here as a complex process impacting crop production and resulting in the decline of environmental quality and ecosystem services [54]. The distribution of soil quality is assumed to be spatially heterogeneous, depending on geological, climatic, vegetation, and human factors [64,65]. In recent years, however, climate and land-use changes, population increase, and differential economic growth were also assumed to exacerbate soil sensitivity to degradation. Taken together, the effect of such factors can be spatially "neutral" (a stable or increasing level of sensitivity over a given area) or "asymmetric" (increasing sensitivity in areas with low or high soil quality), thus amplifying (or reducing) regional disparities in soil quality. To identify sensitive areas in Italy over a sufficiently long time interval, the present study applies a framework originally proposed by Salvati et al. [63], analyzing 34 variables that quantify the intensity of 6 processes of soil degradation and allow estimation of the overall contribution of biophysical and socioeconomic dimensions of change to the level of soil sensitivity to degradation at an appropriately detailed spatial scale (773 homogeneous agricultural districts) in Italy. Our study complements traditional, mainstream research based on the Mediterranean Desertification and Land Use (MEDALUS) philosophy for the analysis of desertification risk [62]. The empirical results of this study are considered key to developing efficient decision-making tools informing country-scale and regional-scale measures of soil conservation and reducing spatial disparities in the level of land degradation [63,66]. Relatively few studies have aimed at estimating the specific contribution of biophysical and socioeconomic dimensions of soil degradation at a sufficiently detailed spatial scale over large areas with the final objective of designing more effective policy strategies targeting the causes underlying soil, landscape, and environmental processes of change.
Study Area
Italy covers a surface area of 301,330 km², and its coastline extends for almost 7500 km. Marked variability in topography, latitude, and proximity to the sea accounts for great variation in environments, landscapes, climates, and soils [67][68][69]. For instance, average annual rainfall ranges from less than 400 mm in Sicily to 1500 mm in Northeastern Italy. The country is also socially divided into affluent regions (mainly in Northern Italy) and economically disadvantaged districts (primarily in Southern Italy) [70][71][72].
Data Sources and Variables
The present study estimates the specific contribution of biophysical and socioeconomic drivers to soil sensitivity to degradation in Italy by applying the operational framework proposed by Salvati et al. [63] and considering a total of 34 variables classified into 6 dimensions of soil degradation (Table 1). These dimensions were delineated following the EU Communication on soil conservation (231/2006) and include five processes of soil degradation (erosion, salinization, sealing, compaction, and (point and diffuse) contamination) and an additional component of climate change, supposed to (indirectly) influence soil quality [73]. Variables (and general dimensions) of soil degradation were theoretically related to the proximal cause (either biophysical or socioeconomic) in order to estimate composite indicators of the biophysical (and socioeconomic) sensitivity of soils to degradation (see Table 1). Statistical redundancy in manipulating a high number of variables with the final aim of preparing indicators of soil sensitivity was reduced using multidimensional approaches (e.g., factor analysis). The empirical results of this procedure were finally validated through fieldwork [63]. The selected variables were derived from official statistics (the Censuses of Agriculture, Population and Buildings, and Industry and Services, whose results are disseminated by the Italian National Statistical Institute (Istat)), land-use and land cover databases (maps derived from the pan-European CORINE Land Cover project; see technical details below), and additional country-specific sources such as meteorological statistics and soil cartography. All variables refer to the Early-1990s (mainly 1990 or 1991) and the Early-2010s (mainly 2010 or 2011). All data were collected at a detailed spatial resolution (e.g., municipality or census tract for socioeconomic variables, 1:250,000 (or smaller) scale for biophysical variables). Variables were selected according to their documented relationship with soil sensitivity to degradation [73][74][75][76][77][78], following previous work by Salvati et al. [63]. A comprehensive review of the rationale for using most variables as indicators of soil sensitivity to degradation was given in Salvati et al. [63], together with methodological details and a complete description of the variables considered.
Biophysical and Socioeconomic Indicators of Soil Sensitivity to Degradation
The statistical procedure for deriving composite indicators of soil sensitivity consisted of 3 steps: (i) standardization (onto a 0-1 numerical scale) of each variable through the algorithm (x_obs − x_min)/(x_max − x_min) at the selected spatial scale (773 homogeneous agricultural districts); (ii) Principal Component Analysis (PCA) of the standardized data matrix (34 variables by 773 locations); (iii) computation of two indicators delineating the specific importance of the "biophysical" and "socioeconomic" dimensions to soil sensitivity to degradation, as the weighted average of the selected variables appropriately associated with the relevant "biophysical" or "socioeconomic" dimension, as illustrated in Table 1.
To objectively derive the weight for each indicator, standardized variables were converted to a regular grid covering the investigated area with ArcGIS software (ESRI Inc., Redlands, CA, USA). A common grid size of 1 km was chosen according to the original resolution of the 34 layers considered in this study (1 km for climate, soils, and data derived from the population census; 250 m for soil erosion; 100 m for land cover, land-use, and soil salinization variables). A 15 km point grid composed of 1346 nodes was thus extracted, and the value of each variable was estimated at each grid node after transformation into a 0-1 range [79]. The PCA was then applied to the matrix composed of the 34 transformed variables. The number of significant axes (m) was chosen as the number of components with eigenvalues higher than 1. A weight was attributed to each variable by multiplying its contribution to the i-th PCA axis by the proportion of explained variance. The sum of these products over all m axes corresponds to the weight assigned to each variable. Weights were expressed as values ranging between 0 and 1. Composite "biophysical" and "socioeconomic" indicators were finally calculated as the weighted averages of the respective variables [8]. The scores of the composite indicators therefore range between 0 and 1, respectively indicating the lowest and the highest contribution to the level of soil sensitivity to degradation at a given location.
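A condensed sketch of this three-step computation is given below. It skips the grid-sampling step, runs the PCA on z-scored data so the eigenvalue-one (Kaiser) rule is meaningful (an interpretation, not a stated detail of the study), and uses synthetic data with hypothetical variable-to-dimension assignments.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.random((773, 34))  # districts x variables (synthetic stand-in)

# (i) min-max standardization of each variable onto [0, 1]
Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (ii) PCA; z-score first so the eigenvalue > 1 rule selects components
Z = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)
pca = PCA().fit(Z)
m = max(int(np.sum(pca.explained_variance_ > 1)), 1)

# Per-variable weight: sum over the m retained axes of the (absolute)
# loading times that axis' share of explained variance, rescaled to sum to 1
loadings = np.abs(pca.components_[:m])       # m x 34
share = pca.explained_variance_ratio_[:m]    # m
weights = (loadings * share[:, None]).sum(axis=0)
weights /= weights.sum()

# (iii) composite indicator as the weighted average of the variables assigned
# to one dimension (hypothetical column indices for the "biophysical" set)
bio = np.arange(17)
w = weights[bio] / weights[bio].sum()
biophysical_indicator = Xs[:, bio] @ w       # one score per district, in [0, 1]
print(biophysical_indicator.min(), biophysical_indicator.max())
```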
Results
The spatial distribution of the two composite ("biophysical" and "socioeconomic") indicators of soil sensitivity to degradation at two time points (the Early-1990s and Early-2010s) in Italy is illustrated in Figure 1 and tabulated as average scores by latitude and elevation (Table 2), regarded as relevant geographical gradients in the analysis of soil degradation and desertification risk in Italy. The indicator of soil sensitivity due to biophysical factors illustrates a north-south gradient in the country: sensitivity increases substantially from Northern Italy to Southern Italy, mainly because of the inherent drift of climate regimes (more arid moving from Northern to Southern regions). Considering a measurement scale that ranges between 0 (no sensitivity) and 1 (the highest level of soil sensitivity), scores higher than 0.5 were extensively observed along the sea coast all over Italy, both in the major islands (Sicily and Sardinia) and on the Adriatic coast in North-Eastern Italy. Relatively high scores (between 0.4 and 0.5) were also recorded in flat districts close to the sea coast in Southern and Central Italy. A large part of the Po Plain, the largest lowland in Northern Italy, was classified within the same score range. Biophysical conditions leading to soil sensitivity became worse during the study period, as documented in the 2010s map.

Socioeconomic forces responsible for a high level of soil sensitivity in Italy showed a different spatial distribution, centered on more specific territorial "hotspots" corresponding to urban areas, the surrounding peri-urban districts, and some additional flat and rural regions, irrespective of the latitude gradient. Representing the intimate geography of human pressure in Italy, the highest scores were observed in the metropolitan areas of Milan, Rome, and Naples, the three major urban agglomerations in the country. A moderate anthropogenic pressure (scores ranging between 0.2 and 0.3) was also recorded in Northern Italy, along the Adriatic Sea coast in Central and Southern Italy and, more sparsely, in districts of South-Western Italy and the two major islands. As far as the role of human pressure in shaping soil sensitivity is concerned, homogeneous regions were basically found in the Po Plain, likely the most densely populated and affluent area of Italy, hosting a mix of intensive crops and livestock, industrial activities, traditional and advanced services, and infrastructure.
Considering average scores by geographical gradient, soil sensitivity to biophysical factors increased almost linearly from Northern to Southern regions and increased over time more in Central-Southern Italy than in Northern Italy. The average score in Southern Italy was the highest observed in the country (0.44 on a 0-1 scale). Soil sensitivity was highest in lowlands (0.45 on a 0-1 scale), decreasing moderately in uplands and reaching the lowest level in mountainous districts. These findings delineate how the impacts of biophysical drivers of land degradation are, on average, more intense in flat districts of Italy, where ecological conditions are less favorable (e.g., drier climate regimes) despite a generally high level of soil fertility. The largest increase over time was found in the mountainous range, declining in both uplands and lowlands, revealing a sort of "spatial rebalance" in the distribution of biophysical drivers of land degradation all over Italy. However, statistical analysis demonstrates that the distribution of the "biophysical" indicator across districts was quite similar at the two time points investigated here (Spearman rank correlation testing similarity between the 1990s and 2010s scores: rs = 0.98, p < 0.001, n = 773).
Soil sensitivity to socioeconomic forces was less clearly distributed across the country, being moderately higher in Central Italy than in Southern and Northern Italy. A latitude gradient was instead observed for the increase over time in the same indicator, which was largest in Southern Italy (an economically disadvantaged area) and declined in both Central and Northern Italy (including more affluent districts). As expected, lowlands had the highest sensitivity score, reflecting an intense anthropogenic pressure that grew rapidly over time. Scores decreased along the elevation gradient, reaching the lowest value in mountainous districts. Population density, urban concentration, infrastructural development, and industrial growth were likely the most effective forces of change determining, on average, a higher sensitivity of lowlands compared with uplands and mountain ranges. The largest increase over time in soil sensitivity to socioeconomic forces was observed in uplands, likely reflecting socioeconomic processes through which a particularly intense anthropogenic pressure spreads from lowlands to the surrounding upland districts. The spatial distribution of the socioeconomic indicator remained mostly unaltered in both periods (Spearman rank correlation testing similarity between the 1990s and 2010s scores: rs = 0.93, p < 0.001, n = 773). Confirming earlier results, the biophysical indicator of soil sensitivity increased weakly with population density, a proxy of human pressure in Italy (rs = 0.31 and 0.26 for the Early-1990s and the Early-2010s, respectively, both at p < 0.05, n = 773). As expected, the socioeconomic indicator showed the reverse pattern, rising significantly with density (rs = 0.58 and 0.61 for the Early-1990s and the Early-2010s, respectively, both at p < 0.001, n = 773).
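For illustration, rank correlations of this kind can be reproduced in form (not in value) with SciPy on synthetic district scores; the data below are a stand-in, not the study's indicators.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
scores_1990s = rng.random(773)                          # district scores, Early-1990s
scores_2010s = scores_1990s + rng.normal(0, 0.05, 773)  # persistent pattern + noise

rho, p = spearmanr(scores_1990s, scores_2010s)
print(f"r_s = {rho:.2f}, p = {p:.1e}, n = {scores_1990s.size}")
```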
The spatial relationship between the socioeconomic and biophysical indicators of soil sensitivity to degradation in Italy is illustrated in Figure 2 (p < 0.001, n = 773). Although the studied relationship was linear, more evident heterogeneity was observed in agricultural districts showing the highest values of soil sensitivity to biophysical factors.
Taken together, these results indicate a partial substitution between biophysical and socioeconomic factors of soil sensitivity, especially in agricultural districts with less critical background conditions (left side of Figure 2). In such contexts, the intrinsically linear relationship between the two drivers suggests the importance of (formal and informal) measures impacting both ("socioeconomic" and "biophysical") dimensions of change, e.g., enhancing adaptation to climate change and mitigating human pressure at the same time. In more critical conditions (right side of Figure 2), the relationship between the two drivers was less homogeneous and predictable, suggesting the key role of local (mostly "biophysical") forces shaping particularly high levels of sensitivity to soil degradation.
The absolute ratio of socioeconomic to biophysical indicator scores provides a summary view of the intrinsic role of both drivers in soil sensitivity to degradation in Italy. Generally speaking, the importance of biophysical factors was dominant all over Italy as far as soil sensitivity is concerned. However, spatial asymmetries were observed that may indicate differentiated conditions at both regional and local levels. According to the basic statistics reported in Table 2, the ratio was higher (a stronger role of socioeconomic forces) in Central and Northern Italy, reflecting a significantly higher human pressure than in the rest of Italy and increasing largely over time. Southern Italy recorded the lowest importance of socioeconomic forces and the lowest increase over time in the country. Socioeconomic forces were particularly intense in lowlands, reducing progressively in uplands and, especially, mountainous districts, as illustrated in Figure 3.
Discussion
With our study, emphasis has been placed on a detailed scale of analysis of land degradation dynamics in the Mediterranean region, since earlier studies covering large areas were especially oriented toward calibrating monitoring approaches addressing desertification risk [79][80][81]. On the contrary, diachronically quantifying spatial disparities in soil sensitivity to degradation in both affected and non-affected regions at the country (or supra-national) scale provides robust, key information for policies aimed at contrasting soil deterioration, land degradation, and, ultimately, desertification in the Mediterranean basin [82]. The results of our study suggest that soil degradation can be effectively managed only through a thorough understanding of the factors generating territorial disparities and influencing the quality of the environment [64]. By contrast, environmental policies have more frequently concentrated on specific soil degradation processes, such as soil erosion, e.g., by introducing agro-environmental schemes and measures within the Common Agricultural Policy framework. These measures have been demonstrated to be only partially effective in the Mediterranean region [82], and a broader action framework is needed that encompasses multiple degradation processes of both biophysical and socioeconomic origin (from soil erosion to salinization, from soil sealing to contamination).
By considering six groups of factors related to soil degradation, this study showed how the affected land area increased throughout Italy during the investigated period.The highest increase was concentrated in the region's most vulnerable to soil deterioration.This process seems to amplify the "environmental divide" observed in the Early-1990s between "structurally" sensitive lands (i.e., semi-arid or dryland districts, agriculture-oriented, and with rural poverty, mainly found in Southern Italy) and less sensitive lands (relatively wet climate, mainly service-oriented regions with high per capita income, mostly located in Central and Northern Italy).This gap may trigger a downward spiral of land degradation determining soil quality loss at the regional scale (e.g., [63]).More specifically, the empirical findings of our study indicate that: (i) in districts with most critical conditions (mainly located in Southern Italy), soil sensitivity to degradation mostly depends on the synergic action of biophysical factors.In such contexts, the additional impact of socioeconomic forces was spatially heterogeneous and quite moderate; (ii) in less critical contexts, the impact of biophysical and socioeconomic forces was more balanced and both drivers contribute substantially to determine the (increasing) level of soil sensitivity to degradation.
These findings were graphically summarized in Figure 4 comparing the per cent rate of growth over time characteristic of both socioeconomic and biophysical indicators.The two rates of growth were significant and non-linearly correlated (rs = 0.13, p < 0.05, and n
Discussion

With our study, emphasis has been placed on a detailed analysis scale of land degradation dynamics in the Mediterranean region, since earlier studies covering large areas were especially oriented to calibrate monitoring approaches addressing desertification risk [79][80][81]. On the contrary, diachronically quantifying spatial disparities in soil sensitivity to degradation in both affected and non-affected regions at country (or supra-national) scale provides robust, key information for policies aimed at contrasting soil deterioration, land degradation, and, ultimately, desertification in the Mediterranean basin [82]. Results of our study suggest that soil degradation can be effectively managed only through a thorough understanding of the factors generating territorial disparities and influencing the quality of the environment [64]. By contrast, environmental policies have more frequently concentrated on specific soil degradation processes, such as soil erosion, e.g., by introducing agro-environmental schemes and measures within the Common Agricultural Policy framework. These measures have been demonstrated to be only partially effective in the Mediterranean region [82], requiring a broader action framework that encompasses multiple degradation processes with both biophysical and socioeconomic origin (from soil erosion to salinization, from soil sealing to contamination).

By considering six groups of factors related to soil degradation, this study showed how the affected land area increased throughout Italy during the investigated period. The highest increase was concentrated in the regions most vulnerable to soil deterioration. This process seems to amplify the "environmental divide" observed in the Early-1990s between "structurally" sensitive lands (i.e., semi-arid or dryland districts, agriculture-oriented, and with rural poverty, mainly found in Southern Italy) and less sensitive lands (relatively wet climate, mainly service-oriented regions with high per capita income, mostly located in Central and Northern Italy). This gap may trigger a downward spiral of land degradation determining soil quality loss at the regional scale (e.g., [63]). More specifically, the empirical findings of our study indicate that: (i) in districts with the most critical conditions (mainly located in Southern Italy), soil sensitivity to degradation mostly depends on the synergic action of biophysical factors; in such contexts, the additional impact of socioeconomic forces was spatially heterogeneous and quite moderate; (ii) in less critical contexts, the impact of biophysical and socioeconomic forces was more balanced, and both drivers contributed substantially to determining the (increasing) level of soil sensitivity to degradation.

These findings were graphically summarized in Figure 4, comparing the per cent rate of growth over time characteristic of both socioeconomic and biophysical indicators. The two rates of growth were significantly and non-linearly correlated (rs = 0.13, p < 0.05, and n = 773), evidencing a moderate inverse U-shaped trend that denotes how the highest increase in the socioeconomic indicator of soil sensitivity was observed in agricultural districts experiencing an intermediate growth of the biophysical indicator of soil sensitivity. More importantly, the largest part (nearly nine out of ten) of critical districts (with a biophysical indicator score systematically above 0.5) experienced a positive increase of both indicators during the study period. Moreover, the intrinsic growth of the socioeconomic indicator was significantly higher in highly sensitive districts than in non-sensitive areas. Such evidence delineates a future scenario with worse conditions of soil sensitivity driven by an even stronger interplay of biophysical and socioeconomic forces. While the most critical districts remain associated with a profile of soil sensitivity to degradation basically dependent on biophysical factors, intermediate and moderately critical districts were increasingly characterized by the joint impact of both forces. Differentiated strategies targeting soil sensitivity to degradation should address critical and less critical areas, distinguishing the impact of specific drivers of sensitivity over a sufficiently long time interval and taking specific actions against the most relevant factors of land degradation. Taken together, these results refine and corroborate the empirical findings presented in earlier studies [79], outlining the need for more careful identification of local "hotspots" of soil degradation in non-affected regions. Furthermore, our study confirms that the increased sensitivity to soil degradation in Italy depends on the synergic impact of biophysical and socioeconomic factors [64,65,73]. Interestingly enough, the socioeconomic drivers of soil degradation, including changes in land management, crop intensification, population growth, and urban sprawl, explain the larger increase in the degree of land sensitivity especially in the less ecologically "fragile" areas, with definite implications for policies aimed at reducing socioeconomic disparities among regions [83][84][85]. In this perspective, a renewed effort should be made to deploy multi-scalar and multi-targeted policies for the mitigation of soil degradation and, ultimately, desertification risk [78]. Fields of application should encompass agriculture, water, sustainable use of land, population dynamics, economic growth, and social change. Measures should therefore move towards an "integrated vision" of territorial processes and disparities related to soil degradation [86][87][88].
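As a concrete illustration of the Figure 4 comparison, the short Python sketch below computes the per cent rate of change of each indicator per district and splits districts by the 0.5 biophysical-score threshold. The scores are randomly generated stand-ins for the 773 Italian districts (the study data are not reproduced here), so the printed values will not match the paper's.

```python
# Minimal sketch of the Figure 4 comparison: per cent change of the two
# indicators per district, split by the 0.5 biophysical-score threshold.
# Scores are hypothetical stand-ins for the 773 district observations.
import numpy as np

rng = np.random.default_rng(1)
bio_1990s = rng.uniform(0.2, 0.9, 773)
bio_2010s = np.clip(bio_1990s + rng.normal(0.05, 0.05, 773), 0.0, 1.0)
soc_1990s = rng.uniform(0.2, 0.9, 773)
soc_2010s = np.clip(soc_1990s + rng.normal(0.05, 0.05, 773), 0.0, 1.0)

bio_change = 100 * (bio_2010s - bio_1990s) / bio_1990s   # per cent growth
soc_change = 100 * (soc_2010s - soc_1990s) / soc_1990s

critical = bio_2010s > 0.5   # "critical" districts per the Early-2010s score
print("mean socioeconomic growth, critical districts:",
      round(soc_change[critical].mean(), 1), "%")
print("mean socioeconomic growth, non-critical districts:",
      round(soc_change[~critical].mean(), 1), "%")
```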
Following the commitment of the National Action Plans developed in Northern Mediterranean countries (e.g., Italy, Spain, Portugal, Greece) [89], there is an evident need to coordinate regional, country, and supra-national strategies for soil quality and sustainable development in order to reduce the "environmental divide" between critical and non-critical areas [90][91][92]. In this regard, coordinated policies should promote the economic growth of regions together with the conservation of their socio-environmental quality within a spatially balanced framework [93]. According to the European Strategy for Soil Protection, a comprehensive approach informing synergic, multi-target measures against soil degradation at the country scale should parallel sector policies and single-target actions enforced at regional and local scales. Examples of single-target actions include measures reducing soil erosion by water or wind in affected areas, e.g., giving incentives (or economic subsidies) to farmers adopting less intensive mechanization (e.g., no-tillage and minimum tillage). Such measures should be better connected with broader policies promoting rural development, e.g., supporting farms that adopt organic cultivation protocols reducing the use of chemicals, with positive implications for soil contamination. Fine-tuning of actions addressing specific soil degradation targets will definitely improve the overall effectiveness of any national strategy of soil protection. Additionally, further studies should incorporate an explicit analysis of other dimensions of soil degradation, mainly based on geological risk. Landslides, flooding, and the intrinsic contribution of earthquakes, volcanoes, and tsunamis to soil degradation can then be more effectively identified and quantified, being relevant phenomena in Mediterranean countries. The proposed analytical framework can easily incorporate these dimensions, providing a truly "holistic" analysis of soil degradation.
Conclusions
The contribution of biophysical and socioeconomic dimensions to soil sensitivity to degradation was largely heterogeneous in Italy, with the level of soil sensitivity to biophysical factors being largest in less accessible, natural areas mostly located in hilly and mountainous districts. The highest level of soil sensitivity to socioeconomic drivers was instead observed in more accessible locations around large cities and in flat rural districts with crop intensification and low (but increasing) population density. All these factors delineate an enlarged "environmental divide" between (i) flat and upland districts and (ii) Northern and Southern Italy, suggesting the appropriateness of dedicated policy strategies protecting soils with strong place-specific knowledge, i.e., based on permanent monitoring of local (biophysical and socioeconomic) conditions. These strategies should pursue a particularly ambitious objective: improving environmental cohesion between territories more and less sensitive to soil degradation, in light of a "spatial justice" vision.
Figure 1. Spatial distribution of biophysical and socioeconomic components of soil sensitivity to degradation (upper left: biophysical indicator in the Early-1990s; upper right: Early-2010s; lower left: socioeconomic indicator in the Early-1990s; lower right: Early-2010s).
at the level of homogeneous agricultural districts. At both investigation times, the relationship was strongly positive (Early-1990s: socioeconomic indicator = 0.733 × (biophysical indicator) + 0.266, R² = 0.275; Early-2010s: socioeconomic indicator = 0.757 × (biophysical indicator) + 0.269, R² = 0.320). A non-parametric Spearman pair-wise rank correlation confirmed these results, indicating a significant correlation between the two indicators of soil sensitivity to degradation, both in the Early-1990s (rs = 0.56, p < 0.001, and n = 773) and in the Early-2010s (rs = 0.56, p < 0.001, and n = 773). Although the studied relationship was linear, a more evident heterogeneity was observed in agricultural districts showing the highest values of soil sensitivity to biophysical factors.
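For concreteness, a minimal Python sketch of how such statistics can be computed is given below. The district scores are randomly generated placeholders (the original dataset is not included here), so the printed slope, intercept, and rank correlation will not reproduce the reported values.

```python
# Minimal sketch: relating socioeconomic and biophysical indicators of soil
# sensitivity across districts. Arrays are hypothetical stand-ins for the
# 773 district-level scores analyzed in the paper.
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(0)
biophysical = rng.uniform(0.2, 0.9, size=773)                  # hypothetical
socioeconomic = 0.73 * biophysical + 0.27 + rng.normal(0, 0.1, 773)

# Ordinary least squares fit: socioeconomic = a * biophysical + b
fit = linregress(biophysical, socioeconomic)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, "
      f"R^2={fit.rvalue ** 2:.3f}")

# Non-parametric Spearman pair-wise rank correlation
rs, p = spearmanr(biophysical, socioeconomic)
print(f"rs={rs:.2f}, p={p:.3g}, n={len(biophysical)}")
```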
Figure 2. Spatial distribution of the absolute ratio of the socioeconomic-to-biophysical indicator of soil sensitivity to degradation in Italy (left: the Early-1990s; middle: the Early-2010s; right: change over time, where "<1" indicates a decreasing ratio and ">1" denotes an increasing ratio).
Figure 3. The relationship between socioeconomic and biophysical indicators of soil sensitivity to degradation in Italy at the spatial level of homogeneous agricultural districts (left: Early-1990s; right: Early-2010s).
Figure 4. Per cent rate of change over time (from the Early-1990s to the Early-2010s) in two indicators of soil sensitivity to degradation in homogeneous agricultural districts of Italy, distinguishing between critical and non-critical districts (biophysical indicator >0.5 or <0.5 in the Early-2010s).
Table 1. List and number of variables used in the empirical approach presented in our study, distinguishing the related process of soil degradation and the basic component of change (biophysical or socioeconomic).
Table 2. Average score of socioeconomic and biophysical dimensions of land sensitivity to degradation in Italy by macroregion and time point, and the absolute ratio of the socioeconomic to biophysical indicator's score. | 2021-05-08T00:04:41.760Z | 2021-02-09T00:00:00.000 | {
"year": 2021,
"sha1": "b5e2fde1ea44fc5a34790311baf5174e975d4869",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2571-8789/5/1/11/pdf?version=1613728147",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b0f18fa7a7ffc59024ccdbc488c2c5a60b6be158",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Economics"
],
"extfieldsofstudy": [
"Geography"
]
} |
13185655 | pes2o/s2orc | v3-fos-license | The associations between magnetic resonance imaging findings and low back pain: A 10-year longitudinal analysis
Purpose To conduct a 10-year longitudinal analysis of the relationship between magnetic resonance imaging (MRI) findings and low back pain (LBP). Materials and methods Ninety-one volunteers with a history of LBP, but without current LBP, were recruited between 2005 and 2006. Participants’ baseline demographics and MRI findings were recorded. All volunteers were invited for a follow-up MRI in 2016; of these, 49 volunteers (53.8%) participated in the follow-up. We enquired whether they had LBP history during the 10 years between the baseline and follow-up examinations. Sagittal T1- and T2-weighted MRI were used to assess the intervertebral space from T12/L1 to L5/S1. We evaluated the presence of disc degeneration by Pfirrmann’s grading system, disc bulging, high intensity zone (HIZ), spondylolisthesis, and any type of Modic changes in the follow-up MRIs. We compared the follow-up MRI findings with the baseline findings; the progress of each finding over the 10 years was also compared between the groups with (n = 36) and without (n = 13) LBP. Results The average age of the study participants at follow-up was 44.8 years; 25 were female and 24 were male. The average age, sex, body mass index, and smoking habits of those who did and did not participate in the follow-up study, as well as the demographic characteristics of those who did and did not have LBP history during the 10 years, were not significantly different. Compared with the group without LBP history, the group that had LBP history during the 10 years did not have a significantly increased prevalence of disc degeneration, disc bulging, and HIZ in the follow-up and baseline MRIs. Spondylolisthesis and any type of Modic changes were also not associated with LBP history during the 10 years. Conclusions Follow-up MRI findings consistent with Pfirrmann grading ≥4, disc bulging, HIZ, spondylolisthesis, and any type of Modic changes were not associated with LBP history during the 10 years between the baseline and follow-up study. The progress of these findings was also not associated with the LBP history. In addition, baseline MRI findings were not associated with LBP history during the 10 years; therefore, our data suggest that baseline MRI findings cannot predict future LBP.
Introduction
Low back pain (LBP) is one of the most common causes of health disability and has remained the leading cause of disability over the last decade [1]. A Japanese population study reported that the lifetime prevalence of LBP was >80%, as in other industrialized countries [2].
Magnetic resonance imaging (MRI) is able to identify soft tissues such as discs, nerves, and muscles, which are among the possible sources of LBP; however, in some cases, MRI findings may not necessarily identify the source of LBP. Some reports have shown that disc degeneration was associated with LBP [3,4,5], while others have demonstrated no such relationship [6,7]. It has been suggested that symptoms of chronic LBP often fluctuate, and that LBP often presents as a condition with patterns of exacerbation and remission [8]. We have reported that disc degeneration, disc bulging, and high-intensity zone (HIZ) were associated with a previous history of LBP, and that patients with these findings are prone to develop severe LBP, even if they did not have current severe LBP [9]. However, these reports were based on cross-sectional studies.
There are a few longitudinal studies regarding the relationship between baseline MRI findings and future LBP [10,11,12]; however, there is only one longitudinal study about LBP reporting both baseline and follow-up MRIs [13]. The purpose of this study was to examine the longitudinal associations between MRI findings and LBP history during the 10 years between the baseline and follow-up study. The primary aim of this study was to investigate if the follow-up MRI findings and the progress of each finding were associated with a LBP history during the 10 years. The secondary aim was to investigate if the presence of MRI findings at baseline predicted future LBP.
Study participants
As described in detail previously [9], between September 2005 and March 2006, we recruited volunteers who were also Kanto Rosai Hospital personnel to participate in the study. Ninety-one participants with a history of LBP, but without current LBP at that point, were included. We excluded participants who had prior back surgery. LBP was defined as pain localized between the costal margin and the inferior gluteal folds, as depicted in a diagram, with or without lower extremity pain in the past 1 month, according to a previous report [9,14,15]. The area was shown diagrammatically on the questionnaire, in accordance with a previous study [9,15]. LBP was defined as a history of medical consultation for LBP. Medical consultation for LBP is one of the standards for evaluating the severity of LBP; it indicates that the LBP was not mild [16]. In 2016, we invited the 91 volunteers to undergo a follow-up MRI. Of these, we invited the 41 incumbent personnel three times via our institution's intranet. We tried sending postal mail to the remaining 50 retired personnel because we did not know their e-mail addresses; however, the new postal addresses of 15 of these 50 were unknown. Eventually, 49 volunteers participated in the follow-up. We enquired whether they had had LBP during the 10 years between the baseline and follow-up study, according to the aforementioned definition of LBP. However, for those with a LBP history, we did not enquire whether the LBP was a single episode or multiple episodes. The participants' smoking history was also established. We then compared the demographic data of the participants who did and did not participate in the follow-up study, in order to validate that the participants in the follow-up study were representative of all the participants in the baseline study. This study was approved by the medical/ethics review board of Kanto Rosai Hospital. Informed consent was obtained from all individual participants included in the study.
Image assessment
MRI was performed using a 1.5T Siemens Symphony scanner (Siemens Healthcare, Erlangen, Germany). The imaging protocol included sagittal T1-weighted and T2-weighted fast spin echo sequences (repetition time: 3,500 ms; echo time: 120 ms; field of view: 300 × 320 mm), similar to our baseline study [9]. Sagittal T1- and T2-weighted images were used to assess the intervertebral space from T12/L1 to L5/S1. In the previous study, we had evaluated the intra-observer and inter-observer variability of the assessment of the lumbar MRI scans as greater than moderate for all evaluated items [9]; therefore, assessment of the follow-up MRI scans was performed by an orthopedist (J. T.), who was blinded to the participants' backgrounds. We evaluated the degree of disc degeneration, disc bulging, high-intensity zone (HIZ), spondylolisthesis, and Modic changes at each level of the spine. The degree of disc degeneration on MRI was classified into five grades, based on Pfirrmann's classification system [17]. We divided the grading into two groups for the purpose of analysis: we regarded those with grades 1-3 as having no or little disc degeneration, and those with grades 4-5 as having some degree of disc degeneration. Disc bulging was defined as displacement of the disc material, usually by more than 50% of the disc circumference, and less than 3 mm beyond the edges of the disc space in the axial plane [18]. As we were only able to evaluate the sagittal planes of the MRI scans, we defined disc bulging as posterior disc displacement of less than 3 mm and equivalent to the anterior disc displacement in the sagittal plane, although we could not evaluate more than 50% of the circumference. In the midline slice of the sagittal planes, the inferior posterior edge of the upper vertebra and the superior posterior edge of the lower vertebra were marked, the two points were connected with a line, and the distance between the top of the posterior bulging disc and this line was measured to evaluate posterior bulging. Anterior bulging was evaluated in the same way. We defined HIZ as an area of brightness or high signal intensity located in the posterior annulus on T2-weighted images, based on previous literature [19]. We defined spondylolisthesis as vertebral slips of >5 mm. These definitions of the four findings matched those in our baseline study [9]. Modic changes were divided into three types according to the definition: low intensity on T1-weighted images and high intensity on T2-weighted images was defined as Modic type 1; high intensity on both T1- and T2-weighted images as Modic type 2; and low intensity on both T1- and T2-weighted images as Modic type 3 [20]. However, in the final analysis, we only evaluated whether any type of Modic change existed or not.
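To make the sagittal-plane bulging measurement concrete, the short sketch below computes the perpendicular distance from the most posterior point of the bulging disc to the line connecting the two posterior vertebral edges. The landmark coordinates are hypothetical values for illustration, not from the study protocol.

```python
# Minimal sketch of the sagittal-plane bulging measurement described above.
# Coordinates (in mm) are hypothetical landmarks on a midline slice.
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    numerator = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return numerator / math.hypot(bx - ax, by - ay)

upper_edge = (10.0, 42.0)   # inferior posterior edge of the upper vertebra
lower_edge = (10.5, 30.0)   # superior posterior edge of the lower vertebra
bulge_top = (13.2, 36.0)    # most posterior point of the bulging disc

d = point_to_line_distance(bulge_top, upper_edge, lower_edge)
print(f"posterior displacement = {d:.1f} mm")
# Counted as bulging if < 3 mm and comparable to the anterior displacement
```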
When a participant had at least one positive finding at any disc level for an item, we regarded the participant as positive for that item as a whole. Finally, we focused on the relationship between LBP history during the 10 years and the MRI findings at follow-up, at baseline, and their progress over the 10 years. The progress of each finding was defined as a positive finding on the follow-up MRI with a negative finding on the baseline MRI.
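The per-participant aggregation and "progress" rules can be expressed compactly in code; the data structures below are hypothetical (one boolean per disc level, T12/L1 through L5/S1), used only to illustrate the definitions above.

```python
# Minimal sketch of the per-participant aggregation and "progress" rules.
# One boolean per disc level (T12/L1 ... L5/S1); values are hypothetical.
baseline = {"disc_degeneration": [False, False, True, False, False, False]}
followup = {"disc_degeneration": [False, True, True, False, False, True]}

def participant_positive(levels):
    """Positive as a whole if any of the six disc levels is positive."""
    return any(levels)

for finding in baseline:
    pos_base = participant_positive(baseline[finding])
    pos_follow = participant_positive(followup[finding])
    progress = pos_follow and not pos_base   # new finding over the 10 years
    print(finding, "baseline:", pos_base,
          "follow-up:", pos_follow, "progress:", progress)
```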
Statistical analysis
Between-group differences in baseline characteristics were evaluated using the Fisher's exact test for categorical variables and the Student's t-test for continuous variables. We compared the differences in MRI findings over 10 years between groups with and without LBP history over 10 years by using Fisher's exact test. Furthermore, we determined the odds ratios of each item using univariate analyses. Statistical analyses were performed using the JMP 11.0 software program (SAS Institute, Cary, NC, USA). A p value of <0.05 was considered to be significant.
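Although the analyses were performed in JMP, the same univariate test can be sketched with open-source tools. The 2×2 counts below are hypothetical placeholders, not values from the paper's tables.

```python
# Minimal sketch of the univariate analysis: Fisher's exact test and the
# odds ratio for one MRI finding vs. LBP history. Counts are hypothetical.
from scipy.stats import fisher_exact

#                 LBP history   no LBP history
table = [[30, 10],   # finding present
         [ 6,  3]]   # finding absent

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# p < 0.05 would indicate a significant association for this finding
```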
Results
Of the 91 participants in the baseline study, 41 participants were incumbent and 50 had retired. Of the 41 incumbent participants, 31 participated in the follow-up study, while of the 50 retired participants, 18 participated in the follow-up study. The addresses of 15 retired participants were unknown; thus, we were unable to send postal mail inquiring about their participation. Eventually, of the 91 participants in the baseline study, 49 (54%) participated in the follow-up study. The reasons for non-participation are shown (Table 1).
The average ages of those who did and did not participate in the follow-up study were 44.9 and 44.6 years, respectively; this difference was not significant.
There were also no significant differences in sex, body mass index (BMI), and smoking habit at baseline between the groups (Table 2).
Of the 49 participants in the follow-up study, 36 had a history of LBP during the 10 years between the baseline and follow-up study. Participants' average age was 44.9 ± 9.3 years; 25 were female and 24 were male; and their average body mass index was 21.8 ± 4.4 kg/m². The average ages of those who did and did not have LBP history over the 10 years were 46.4 and 44.4 years, respectively, which was not significantly different. There were also no significant differences in sex, BMI, and smoking history between the groups (Table 3).
Compared with the group without LBP history during the 10 years, the group that did develop LBP did not have a significantly increased incidence of disc degeneration in at least one spinal level in the follow-up MRIs, compared with the baseline MRIs. There were also no significant differences between the two groups with regard to the progress of disc degeneration over 10 years (Table 4). Additionally, no significant differences in disc bulging in the follow-up and baseline MRI were found between the two groups. Progress of disc bulging was also not significantly related to LBP history during the 10 years (Table 4). There were also no significant differences between the two groups in terms of HIZ in the follow-up and baseline MRI. Progress of HIZ was also not significantly related to LBP history during the 10 years (Table 4). Only two participants exhibited spondylolisthesis in both the follow-up and baseline MRI. There were no significant differences between the two groups in terms of spondylolisthesis in the follow-up and baseline MRI. Of the two participants with spondylolisthesis, one had LBP history during the 10 years, while the other did not. There was no case of progress of spondylolisthesis. Modic type 1 change was identified in only one participant in the follow-up MRI; six participants were found to have type 2, while none had type 3. There were no significant differences between the two groups with regard to Modic changes in the follow-up MRI. Univariate analysis revealed the odds ratios and 95% confidence intervals of each item; however, there were no significant differences for any item (Table 5).
Discussion
The follow-up study was performed 10 years after the baseline study, with a follow-up rate of 53.8%. Over half of the 91 participants of the baseline study had retired. The follow-up rate of the incumbents was high at 75.6%, while that of the retired group was low at 36.0%. Those who did not intend to participate in the follow-up study might not have been able to adjust their schedules, because only two days could be spared for the follow-up MRI examination. In the participants' institute, those who retire leave their new postal address with the office. However, since 10 years had passed, the postal addresses could have changed once again. Therefore, we could not contact 15 retired participants. Although the follow-up rate was relatively low, the backgrounds of those who did and did not participate in the follow-up study were not significantly different; therefore, we regarded the results of the followed-up participants as representative of the baseline participants. Both in the baseline and follow-up study, we precisely defined the region of LBP, similar to that in our previous study [9], which seemed to be important for standardizing the study protocol for LBP [14,15]. Of the followed-up participants, 73.5% had a history of LBP between the baseline and follow-up study. This was relatively similar to the lifetime prevalence of LBP of approximately 83%, which was based on a population-based survey [2]. Therefore, it can be assumed that the normal population may also have LBP history over 10 years, as in the study participants. There were no significant differences in age, sex, BMI, and smoking history between the groups with or without LBP history during the 10 years. Several previous studies [21,22] have indicated that smoking was associated with LBP; however, our results were not consistent with their findings.

Table 3. Demographic data of participants who did and did not have low back pain history during the 10 years between the baseline and follow-up study. Data are shown as mean ± standard deviation or number of participants (%). LBP: low back pain.
Pfirrmann grading indicates the degree of disc degeneration [17]. Considering that disc degeneration progresses with advancing age [4], disc degeneration was more frequent in the follow-up MRI assessment than in the baseline MRI assessment (85.7% vs. 51.0%). Seventeen participants who did not have disc degeneration in the baseline MRI demonstrated disc degeneration in the follow-up MRI. In fact, 76.9% of those who had no LBP history during the 10 years showed disc degeneration. There have been many reports on the relationship between current LBP and disc degeneration [3,4,5], although the results have been controversial. Videman et al. showed that disc height narrowing was associated with previous LBP [23], and our previous study showed that disc degeneration was associated with previous LBP [9]. Meanwhile, a systematic review showed that there were no consistent associations between MRI findings and future episodes of LBP [24]. If LBP history during the 10 years is regarded as previous LBP, our current findings were not consistent with those of our previous study, but were consistent with the systematic review.
Disc bulging was also more frequent in the follow-up MRI assessment, at 75.5% of all participants, compared to 61.2% in the baseline MRI assessment. Ten of those who did not have disc bulging in the baseline MRI showed disc bulging in the follow-up MRI. While some studies have shown that disc bulging was frequently observed in asymptomatic subjects, and concluded that there was no relationship between disc bulging and current LBP [25,26], another meta-analysis demonstrated that there is a strong relationship [5]. As for previous LBP, our previous study demonstrated a significant association between disc bulging and previous LBP [9], while Videman et al. reported no association [23]. The current results showed no relationship between LBP history during the 10 years and the prevalence of disc bulging in the follow-up MRI, in the baseline MRI, or its progress, as reported previously. Likewise, there was no relationship between LBP history and the prevalence of HIZ in the follow-up MRI, in the baseline MRI, or its progress, although the frequency of HIZ increased with aging. Aprill and Bogduk reported a strong association between the annular high signal intensity zone and a positive provocative discography finding [19], while Schellhas et al. found that HIZ was associated with current LBP [27]. Dongfeng et al. reported, from their histological study, that HIZ may be a specific signal for the inflammatory reaction of a painful disc [28]. Conversely, other studies have shown that HIZ was frequently observed in asymptomatic subjects [5,25,26]. A longitudinal MRI study showed that 26.6% of HIZ findings resolved and HIZ improved in 14% of cases, with no statistical association between HIZ changes and changes in a patient's symptoms [29]. Our results were consistent with the reports in which no association was observed.
Spondylolisthesis was considered to be one of the findings of lumbar spine instability [30]; in addition, it was assumed that those who had spondylolisthesis were inclined to have LBP [31]. However, several reports found no significant relationship between spondylolisthesis and current LBP [5,32]. In the present study, only 2 participants were found to have spondylolisthesis during the baseline MRI assessment; the same 2 participants demonstrated spondylolisthesis during the follow-up assessment, although no progressions were noted. This suggested that no significant relationship was found between spondylolisthesis and LBP history during the 10 years in our study. However, this may be attributed to the small number of spondylolisthesis cases in our sample of participants.
Several reports have found that Modic type 1 change can indicate inflammation of endplates and be related to LBP [3,33]. As Modic type 1 change was identified in only one case in the follow-up study, we analyzed the relationship between any Modic changes and LBP history during the 10 years. Our results showed that no significant relationship was found, which was inconsistent with previous reports [34,35].
Brinjikji et al. reported in their systematic review that disc degeneration, disc bulging, and Modic type 1 changes were more prevalent in adults aged 50 years or younger with back pain compared with asymptomatic individuals, because the prevalence in the asymptomatic younger population was much lower [5]. Furthermore, they also demonstrated that disc degeneration, disc bulging, and annular fissure were present in high proportions of asymptomatic individuals, and that this increased with age [36]. Although the average age at the follow-up MRI in our study was 44.8 years, which could be regarded as young, our results were consistent with the systematic review results for an aged population.
There were several limitations to the current study. First, the findings of this study were limited and could not be generalized because of the small sample size. In addition, the follow-up rate was relatively low; however, we were able to demonstrate that the backgrounds of the participants who did and did not participate in the follow-up study were not significantly different. The statistical power was also insufficient, although it exceeded 0.6 for all of disc degeneration, disc bulging, and high-intensity zones; the power for disc bulging was 0.76, which was the largest among the three. Second, we did not evaluate the Modic changes in the baseline MRI, as only sagittal T2-weighted images were analyzed at that stage; therefore, although we evaluated both T1- and T2-weighted images in the follow-up MRI, we were unable to comment on any Modic changes in the baseline MRI. Third, disc bulging and HIZ can sometimes be visible from the posterolateral sides; however, as we only analyzed sagittal images, these findings may have been underestimated. In other words, there is a possibility that pathology was missed in the zone between the planes of the posterior and anterior vertebral body cortices because only sagittal images were used. Although this limitation was noted in our previous study [9], we also analyzed only sagittal images in the follow-up study, because we preferred to keep the same definitions of these findings as in the previous study. Fourth, there was selection bias among our study participants, as they were volunteers from all types of employment at the hospital and did not represent the general population; this was also a limitation of our previous study [9]. Lastly, the lack of specific information about the frequency and severity of LBP episodes in the study cohort may be seen as a limitation of this study as well.
Conclusions
The follow-up MRI findings consistent with Pfirrmann grading ≥4, disc bulging, HIZ, spondylolisthesis, and any type of Modic changes were not associated with LBP history during the 10 years between the baseline and follow-up study. The progress of these findings was also not associated with the LBP history. In addition, baseline MRI findings were not associated with LBP history during the 10 years; therefore, our data suggest that baseline MRI findings cannot predict future LBP.
Supporting information S1 File. Supporting information. Dataset of this study. (XLSX) | 2018-04-03T02:05:49.206Z | 2017-11-15T00:00:00.000 | {
"year": 2017,
"sha1": "d4df7a168be56e5af7d68bad8819c3a6f18e9d19",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0188057&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d4df7a168be56e5af7d68bad8819c3a6f18e9d19",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225145085 | pes2o/s2orc | v3-fos-license | SUBSTANTIATION OF THE CHOICE OF METHODS OF NON-DESTRUCTIVE TESTING OF ELEMENTS OF ENERGY EQUIPMENT USING A FUZZY LOGIC APPARATUS
The paper illustrates the solution of the problem of choosing methods for quality control of the manufacturing of parts and assemblies of power equipment using a fuzzy logic apparatus. The main methods of non-destructive testing for the detection of surface and internal defects are considered, as well as the main quality indicators of metal products. The types of defects in metals and welded joints are examined, and the equipment and control means for defect detection are described. The sequence and methods of quality control by the ultrasonic, capillary, and magnetic powder testing methods are described in detail. The results of quality control of parts during production and during their operation are presented, and the revealed defects are analyzed. An example of an integrated approach to control is given, and the obtained percentages of coincidence in defect detection on the product are analyzed. Comprehensive quality control was performed by the visual, ultrasonic, capillary, and magnetic powder methods of non-destructive testing to determine the percentage of coincidences of defects. By creating a heuristic analyzer based on the Fuzzy Logic Toolbox interface of the Matlab program, an example of determining a combination of non-destructive testing methods for quality control of a steam turbine bearing liner is considered. Computer simulation according to the Mamdani algorithm is carried out, which consists of fuzzification, with determination of the ranges of input values and specification of the distribution functions for each input parameter; evaluation of the rules based on the adequacy of the model; and defuzzification, with the transition from linguistic terms to a quantitative assessment and graphical construction of the response surface. The simulation made it possible to determine the optimal combination of non-destructive testing methods, which provides the highest quality of defect detection in the steam turbine bearing liner.
Introduction
The development of modern energy, machine-building, and metallurgical production is inextricably linked with the creation and improvement of methods and means of non-destructive testing (NDT) that ensure high reliability and safety. Improving the quality and reliability of industrial products is possible under the condition of continuous improvement of production technology and continuous quality control of products. Control of product parameters in industry is characterized by considerable complexity and high cost, so the task of introducing mass control of product parameters without increasing their cost is timely and relevant. To assess the technical condition of critical facilities and units of power equipment at various stages of production and operation, NDT methods are widely used in many industries. Among the large number of NDT methods and means for objects and units of power equipment, a special place is occupied by the ultrasonic, color (luminescent), and magnetic powder methods.
Modern flaw detection relies on a wide range of serial devices and non-destructive testing means; however, each case has its own specificity (the structure and properties of the test object, its shape and design, etc.), which necessitates additional research and the development of specialized control tools. This is especially evident during flaw detection of products with a complex surface. An important problem is the representation of the shape and size of the detected defects in products, both during their manufacture and in operation.
Very often, the use of a single control method is not enough to check the quality of a product in accordance with the technical documentation for that product. In such cases, a set of NDT methods is used. In some cases, practical problems arise for which the application of known NDT methods (or techniques) is not effective. In these cases, research institutes and plants develop new special methods, tools, and techniques of non-destructive testing. Therefore, substantiating the choice of the necessary NDT methods, and of the means that implement them and ensure the identification and characterization of defects in parts and components of power equipment, is an urgent scientific and practical task.
Analysis of basic research and publications. One of the main priorities in the production of most products is the quality of the final product. The struggle to improve the quality of manufactured products is synonymous with the struggle for the consumer in a free market [1-5]. This statement is especially important in the manufacture and operation of critical products and structures. Obviously, a defective unit used in a particular mechanism or structure is far less reliable; at best, its failure will cause the mechanism to stop, and at worst it can lead to disaster. Thus, quality control is a prerequisite for metal products that serve as components and parts of critical objects.
For qualitative assessment of such objects, the methods of NDT are widely used [1,2].
According to the current standards [4, 5], nine types of NDT are distinguished, grouped on the basis of the physical phenomena used and the nature of the probing fields.
It is necessary to single out the NDT methods most widespread in engineering for comparative analysis, revealing their advantages and shortcomings for solving flaw detection problems.
Test objects differ widely in shape, material properties, and the list of defects characteristic of them. This necessitates an analysis of the main problems arising in control by the various methods. This is especially true of metal products, which occupy a leading position in terms of output. One of the main indicators of the quality of metal objects is the presence of defects.
Defects can impair the physical and mechanical properties of metals, such as strength, ductility, density, electrical conductivity, and magnetic permeability. They are often divided into overt and covert. The first are detected by visual inspection or by means of tools and methods specified in the regulatory documentation. Defects that are reliably detected by appropriate instrumental NDT methods but cannot be seen visually are also classified as overt. A covert (hidden) defect cannot be detected by the intended method and equipment. Defects are also divided into critical, whose presence makes the use of the product for its intended purpose impossible or dangerous; significant, which substantially affect the performance of the product or its durability; and insignificant, which have no such impact; as well as into correctable and non-correctable.
By origin, defects are divided into production-technological and operational. The first include metallurgical defects that occur during casting and rolling, and technological defects arising during the manufacture of products and their repair. Operational defects occur after some period of operation of the products due to fatigue of the metal and its elements, corrosion, and wear, as well as improper maintenance and operation.
According to their number and distribution in the product, defects can be single; local (cracks, shells, etc.); distributed in limited areas, such as zones of corrosion; or distributed throughout the product, for example, heterogeneity of chemical composition. They can also be external (surface and subsurface) or internal (deep).
By the nature of geometric parameters, defects can be point, linear, planar and three-dimensional.
Depending on their size, metal defects are divided into submicrodefects, microdefects, and macrodefects. Macrodefects can be small or large. Usually, their morphological and genetic characteristics are used for the classification and identification of macrodefects.
Analysis of works [1-3] shows that the most common and dangerous defects are cracks of various origins. Under the influence of residual and operating stresses, cracks can propagate at high speeds; therefore, the destruction they cause often occurs almost instantly and poses a high risk. Moreover, in comparison with extended cracks of all kinds, rounded defects are less dangerous and more static in their development.
Thus, assessing the nature of a defect (whether it is extended or rounded) provides important information for diagnosing and predicting the residual life of the test object.
The aim of the study. When choosing a method or a set of NDT methods for specific parts or assemblies, the following main factors must be taken into account: the nature (type) of the defect and its location, the sensitivity of the control method, the working conditions of the parts and the technical specifications for the product, the part material, the condition and roughness of the surface, the shape and size of the part, the control zones, the accessibility of the part and the control zone, and the control conditions [1-3].
The study used the three main methods of non-destructive testing that are applied at enterprises and plants in the manufacture of parts and assemblies of power equipment, namely capillary (color flaw detection), magnetic particle, and ultrasonic testing. These methods were chosen deliberately, since they offer a number of advantages and useful features (ease of control, speed, sensitivity, and information content). It is integrated control that makes it possible to assess the quality of the product as a whole.
Main part
Ultrasonic Testing method of product quality control. Let us consider ultrasonic testing using the example of welded joints of the rotor frame of the SGK 538/160-70UHL4 hydrogenerator, which was operated at one of the Ukrainian hydroelectric power plants (Fig. 1). The control procedure and the adjustment of the flaw detector were carried out according to DSTU, drawings, and other normative and technical documentation. To complete the task, the surface of the test object was prepared in accordance with the methodological instructions; in our case, it is a fragment of the butt welded joint of the hydrogenerator rotor rim discs to each other (Fig. 2). The flaw detector was then set up for operation. The control was carried out with a UD4-TM flaw detector and an SWB 45-2 transducer, which was initially set up according to the control procedure on a standard sample V2 (setting the depth gauge, the vibration velocity in the material, and other auxiliary parameters of the transducer), and an electronic ADD (amplitude-distance-defect) diagram was built for a defect fixation level of 2 mm. We then gradually proceeded to the control. During the inspection of the butt welded joint fragment, with a section length L = 400 mm, single discontinuities were revealed that did not exceed the fixation level, as well as lack of penetration of the weld root, which is unacceptable for all types of welded joints.
The results obtained indicate that the detected lack of penetration must be corrected. Modern standards for assessing the quality of products do not allow the use of products with these types of defects.
Penetrant Testing method of product quality control. Let us consider the penetrant testing method (color defectoscopy) using the example of the bearing shell of a steam turbine bearing (Fig. 3), which is intended for operation at a thermal power plant in Ukraine.
Fig. 3. Steam turbine bearing shell
The penetrant testing (color defectoscopy) of the steam turbine bearing shell was carried out as follows. Surface preparation was performed according to the guidelines. The ambient temperature was +14 °C, which is favorable for the control. Aerosol cans were used during the control, namely: penetrant MR68C, cleaner MR70, and developer MR88. The lighting in the room was combined. The task was to carry out color defectoscopy of the fit of the babbitt fill to the steel base of the shell.
During defectoscopy, the part was cleaned of dirt and dust, and the penetrant was applied. After 5 minutes of exposure, the penetrant was reapplied to improve the permeability of the active substance. After 15 minutes of exposure, the part was cleaned according to the control procedure and the developer was applied. After the developer had dried, linear indications were found at the boundary between the babbitt fill and the steel base of the steam turbine bearing shell. The detected linear indication is unacceptable at the contact of the babbitt casting with the steel base. This indicates that mistakes were made in the technological process during the manufacture of the product. In order to accept this part into service, it must be corrected by re-pouring and re-inspection. Modern standards for assessing the quality of products do not allow the use of products with these types of defects. This type of defect can lead to an accident during the operation of the product.
Magnetic particle Testing method of product quality control. Let us consider the magnetic particle testing method using the example of a workpiece (forging) for the manufacture of tie rods with an M160 thread. The testing methods and flaw detection materials were selected according to GOST, drawings, and other normative and technical documentation.
To complete the task, the surface of the test object was prepared; in our case, it is a blank (forging) for the manufacture of tie rods with an M160 thread. After cleaning the surface, a 20 µm layer of contrasting white paint was applied to it, since the black magnetic suspension MR76S was to be used. In our case, we used an AC-42 V magnetizing device (yoke) operating on alternating current using the applied field method. The magnetic field strength on the controlled area of the object surface was H = 2.2 kA/m. We slowly magnetized the workpiece and applied the magnetic suspension. After exposure, a linear indication with a length of L = 42 mm appeared on the surface of the test object, located at the thread-cutting zone.
Having analyzed the opening width and the length of the indication, defects of this type must be declared inadmissible. The blank is not suitable for further stud production. This type of defect can lead to an accident during the operation of the product.
Comprehensive non-destructive testing for product quality control. On the above-described products, units, and parts of power equipment, comprehensive quality control was carried out by the following main NDT methods: visual, ultrasonic, magnetic particle, and penetrant testing. This made it possible to determine the percentage of coincidences in the detection of defects; the results are shown in Table 1.
After analyzing the results obtained, we can conclude that comprehensive quality control of products is necessary, since it is impossible to single out any one of the main methods as the most important. Only an integrated approach makes it possible to judge the presence and nature of the identified defects, taking into account all the advantages and features of the NDT methods used. Using the example of a common test object (the bearing shell of a steam turbine bearing), we consider the expediency of using complex NDT.
Assessment of the quality of control using a fuzzy logic apparatus. In practice, when evaluating metal products by NDT methods and tools, it becomes necessary to find a balance between the reliability of the results and the cost of testing. The economic feasibility of NDT is one of the main indicators of competitiveness in the market for such control services. Thus, there is a need to determine a combination of NDT methods that yields the greatest reliability of control results while minimizing the cost of implementation. The solution to this problem is possible through the use of a fuzzy logic apparatus, which is widely used in solving control and evaluation problems in decision-making systems under fuzzy, blurred conditions. The subject of fuzzy logic is the study of judgments under conditions of fuzziness. Calculations and the construction of fuzzy logic diagrams can be performed using the MatLab computer program in the fuzzy logic application.
Let us consider how a fuzzy logic system works, using the example of determining a combination of NDT methods for quality control of a steam turbine bearing support shell.
Quality control of the manufacturing of the steam turbine bearing support shell can be carried out using the radiation monitoring method.
This control method gives the most reliable results (in this case) and can be taken as exemplary (100% quality).
However, this method is the most expensive and most dangerous compared to other methods. Not every enterprise has the ability to use radiation monitoring due to its complexity, danger, and the need for qualified specialists. In such cases, production workers are trying to replace radiation control with other methods. No doubt, the combined control gives the most reliable results, but how to determine the appropriate combination of methods to use?
Currently, there are many fuzzy logic algorithms. The most commonly used are the Mamdani, Tsukamoto, Sugeno and Larsen methods. The main analytical relations describing the functioning of the Mamdani algorithm are presented in [6]. Works [7-9] present the possibilities of using the Mamdani algorithm. The paper [10] presents a solution to the problem of classifying defects in metal pipes of oil and gas pipelines using the Mamdani fuzzy inference algorithm and the Sugeno fuzzy knowledge base. In [11], a method was proposed to improve the accuracy of detecting defects in metal products, and the possibility of using the apparatus of the theory of fuzzy sets to determine transducer parameters that minimize the error in identifying a defect was proved. In [12], the problem of controlling the accuracy of the parameters of the technological process of producing kefir, and of improving its quality by creating a heuristic analyzer, is considered.
To build a heuristic analyzer, we first use the Mamdani fuzzy inference algorithm. Since the enterprise considers the possibility of using three control methods (visual control is a prerequisite input and is not taken into account), the model should have three inputs and one output. We select Penetrant Testing (PT) as the first input.
The second input is Ultrasonic Testing (UT). The third input is Magnetic Particle Testing (MT).
We select the quality of control (Quality) as the output variable (Fig. 4). We define membership functions for the selected input variable, Penetrant Testing (PT). In the Range item, we set the range in which the function changes (from 10% to 40% of defects). We set the type of the membership function in the Type column: for the three membership functions, namely minimum (min), average (middle) and maximum (max), we choose the Gaussian distribution.
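As a hedged illustration of this step outside MATLAB, the short Python sketch below builds the same kind of Gaussian membership functions over the PT range given above; the centers and widths of the three terms are our own illustrative choices, since the text specifies only the range.

```python
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function, as chosen for the input variables."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Penetrant Testing (PT) universe: 10..40 % of detected defects (from the text).
pt = np.linspace(10, 40, 301)
# Centers and widths of the min / middle / max terms are illustrative guesses.
pt_min = gaussmf(pt, 10, 5)
pt_mid = gaussmf(pt, 25, 5)
pt_max = gaussmf(pt, 40, 5)
```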
Similarly, we set the membership functions for the selected input variable, Ultrasonic Testing (UT). In the Range item, we set the range in which the function changes (from 25% to 80%).
We set the membership functions for the selected input variable, Magnetic Particle Testing (MT). In the Range item, we set the range in which the function changes (from 35% to 70%).
We set the membership functions for the selected output variable, the quality of control. In the Range item, we set the range in which the function changes (from 0% to 100%). We set the type of the membership function in the Type column: for the three membership functions, namely minimum quality, average quality and high quality, we choose the triangular distribution law (trimf). We then set the rules according to which the model will operate. The rules are built on the model: IF (PT is A_i) AND (UT is B_i) AND (MT is C_i) THEN (Quality is D_i), where A_1, A_2, A_3, B_1, B_2, B_3, C_1, C_2, C_3, ... are fuzzy sets described by their membership functions.
In the "rules" window, we will compose rules that characterize how the quality of control changes depending on the selected combination of control methods (Fig. 5). Fig. 6. the windows of the values of the variables are presented: a -at 50% compliance with the input parameters; b -at 100% compliance with the input parameters. The response surfaces for a combination of control methods are shown in Fig. 7.
Conclusions
1. The study examined three main methods of non-destructive quality testing of power equipment products: the ultrasonic, penetrant (capillary) and magnetic particle NDT methods.
The results of quality control of parts at various stages of production and operation were analyzed in detail, and a reasoned comprehensive approach to performing non-destructive testing was presented.
The revealed defects only confirm the need to implement a complex of NDT methods at all stages of production and operation of products. This can save not only the funds of enterprises, but also human lives.
2. By means of the graphical user interface, it was possible to build a fuzzy logic system that solves the problem of finding the necessary combination of NDT methods to ensure high-quality control of the products in service.
3. The proposed heuristic analyzer plays the role of an advisor to the defectoscopist engineer in choosing the necessary combination of methods for carrying out comprehensive non-destructive quality control of products.
4. From the obtained results of computer simulation, the following conclusion can be drawn: when monitoring the quality of the state of the bearing shell of a steam turbine bearing, it is advisable to use a combination of two methods of non-destructive testing (ultrasonic and magnetic particle), because they give the maximum quality of control, at the level of 87%.
Fig. 6. Windows of variable values: a) at 50% compliance with the input parameters; b) at 100% compliance with the input parameters.
Fig. 7. Response surfaces: a) quality of control with the combined use of the ultrasonic and magnetic particle methods; b) quality of control with the combined use of the capillary and magnetic particle methods; c) quality of control with the combined use of the capillary and ultrasonic methods. | 2020-10-28T18:56:54.417Z | 2020-10-05T00:00:00.000 | {
"year": 2020,
"sha1": "2d69352e20f63a26a51fe53e8b2928fcdc134414",
"oa_license": null,
"oa_url": "http://ais.khpi.edu.ua/article/download/2522-9052.2020.3.21/213387",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "629a43850222cf52cc0a55d509ffdc9e7a1c6f96",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
51968870 | pes2o/s2orc | v3-fos-license | Power Minimization Based Joint Task Scheduling and Resource Allocation in Downlink C-RAN
In this paper, we consider the network power minimization problem in a downlink cloud radio access network (C-RAN), taking into account the power consumed at the baseband unit (BBU) for computation and the power consumed at the remote radio heads and fronthaul links for transmission. The power minimization problem for transmission is a fast time-scale issue whereas the power minimization problem for computation is a slow time-scale issue. Therefore, the joint network power minimization problem is a mixed time-scale problem. To tackle the time-scale challenge, we introduce large system analysis to turn the original fast time-scale problem into a slow time-scale one that only depends on the statistical channel information. In addition, we propose a bound improving branch-and-bound algorithm and a combinational algorithm to find the optimal and suboptimal solutions to the power minimization problem for computation, respectively, and propose an iterative coordinate descent algorithm to find the solutions to the power minimization problem for transmission. Finally, a distributed algorithm based on hierarchical decomposition is proposed to solve the joint network power minimization problem. In summary, this work provides a framework to investigate how execution efficiency and computing capability at BBU as well as delay constraint of tasks can affect the network power minimization problem in C-RANs.
I. INTRODUCTION
During the last decade, the evolution of information and communication technology has caused energy consumption to rise at a distressing rate, due to the dramatic increase in the number of subscribers and devices [2]. The massive connectivity also leads to tremendous carbon dioxide emissions into the environment. To reduce energy consumption, many new technologies and network architectures have been proposed for 5G green communications [3]. Cloud radio access network (C-RAN) is a new system architecture in which computational resources are aggregated into a central baseband unit (BBU) pool to implement the baseband processing of conventional base stations. The radio functions, including amplification, A/D and D/A conversion, and frequency conversion, are performed at remote radio heads (RRHs) [4].
In C-RANs, conventional base stations are replaced with low-cost RRHs deployed close to user equipment terminals (UEs), so the transmission power is significantly reduced. Furthermore, virtualization techniques can take full advantage of the aggregated computational resources to improve hardware utilization, and centralized signal processing can achieve cooperation gains [3,5].
However, along with the aforementioned advantages, new challenges also arise in C-RANs. With the dense deployment of RRHs, C-RANs consume considerable power; hence, turning idle RRHs into sleep mode and designing energy-efficient beamforming matrices are important issues [6,7]. In addition, the increased traffic places a heavy burden on the fronthaul in terms of capacity demand and power consumption [8,9]. Finally, the power consumption of baseband processing, which is determined by the allocation of computational resources, is also considerable. Overall, all three challenges have a great effect on the network power consumption in C-RANs.
The network power minimization problem has been extensively studied in [6-11]. Reference [9] jointly optimized downlink beamforming and admission control to minimize the network power. Reference [8] compared two transmission schemes, i.e., the data-sharing scheme and the compression scheme. Reference [6] proposed a joint downlink and uplink UE-RRH association and beamforming design to reduce energy consumption. Precoding design and RRH selection were optimized jointly in [7,11]. However, the aforementioned papers only considered the first and second challenges, taking into account the power consumption for transmission, i.e., the power consumption of the RRHs and the fronthaul links. Dealing with the third challenge in C-RANs is still an open issue. The computational resource aggregated in the BBU pool is provided by many physical servers. Each UE's task is first scheduled on one of these servers and then executed by a virtual machine (VM) created by the server. Therefore, task scheduling and computational resource allocation are the key to the third challenge. There exist some works on computational resource allocation [12-18]. References [12,14] used a queueing model to represent UEs' data processing and transmission behavior. Reference [15] modelled the power consumption for computation as an increasing function of the UEs' rates. Reference [13] investigated a mobile cloud computing system with computational resource allocation. One thing these works have in common is that they all considered delay constraints. With the popularity of online video and mobile gaming, as well as the development of the Internet of Things, traffic delay is considered a key metric for measuring the quality of service (QoS). However, none of these works takes task scheduling and computational resource allocation into account simultaneously.
Besides, with the exception of reference [16], in which sample averaging was adopted to approximate the time averaging of the transmission power consumption, these works do not consider the time-scale challenge.
Motivated by these facts, in this paper we aim to minimize the network power consumption under delay constraints, considering the aforementioned three challenges simultaneously.
We consider a downlink C-RAN composed of many RRHs connected to a BBU pool via fronthaul. In the BBU pool, there is a data center with a set of physical servers. Each UE has one task, which is first scheduled on a certain server, and a VM is created by the server to execute this task. Then, the output data is transmitted using the RRHs via the fronthaul to the UEs. Due to the limited fronthaul capacity, the precoded signals are first compressed and then the corresponding compression descriptions are forwarded through the fronthaul. We formulate a joint network power minimization problem of task scheduling and resource allocation, which includes not only computational resource allocation but also power allocation for transmission. Note that the power minimization problem for transmission is a fast time-scale issue because it depends on small-scale fading, which varies on the order of milliseconds. However, the power consumption problem for computation is a slow time-scale issue, since task scheduling and computation resource allocation are usually executed much slower than milliseconds [16]. Therefore, the joint network power minimization problem is a mixed time-scale issue. The main contributions of this work are summarized as follows: • We first formulate two power minimization problems, for computation and transmission, respectively. The power minimization problem for computation is a slow time-scale issue and also a mixed-integer nonlinear program, in which task scheduling and computation resource allocation are optimized jointly. The power minimization problem for transmission is a fast time-scale issue and also a nonconvex problem, in which power allocation and compression noise are optimized jointly. Then, a joint, mixed time-scale network power minimization problem combining the above two problems is also formulated.
• We translate the fast/mixed time-scale problem into a slow time-scale one. Different from reference [16], where sample averaging was used to approximate the time averaging of the power consumption of transmission, we introduce large system analysis to convert our problem into one that only depends on statistical channel information (i.e., large-scale fading) instead of small-scale fading. Therefore, the power minimization problem for transmission, as well as the joint network power minimization problem, is turned into a slow time-scale one.
• For the power minimization problem for computation, we propose a bound-improving branch-and-bound (BnB) algorithm to determine the optimal solutions. To reduce the computational complexity and time, we also propose a suboptimal combinational algorithm. For the power minimization problem for transmission, an iterative coordinate descent algorithm is proposed to determine the solutions. Finally, a distributed algorithm based on hierarchical decomposition is proposed to solve the joint network power minimization problem.
The remainder of this paper is organized as follows. Section II introduces the system model and formulates three power minimization problems. Section III proposes two algorithms, i.e., the BnB algorithm and the combinational algorithm, to solve the power minimization problem for computation. Section IV proposes an iterative coordinate descent algorithm to solve the power minimization problem for transmission. Based on the analysis in Sections III and IV, a distributed algorithm based on hierarchical decomposition is proposed to solve the joint network power minimization problem in Section V. Numerical results are presented in Section VI, and conclusions are drawn in Section VII. Notation: E(•) and ||•||_0 denote the expectation and l0-norm operators, respectively, and a ∼ CN(0, Σ) is a complex Gaussian vector with zero mean and covariance matrix Σ.
A. System Model
Consider a downlink C-RAN in which L RRHs, each with N antennas, serve K single-antenna UEs, as shown in Fig. 1. The sets of RRHs and UEs are denoted as N_R = {1, 2, ..., L} and N_U = {1, 2, ..., K}, respectively. In the BBU pool, there is a data center consisting of a set of servers N_S = {1, 2, ..., S}. The UEs' tasks are first processed at the data center before the output data is transmitted via the RRHs. It is assumed that the RRHs are connected to the BBU pool through high-speed but limited-capacity fronthaul links. In particular, the compress-and-forward scheme is adopted, such that the signals for the UEs are first compressed and then the compression descriptions are forwarded to all the RRHs.
In the following, we consider that each UE has one delay-sensitive and computation-intensive task to be executed at the data center. Similar to references [13,19], the task Φ_k of UE k is modelled as the triplet Φ_k = (D_k, τ_k, L_k), where D_k is the amount of output data after accomplishing task Φ_k, τ_k denotes the total time constraint on task execution and data transmission, and L_k represents the load of task Φ_k. Here, we define the load as the execution time when the task is executed on a VM with unit computation capability [18].
The tasks are scheduled on different servers for execution at the data center. We use binary variables x_{s,k} ∈ {0, 1} to represent the placement plan of tasks, where x_{s,k} = 1 indicates that task Φ_k is placed on server s ∈ N_S and x_{s,k} = 0 otherwise. After task Φ_k is placed on server s with computing capacity λ_s, a VM with computing capability A_{s,k} is created by server s to complete task Φ_k. Due to the diversity of servers, different servers have different executing efficiencies, and we define ς_{s,k} as the efficiency of executing task Φ_k on server s. Note that a task can be scheduled on one and only one server during a task execution period, so we have the constraint Σ_{s∈N_S} x_{s,k} = 1, ∀k ∈ N_U. Then the corresponding execution time of task Φ_k is T^(C)_{s,k} = L_k / (ς_{s,k} A_{s,k}), where A_{s,k} should meet the computing capacity constraint of server s, Σ_{k∈N_U} x_{s,k} A_{s,k} ≤ λ_s. Once a task is finished, its resulting data is encoded and forwarded to the corresponding UE.
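A small sketch of the task model and timing relations just introduced (a hedged Python illustration; the class and function names are ours):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """The triplet Phi_k = (D_k, tau_k, L_k) from the text."""
    D: float    # amount of output data after completing the task
    tau: float  # total time budget for execution plus transmission
    L: float    # load: execution time on a VM of unit computation capability

def exec_time(task: Task, efficiency: float, A: float) -> float:
    """Execution time L / (eff * A) of the task on a VM of capability A."""
    return task.L / (efficiency * A)

def min_capability(task: Task, efficiency: float) -> float:
    """Smallest VM capability that meets the deadline: A = L / (tau * eff)."""
    return task.L / (task.tau * efficiency)

# Example: a task of load 4 on a server with efficiency 0.8 and a 1 s budget.
t = Task(D=1e6, tau=1.0, L=4.0)
print(min_capability(t, 0.8), exec_time(t, 0.8, min_capability(t, 0.8)))
```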
We first define the channel matrix between all UEs and RRH l as H_l = [h_{l,1}, ..., h_{l,K}] ∈ C^{N×K} with h_{l,k} = √(d_{l,k}) h̃_{l,k}, where d_{l,k} is the large-scale fading factor caused by path loss and shadow fading between UE k and RRH l, and h̃_{l,k} ∼ CN(0, I_N) is the small-scale fading factor. We assume that the UEs are static or moving slowly, such that the large-scale fading is invariant within a task execution period.
At the BBU, maximum-ratio transmission is applied to the signal vector s = [s_1, ..., s_K]^T ∈ C^{K×1}, where s_k ∼ CN(0, 1) is the signal for UE k. Perfect channel state information is assumed to be available at the BBU; the precoded signal for RRH l is then given by x̂^R_l = V_l s, where V_l = ξ_l H_l √P_l is the precoding matrix, P_l ∈ C^{K×K} is a diagonal matrix whose elements are adjustable such that power allocation can be implemented to improve system performance, and ξ_l is the power scale factor. Due to the limited capacities of the fronthaul links, the precoded signals x̂^R_l are first independently compressed and transmitted to the RRHs via the fronthaul links. Here, we adopt point-to-point (P2P) compression for simplicity (see footnote 1). The quantized signal is expressed as x^R_l = x̂^R_l + q_l, where q_l ∼ CN(0, Ψ_l) is the quantization noise, independent of the signal x̂^R_l, with Ψ_l = E(q_l q_l†). Note that the compression processes are independent, so the quantization noise signals q_l and q_l', l ≠ l', are uncorrelated, i.e., E(q_l q_l'†) = 0. According to reference [20], the quantized signal x^R_l can be recovered at RRH l if the compression rate does not exceed the fronthaul capacity C_l for RRH l. Furthermore, the transmission power at RRH l should meet the power constraint E(||x^R_l||²) ≤ P^(MAX)_{R_l}, where P^(MAX)_{R_l} is the transmission power budget.
The received signal at UE k is given by y_k = Σ_{l∈N_R} h_{l,k}† x^R_l + n_k, where n_k is the independent received noise with zero mean and variance σ². Although the UEs do not know the exact effective channels, we assume that the average effective channels can be learned at the UEs. (Footnote 1: There are two common compress-and-forward schemes, i.e., P2P compression and Wyner-Ziv (WZ) coding. WZ coding can achieve higher performance and make better use of the limited fronthaul capacity than the P2P compression scheme. However, such benefits come at a cost in terms of computational complexity. Besides, finding an optimal decompression order is a hard problem. In this work, for simplicity, we only consider the P2P compression scheme; however, this work can be extended to the case of a WZ coding scheme applied at the fronthaul with a fixed decompression order.)
Therefore, the achievable rate R̄^U_k of UE k is computed via a standard bound based on the worst-case uncorrelated additive noise [21,22], where B is the system bandwidth. Then, the transmission time of the output data of task Φ_k is given as T^(TR)_k = D_k / R̄^U_k.
B. Power Consumption Model
In the following, we are interested in the network power which includes the powers consumed at the RRHs, the fronthaul links, and the servers.
1) RRH Power Consumption:
The power consumption at the RRHs consists of both circuit power consumption and transmit power consumption, and we adopt a linear power consumption model [7,23]: P_{R_l} = (1/υ_l) E(||x^R_l||²) + P^(Active)_{R_l}, where υ_l is the efficiency of the power amplifier and P^(Active)_{R_l} denotes the circuit power consumption required for RRH l to transmit signals. If there is no transmission at RRH l, it can be turned into sleep mode with a lower power consumption P^(Sleep)_{R_l}; thus, turning an RRH into sleep mode can save power. Using the l0-norm of the transmit power as a mode indicator, P_{R_l} can be rewritten as P_{R_l} = (1/υ_l) E(||x^R_l||²) + ||E(||x^R_l||²)||_0 (P^(Active)_{R_l} − P^(Sleep)_{R_l}) + P^(Sleep)_{R_l}.
2) Fronthaul Power Consumption: The power consumption model of the fronthaul links depends on the specific fronthaul technology. Similar to reference [8], we use a general model in which the power consumption of each fronthaul link scales with its utilized fraction of capacity, P_{F_l} = (R^F_l / C_l) P^(Max)_{F_l}, where P^(Max)_{F_l} is the power consumed by the fronthaul link for RRH l when working at full capacity. This model has been used for microwave backhaul links in [24] and can also be generalized to other backhaul technologies, such as passive optical networks, fiber-based Ethernet, etc., as mentioned in [25].
3) Server Power Consumption:
The total power consumption of server s is given by [26] P_{S_s} = P^(Static)_{S_s} + Σ_{k∈N_U} x_{s,k} P^(VM)_{s,k}, where P^(Static)_{S_s} is constant no matter whether VMs are running or not and P^(VM)_{s,k} is the power consumed by a VM. It has been observed that the total power consumption of a VM is directly related to the utilization of the system components [18,26,27]: higher utilization of the system components leads to higher power consumption [26]. In the linear weighted model, the total power consumption P^(VM)_{s,k} of VM k created by server s can be further decomposed into four components related to the CPU, disk, I/O devices, and memory [18]. Because there is a direct relation between the execution time of tasks on VMs and CPU utilization, we use the CPU power consumption to approximate the VM power consumption with a weight χ_{s,k} [18,27], which can be expressed as P^(VM)_{s,k} = χ_{s,k} A_{s,k}.
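The three power models can be summarized in a short sketch. The closed forms below follow the reconstructions given above and the numerical values in the example are purely illustrative, so this should be read as an assumption-laden sketch rather than the paper's numbered equations:

```python
def rrh_power(tx_power, active, upsilon, p_active, p_sleep):
    """Linear RRH model: amplifier term plus circuit power, with a lower
    sleep-mode consumption when the RRH does not transmit."""
    return tx_power / upsilon + (p_active if active else p_sleep)

def fronthaul_power(rate, capacity, p_max):
    """Fronthaul power scaling with the utilized fraction of link capacity."""
    return (rate / capacity) * p_max

def server_power(p_static, chi, A):
    """Server power: static part plus CPU-weighted VM powers chi_k * A_k."""
    return p_static + sum(c * a for c, a in zip(chi, A))

# Example: one RRH at 1 W over a 60%-loaded fronthaul, serving two VMs.
print(rrh_power(1.0, True, 0.35, 6.8, 4.3)
      + fronthaul_power(0.6e9, 1.0e9, 5.0)
      + server_power(10.0, [0.5, 0.8], [2.0, 1.5]))
```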
C. Problem Formulation
Since we are interested to minimize the network power consumption while meeting the delay constraint, we first formulate two power minimization problems for computation and transmission, respectively. Then, a joint network power minimization problem is also established.
1) Power Minimization Problem for Computation:
It is assumed that the time limit for finishing task Φ_k on a VM is τ^(C). Based on the above analysis, the power minimization problem for computation, in which task scheduling and computational resource allocation are optimized jointly, is formulated as P_0: min_{x,A} Σ_{s∈N_S} P_{S_s}, subject to the delay constraints T^(C)_{s,k} ≤ τ^(C), the scheduling constraints Σ_{s∈N_S} x_{s,k} = 1, and the capacity constraints Σ_{k∈N_U} x_{s,k} A_{s,k} ≤ λ_s, where x is the collection of the x_{s,k}'s, indicating the placement plan of the tasks, and A is the collection of the A_{s,k}'s, denoting the resource allocation plan of the servers.
2) Power Minimization Problem for Transmission: Similarly, we first assume that the time constraint for transmitting the output signals of the tasks is τ^(TR). Then, we formulate the power minimization problem for transmission as P_1: min_{P,Ψ} Σ_{l∈N_R} (P_{R_l} + P_{F_l}), subject to the fronthaul capacity constraints, the transmit power constraints, and the delay constraints T^(TR)_k ≤ τ^(TR), where P is the collection of the p_{l,k}'s and Ψ is the collection of the Ψ_l's. Note that P describes the power allocation scheme and Ψ indicates the quantization levels of all the RRHs.
3) Joint Network Power Minimization Problem: Finally, the joint network power minimization problem for computation and transmission, P_2, is formulated by minimizing a weighted sum of the computation and transmission powers from P_0 and P_1 subject to their constraints, with the individual delay constraints replaced by the joint constraint T^(C)_{s,k} + T^(TR)_k ≤ τ_k, where ω is a factor to balance the power consumption of computation and transmission.
We observe that P 0 is a slow time-scale problem but the joint optimization of power allocation and quantization noise in P 1 is a fast time-scale problem since it depends on small-scale fading.
Consequently, P 2 is a mixed time-scale issue that needs further attention [16]. To solve this challenge caused by the time-scale issue, authors in [16] used ensemble averaging over fast time-scale samples so that the final problem became a slow time-scale problem. Instead, we introduce large system analysis to transform P 1 and P 2 into slow time-scale problems depending only on large-scale fading [28,29]. Furthermore, we assume that the UEs are static or moving slowly such that the large-scale fading remains invariant within a task execution period.
III. POWER MINIMIZATION PROBLEM FOR COMPUTATION
For P_0 to be solvable, it is assumed that task Φ_k can be further divided into S sub-tasks φ_{s,k}, each with load l_{s,k}, placed on the S servers, respectively [18,30]. This assumption can be interpreted as a relaxation of the binary variable x_{s,k} to a real variable, i.e., x_{s,k} ∈ [0, 1]; the variable x_{s,k} is then absorbed into the newly defined variable l_{s,k} = x_{s,k} L_k. The total load of the sub-tasks should satisfy the constraint Σ_{s∈N_S} l_{s,k} = L_k. Then, a VM with computation capability a_{s,k} is created by server s for sub-task φ_{s,k}, and the associated execution time is t^(C)_{s,k} = l_{s,k} / (ς_{s,k} a_{s,k}), where a_{s,k} satisfies the constraint Σ_{k∈N_U} a_{s,k} ≤ λ_s. Accordingly, the power consumption of sub-task φ_{s,k} is given as p^(VM)_{s,k} = χ_{s,k} a_{s,k}, and the relaxed version of P_0, denoted P_{0-1}, is obtained by replacing (x, A) with (l, a), where a is the collection of the a_{s,k}'s and l is the collection of the l_{s,k}'s. Note that P_{0-1} is a relaxation of P_0, so its optimal value provides a lower bound for P_0. In what follows, we introduce the BnB algorithm to find the optimal solution to P_0 based on the solution to P_{0-1}.
A. Branch and Bound Algorithm
We define a set S = {(s, k) | ∀s ∈ N_S, ∀k ∈ N_U} that contains all the task-server pairs and introduce two further task-server pair sets, S_0 = {(s, k) | x_{s,k} = 0} and S_1 = {(s, k) | x_{s,k} = 1}. With these sets, we formulate an equivalent problem P_{0-2} of P_0, in which the assignments in S_0 and S_1 are fixed. Similarly, an equivalent problem P_{0-3} of P_{0-1} is formulated. For notational convenience, we use the parameter tuple (z, S_0, S_1) to denote a branch problem, where z is the optimal value of the objective function of the corresponding relaxed problem P_{0-3}.
The BnB algorithm for P_0 is provided in Algorithm 1. At the beginning, we define Ω as the set of branch problems and z′ as the best-known objective value. The main process of the BnB algorithm consists of two important steps: 1) Branching: in each iteration, we choose the problem that achieves the minimum lower bound, denoted as (ẑ, Ŝ_0, Ŝ_1), to branch on. The task-server pair with the highest priority, (s*, k*), is then chosen, and the problem is divided into two smaller branch problems: one with x_{s*,k*} = 0 and the other with x_{s*,k*} = 1. Accordingly, the relaxed problems of the two branches are obtained by setting l_{s*,k*} = 0 and l_{s*,k*} = L_{k*}, respectively. Evidently, the priority function f_p(s, k), which ranks the candidate task-server pairs for branching, plays an important role in reducing the complexity. 2) Bounding: the two branch problems are stored in Ω for further branching when their lower bounds are less than the current best-known value z′. If a new feasible solution is found whose value is lower than the current best-known value z′, the best-known solution is updated. Besides, the stored branches in Ω whose lower bound is larger than the value of the new best-known feasible solution can be deleted.
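The sketch below is a hedged Python illustration of this loop. Since the paper's lower bound comes from the relaxed problem P_{0-1} and its priority function f_p (whose exact form is not reproduced here), the sketch substitutes a simpler but still valid bound (each task's cheapest allowed server, ignoring the capacity coupling) and a naive branching order; the data layout (tasks as (L, tau) pairs, servers as dicts with 'chi', 'eff', 'lam') is an assumption, and only the VM power term is tracked, omitting static server power.

```python
import heapq, itertools, math

def vm_power(j, k, tasks, servers):
    # Power of the VM serving task k on server j: chi * A, with A = L/(tau*eff).
    L, tau = tasks[k]
    return servers[j]['chi'][k] * L / (tau * servers[j]['eff'][k])

def lower_bound(S0, S1, tasks, servers):
    """Valid lower bound: each unfixed task takes its cheapest allowed server,
    ignoring the capacity coupling (weaker than the bound from P0-1)."""
    fixed = {k: j for (j, k) in S1}
    lb = 0.0
    for k in range(len(tasks)):
        if k in fixed:
            lb += vm_power(fixed[k], k, tasks, servers)
            continue
        costs = [vm_power(j, k, tasks, servers)
                 for j in range(len(servers)) if (j, k) not in S0]
        if not costs:
            return math.inf                    # this branch is infeasible
        lb += min(costs)
    return lb

def feasible(S1, tasks, servers):
    # Check the computing-capacity constraint sum_k A_{s,k} <= lambda_s.
    load = [0.0] * len(servers)
    for (j, k) in S1:
        L, tau = tasks[k]
        load[j] += L / (tau * servers[j]['eff'][k])
    return all(load[j] <= servers[j]['lam'] for j in range(len(servers)))

def branch_and_bound(tasks, servers):
    tie = itertools.count()                    # tie-breaker for the heap
    heap = [(lower_bound(frozenset(), frozenset(), tasks, servers),
             next(tie), frozenset(), frozenset())]
    best, best_S1 = math.inf, None
    while heap:
        lb, _, S0, S1 = heapq.heappop(heap)
        if lb >= best:
            break                              # best-first search: done
        if len({k for (_, k) in S1}) == len(tasks):
            if feasible(S1, tasks, servers):   # full assignment reached
                best, best_S1 = lb, S1
            continue
        # Branch on the first undecided pair of an unfixed task (the paper
        # branches on the highest-priority pair according to f_p(s, k)).
        fixed_tasks = {k for (_, k) in S1}
        j, k = next((j, k) for j in range(len(servers))
                    for k in range(len(tasks))
                    if k not in fixed_tasks and (j, k) not in S0)
        for child in ((S0 | {(j, k)}, S1), (S0, S1 | {(j, k)})):
            clb = lower_bound(child[0], child[1], tasks, servers)
            if clb < best:
                heapq.heappush(heap, (clb, next(tie),
                                      frozenset(child[0]), frozenset(child[1])))
    return best, best_S1

tasks = [(4.0, 1.0), (2.0, 1.0)]               # (load, deadline) per task
servers = [{'chi': [1.0, 1.0], 'eff': [0.8, 0.5], 'lam': 6.0},
           {'chi': [0.7, 0.9], 'eff': [0.4, 0.9], 'lam': 6.0}]
print(branch_and_bound(tasks, servers))
```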
B. Suboptimal Task Scheduling Algorithm
Although the BnB algorithm can find the globally optimal solution, its convergence can be slow, especially for a large number of task-server pairs. Therefore, we introduce a suboptimal but fast task scheduling algorithm, referred to as the heuristic task scheduling algorithm, as shown in Algorithm 2. (Algorithm 1, BnB algorithm for task scheduling, in outline: find the problem (ẑ, Ŝ_0, Ŝ_1) with ẑ = min_{(z,S_0,S_1)∈Ω} z and update Ω = Ω \ {(ẑ, Ŝ_0, Ŝ_1)}; branch on the highest-priority pair; update Ω and the best-known value z′; repeat until Ω is empty.)
In Algorithm 2, the unscheduled task Φ_{k*} with the highest load is considered first, and the server s* with the highest execution efficiency for this task has priority. When the available resource at server s* is sufficient to support task Φ_{k*}, server s* allocates as little computing resource as possible to task Φ_{k*}, i.e., A_{s*,k*} = L_{k*} / (τ_{k*} ς_{s*,k*}). Otherwise, task Φ_{k*} continues searching over the remaining servers. Note that, different from the BnB algorithm, the heuristic task scheduling algorithm cannot always find a solution to P_0. However, in cases with high execution efficiency or abundant computation resources, Algorithm 2 can achieve satisfying performance with lower computational complexity and time. Therefore, we propose a combinational algorithm in which Algorithm 2 is first adopted to find a suboptimal solution; if no solution is found via Algorithm 2, we resort to Algorithm 1. We refer to this as the combinational task scheduling algorithm, as shown in Algorithm 3.
Algorithm 2 Heuristic task scheduling algorithm.
1: Initialize N'_S = N_S.
2: Find task Φ_{k*} = arg max_{k∈N_U} L_k and update N_U = N_U \ {k*}.
3: Find server s* = arg min_{s∈N'_S} χ_{s,k*}/ς_{s,k*} for task Φ_{k*} and update N'_S = N'_S \ {s*}.
4: If λ_{s*} ≥ L_{k*}/(τ_{k*} ς_{s*,k*}):
5: Update A_{s*,k*} = L_{k*}/(τ_{k*} ς_{s*,k*}) and λ_{s*} = λ_{s*} − A_{s*,k*}. Then, go to step 1.
Algorithm 3 Combinational task scheduling algorithm.
1: Run Algorithm 2.
2: If no solution is found, Algorithm 1 is adopted to find the optimal solution.
3: Otherwise, return the solution found by Algorithm 2.
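A minimal Python sketch of Algorithms 2 and 3 as reconstructed above, reusing branch_and_bound from the previous sketch; the server-ranking key chi/eff is our reading of the garbled listing and should be treated as an assumption.

```python
def heuristic_schedule(tasks, servers):
    """Greedy sketch of Algorithm 2: highest-load task first; candidate
    servers in increasing chi/eff order (our reading of the listing);
    allocate the minimum capability A = L/(tau*eff) that meets the deadline."""
    remaining = [srv['lam'] for srv in servers]
    plan = {}
    for k in sorted(range(len(tasks)), key=lambda k: -tasks[k][0]):
        L, tau = tasks[k]
        order = sorted(range(len(servers)),
                       key=lambda s: servers[s]['chi'][k] / servers[s]['eff'][k])
        for s in order:
            need = L / (tau * servers[s]['eff'][k])
            if remaining[s] >= need:
                plan[k] = (s, need)
                remaining[s] -= need
                break
        else:
            return None                      # heuristic failed to place task k
    return plan

def combinational_schedule(tasks, servers):
    """Algorithm 3 sketch: try the fast heuristic first; fall back to the
    BnB sketch above if the heuristic finds no feasible placement."""
    return heuristic_schedule(tasks, servers) or branch_and_bound(tasks, servers)
```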
IV. POWER MINIMIZATION PROBLEM FOR TRANSMISSION
In this section, we first introduce approximate results with large system analysis and then find the solution to P 1 based on these approximations. According to large system analysis, we can take care of the small-scale fading using the following lemma.
Lemma 1. Given that the h̃_{l,k}'s are i.i.d. complex Gaussian variables with independent real and imaginary parts, then, according to the law of large numbers and large-dimensional random matrix theory, as N → ∞ we obtain the asymptotic results (23)-(25), where Δ_l = log|Λ_l| + Σ_{k∈N_U} (1/(1+e_{l,k}) − log(1/(1+e_{l,k}))) − K and e_{l,k} = ξ̃_l² p_{l,k} d_{l,k} tr Λ_l^{-1}. Proof: See the Appendix.
Note that these results can achieve satisfying accuracy even when N is not too large.
Based on the approximate results, we formulate an alternative to P_1, denoted P_{1-1}. According to reference [31], the l0-norm can be approximated by a reweighted convex l1-norm relaxation of ||P||_0, in which the weights are iteratively updated, c_1 is a constant, and c_2 is a small constant that guarantees numerical stability. However, P_{1-1} is still non-convex with respect to p_{l,k} and Ψ_l because of R̄^F_l and Sig_k. To reach a stationary point of P_{1-1}, we first introduce the following lemma.
Applying (28) to the denominator of R̄^F_l, we obtain R̃^F_l, which is equivalent to R̄^F_l when Γ_l = Λ_l^{-1}. Next, we change the optimization variable from the powers p_{l,k} to a rank-one matrix W that stacks them, so that the relevant quantities become linear in W. Specifically, A_l = diag([a_l^T, ..., a_l^T]^T) ∈ C^{KL×KL}, where a_l ∈ C^{L×1} is the vector whose l-th element is 1 and 0 elsewhere; T = diag({t_k}_{k=1}^K) ∈ C^{KL×KL} is a diagonal matrix, where t_k ∈ C^{L×1} is the vector whose l-th element is ξ̃_l² d_{l,k}; and B_l ∈ C^{NL×NL} is the diagonal matrix whose main diagonal elements from ((l−1)N+1) to (lN) are 1's and 0 elsewhere. Λ_l can be rewritten in terms of the diagonal matrix built from the vectors e_k, where e_k is the vector whose l-th element is ξ̃_l² d_{l,k}/(1+e_{l,k}), and J = [0_1, ..., 0_{l−1}, I_N, 0_{l+1}, ..., 0_L] ∈ C^{N×NL}, with 0_l ∈ C^{N×N} a zero matrix. Similarly, based on the newly defined variable W, the quantities e_{l,k}, Sig_k, and Int_k can be rewritten as e_{l,k} = tr(T G_{lk} W) tr Λ_l^{-1}, Sig_k = tr(F_k D̄ F_k W), and an analogous expression for Int_k, where G_{lk} is the diagonal matrix whose (l+(k−1)L)-th main diagonal element is 1 and 0 elsewhere, D̄ = d̄ d̄^T ∈ C^{KL×KL} with d̄ = [d̄_1^T, ..., d̄_K^T]^T, and F_k ∈ C^{KL×KL} is the matrix whose main diagonal elements from ((k−1)L+1) to (kL) are 1's and 0 elsewhere. As a result, P_{1-1} can be reformulated as a semidefinite program P_{1-2}, where Γ is the set of the Γ_l's. According to Lemma 2, the optimal value of Γ_l in P_{1-2} is Γ*_l = Λ_l^{-1}, ∀l ∈ N_R. Relaxing the rank constraint rank(W) = 1, P_{1-2} is still non-convex over the three variables W, Ψ, and Γ, but it is convex with respect to any one of them and can be driven to a stationary point by the iterative coordinate descent procedure of Algorithm 4.
In Algorithm 4, at the t-th iteration, W^(t) and Ψ^(t) are optimized simultaneously, whereas Γ^(t) is updated directly as Γ^(t)_l = (Λ^(t)_l)^{-1}, ∀l ∈ N_R, according to (28). This process is repeated until convergence. Note that Algorithm 4 does not take the rank-one constraint into consideration.
After the semidefinite relaxation (SDR) of P_{1-2} is solved, the optimal solution W* should be converted into a feasible solution to P_1. Since the rank of W* may not equal one, we can extract a feasible solution to P_1 from W* with the Gaussian randomization method [34].
Algorithm 4 generates a non-increasing sequence of objective values; thus, its convergence is guaranteed [32]. The main computational complexity of Algorithm 4 lies in step 2, where the SDR of P_{1-2} is solved. The computational complexity of the SDR of P_{1-2} is O(D_SDP^3.5 log(1/ε)) with a custom-built interior-point algorithm [35], where ε > 0 is the solution accuracy and D_SDP = KL + NL is the problem dimension. Assuming that Algorithm 4 converges in T_1 iterations, the total complexity of Algorithm 4 is O(T_1 D_SDP^3.5 log(1/ε)) [8].
V. JOINT NETWORK POWER MINIMIZATION PROBLEM
In this section, we find the solution to the joint network power minimization problem P_2.
A. Problem Reformulation
We find that P_2 inherits all the difficulties of P_0 and P_1, because P_2 is a combination of the two problems coupled by the delay constraint. To avoid the nonconvexity, we first reformulate P_2 as P_{2-1}, where constraint (32d) indicates that server s does not allocate any resource to task Φ_k if task Φ_k is not assigned to server s, i.e., x_{s,k} = 0 ⟹ A_{s,k} = 0. As mentioned above, P_{2-1} is a mixed time-scale problem. Similar to P_1, we turn P_{2-1} into a slow time-scale problem based on the asymptotic results in Section IV and formulate the SDR of P_{2-1} as P_{2-2}, subject to constraints (17d), (17e), (27d), (31b), (31c), (32c), and (32d), where ϕ is the set of the ϕ_k's. In P_{2-2}, the x_{s,k}'s are relaxed to continuous variables within [0, 1], and (28) and (29) are applied to R̄^F_l and R̄^U_k, respectively. Then, P_{2-2} is convex with respect to either {x, A, W, Ψ} or {Γ, ϕ}.
Thus, we find the solution to P_{2-2} by alternatingly solving two problems, P_{2-3} and P_{2-4}, where the optimal solution to P_{2-4} is available in closed form. By applying dual decomposition to P_{2-3} [36-38], the Lagrangian function associated with P_{2-3} is obtained, where μ = [μ_1, ..., μ_K]^T ∈ C^{K×1} is composed of the Lagrangian multipliers. The corresponding Lagrangian dual function g(μ) decomposes into the two subproblems g_1(μ) and g_2(μ). Then, the master dual problem associated with P_{2-3} is formulated as P_{2-5}. Since P_{2-3} is convex and satisfies Slater's condition, the duality gap between P_{2-3} and its dual problem P_{2-5} is zero [39]. In the following, we propose a distributed algorithm based on hierarchical decomposition to find the optimal solution to P_{2-2}.
B. Distributed Algorithm Based on Hierarchical Decomposition
In Algorithm 5, the upper-level primal decomposition is conducted, which introduces P_{2-3} and P_{2-4}. Based on P_{2-3}, the lower-level dual decomposition is conducted to formulate the dual problem P_{2-5}. Therefore, the distributed algorithm involves two levels of iteration: the outer iteration drives P_{2-3} and P_{2-4} to convergence, and the inner iteration drives P_{2-5} to convergence.
In the outer iteration, the optimal solution to P_{2-4} is directly given as Γ_l = Λ_l^{-1}, ∀l ∈ N_R. However, obtaining the optimal solution to P_{2-3} relies on the dual problem P_{2-5}, whose optimal solution can be reached via the inner iteration, in which (x^(p), A^(p)) and (W^(p), Ψ^(p)) are alternatingly updated. Specifically, at the p-th inner iteration: 1) The data center's algorithm jointly optimizes task scheduling and computation resource allocation: x^(p) and A^(p) are updated by solving subproblem g_1(μ) with a BnB algorithm similar to Algorithm 1 or a combinational algorithm similar to Algorithm 3.
2) The BBU pool's algorithm jointly optimizes power allocation and compression noise: W^(p) and Ψ^(p) are updated by solving subproblem g_2(μ) with an iterative coordinate descent algorithm similar to Algorithm 4.
3) On the other hand, the price factor is adjusted by the UEs' algorithm. Since g(μ) is not differentiable with respect to μ_k, a sub-gradient approach is adopted to update the price factor μ_k at UE k, using a dynamically chosen stepsize sequence δ_μ^(p) [36,40]. Similarly, after the SDR problem of P_{2-2} is solved, we extract the p_{l,k}'s from W* with the Gaussian randomization method [34].
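A minimal sketch of the price update at the UEs. The diminishing stepsize rule below is one common choice and is our assumption; the paper only states that the stepsize sequence is dynamically chosen [36,40], and the subgradient itself would be the current delay-constraint violation per UE:

```python
import numpy as np

def price_update(mu, violation, p, delta0=0.5):
    """Projected subgradient step for the multipliers mu_k. `violation` is the
    current delay-constraint violation per UE (a subgradient of g), and the
    diminishing stepsize delta0/sqrt(p+1) is one common choice of a
    'dynamically chosen' sequence (an assumption here)."""
    step = delta0 / np.sqrt(p + 1)
    return np.maximum(mu + step * violation, 0.0)   # keep mu_k >= 0
```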
Note that there are three sub-algorithms in Algorithm 5, namely the data center's algorithm, the BBU pool's algorithm, and the UEs' algorithm; they are executed in parallel at the data center, the BBU pool, and the UEs, respectively. Therefore, the complexity is significantly reduced compared to directly optimizing P_{2-2}.
VI. NUMERICAL RESULTS
In this section, we present numerical results to show the performance of our proposed algorithms, where L RRHs and K UEs are distributed uniformly and independently in an area with a radius of 100 m. The background noise combined with outer interference is set as -150 dBm/Hz, and the path loss function is given as 128.1 + 37.6 log10(d), where d is in km. The system bandwidth is B = 20 MHz and the number of antennas per RRH is N = 5. For simplicity, we assume that each RRH has the same parameters and is subject to the same constraints.
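For reference, the large-scale fading used in the simulations can be generated as follows (a direct transcription of the stated path loss model; the conversion to a linear gain is the usual convention and is our addition):

```python
import numpy as np

def pathloss_db(d_km):
    """Path loss used in the simulations: 128.1 + 37.6*log10(d), d in km."""
    return 128.1 + 37.6 * np.log10(d_km)

def largescale_gain(d_km):
    """Linear large-scale fading factor d_{l,k} implied by the path loss."""
    return 10 ** (-pathloss_db(d_km) / 10)

print(pathloss_db(0.1), largescale_gain(0.1))   # e.g. a UE at 100 m
```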
A. Power Consumption for Computation
We first consider the case where each task has the same execution delay constraint, i.e., τ_k = τ^(C), ∀k ∈ N_U, and take the lower bound of the execution efficiency, ς_lb, as the x-axis. Fig. 3 shows the average result of 200 independent realizations. It is found that the power consumption for computation decreases as the lower bound of the execution efficiency ς_lb increases, because the demand for computation resources is reduced. We also observe that the solutions obtained by the combinational task scheduling algorithm are suboptimal and require slightly more power, but with much less runtime, compared to the BnB algorithm. Therefore, in order to save time and reduce computational complexity, it is suggested to adopt the combinational task scheduling algorithm at the cost of a small performance loss; the optimal solution can still be found via the BnB algorithm at the cost of computational time and complexity. Besides, Fig. 3(b) suggests that as the lower bound of the execution efficiency ς_lb increases, indicating that the overall execution efficiency of the servers is improved, the fraction of times where Algorithm 2 fails decreases.
B. Power Consumption for Transmission
In the following, we first validate the accuracy of the approximate results derived in Section IV by examining the inaccuracy levels of the sum rate of the UEs, R̄^U = Σ_{k∈N_U} R̄^U_k, the sum rate of the fronthaul links, R̄^F = Σ_{l∈N_R} R̄^F_l, and the sum power of the RRHs, P̄_R = Σ_{l∈N_R} P̄_{R_l}, respectively. Fig. 4 shows that these approximate results are not only close to their original expressions but become more accurate as the number of RRH antennas N increases.
For notational simplicity, we define P^(TR) = Σ_{l∈N_R} (P_{R_l} + P_{F_l}) as the total power consumption for transmission. It is also assumed that each UE has the same transmission delay constraint, i.e., τ^(TR)_k = τ^(TR), ∀k ∈ N_U. For loose delay constraints, the data-sharing transmission scheme achieves better performance. This is because the fronthaul rate of the compression scheme relies on the signal-to-quantization-noise ratio, whereas that of the data-sharing scheme depends on the UEs' rates and the number of serving RRHs, since the data-sharing scheme delivers each UE's message to all the RRHs that serve this UE via the fronthaul links. A smaller value of τ^(TR) implies a higher data-rate demand, so more RRHs are required to serve the UEs; therefore, a faster increase of the fronthaul rate occurs in the data-sharing scheme, whereas the fronthaul rate of the compression scheme increases only gradually as the data-rate demand rises.
C. Joint Network Power Minimization Problem for Computation and Transmission
Finally, we present the network power minimization with respect to the delay constraint τ_k under different transmission schemes (i.e., the compression-based scheme and the data-sharing-based scheme) and different executing efficiency cases (i.e., a low executing efficiency case with {ς_lb = 0.1, ς_ub = 0.5} and a high executing efficiency case with {ς_lb = 0.6, ς_ub = 1}) in Fig. 6. For simplicity, we assume τ_k = τ, ∀k ∈ N_U. The network power consumption decreases as the delay constraint increases, because a larger delay constraint corresponds to a lower QoS level, so less power is required to meet the QoS. Similarly, when the average executing efficiency is improved, less computational resource is required and the network power consumption is also reduced. It is also observed that the network adopting the compression-based transmission scheme shows better performance than the one adopting the data-sharing transmission scheme.
VII. CONCLUSION
In this paper, we considered the network power consumption, including the power consumed for computation and transmission, in a downlink C-RAN. The power minimization problem for computation is a slow time-scale problem, since the joint design of task scheduling and computing resource allocation is generally executed much slower than milliseconds. However, the power minimization problem for transmission is a fast time-scale problem, because the joint optimization of power allocation and compression is based on small-scale fading. Therefore, the joint network power minimization problem is a mixed time-scale problem. To overcome the time-scale challenge, we introduced approximations of the original problems based on large system analysis. The approximate results depend on statistical channel information and are independent of small-scale fading; thus, the fast/mixed time-scale problem was turned into a slow time-scale one. We proposed a BnB algorithm and a combinational algorithm to find the optimal and suboptimal solutions to the power minimization problem for computation, respectively, and introduced an iterative coordinate descent algorithm to find solutions to the power minimization problem for transmission. A distributed algorithm based on hierarchical decomposition was then proposed to solve the joint network power minimization problem. Simulation results showed that, for the power minimization problem for computation, the combinational algorithm achieves suboptimal solutions with much less computational complexity and time than the BnB algorithm. In addition, as the delay constraint increases, implying a decrease in the QoS demand, the joint network power consumption is also reduced.
APPENDIX
PROOF OF LEMMA 1
Based on the law of large numbers, results (23) and (24) can be directly obtained from the corresponding expressions in [22,41]. We then focus on result (25). We first define the function f(z) = log_2 |H_l P_l H_l† + zI_N + Ψ_l|, which tends to the numerator of R^F_l as z → 0. The derivative of f(z) with respect to z is ∂f(z)/∂z = (1/log 2) tr(H_l P_l H_l† + zI_N + Ψ_l)^{-1}.
Using random matrix theory, we have tr(H_l P_l H_l† + zI_N + Ψ_l)^{-1} ≍ tr( Σ_{k∈N_U} (p_{l,k} d_{l,k} / (1 + e_{l,k})) I_N + zI_N + Ψ_l )^{-1}. | 2018-08-10T07:42:27.000Z | 2018-08-10T00:00:00.000 | {
"year": 2018,
"sha1": "cd1b456920afa004e4dcadc6a20a4110bd4045f8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.03435",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a403e4174faa840ac12a7dfaa4dddb448b2437f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
260174359 | pes2o/s2orc | v3-fos-license | The impact of phonon-assisted tunneling on optical and quantum characteristics of a coupled two-quantum dot system
This work investigates the impact of decoherence induced by the pure dephasing and phonon-assisted tunneling mechanisms on the optical and quantum properties of two coupled quantum dots. Special attention is given to the steady-state density matrix, and a detailed analysis of the populations, coherences, optical transitions and emission spectrum is performed. Additionally, we study the influence of both phonon-decoherence mechanisms on the bipartite entanglement and the degree of mixedness of the system. In particular, our findings indicate that the phonon-assisted tunneling mechanism partially affects the coherences and quantum properties of the system when the imbalance between phonon absorption and emission is significant. Conversely, the pure dephasing mechanism does not affect the populations but strongly entangles the quantum dots and the reservoir, inducing maximally mixed states and significantly reducing the spectral splitting in the emission spectrum of the system.
Introduction
Interacting quantum dots (QDs) remain promising candidates for applications in nanotechnology [1,2]. In particular, QDs have been demonstrated to be building blocks for different realizations of devices in quantum computing [3-5]. A plethora of theoretical and experimental studies have focused on understanding how these artificial quantum systems interact among themselves [6,7] as well as how they are influenced by the environment [8,9]. For example, theoretical studies have focused on protecting quantum entanglement against the environment [10-13] as well as on novel proposals for controlling the inherent decoherence when the QDs are coupled to thermal reservoirs [14-16]. The literature states that the pure dephasing mechanism induces decoherence at short time scales due to optical and acoustic phonons at low temperatures; this is because QDs are systems that are inevitably linked to the vibrational modes of their host lattice. Few researchers have addressed the question of how the interaction mediated by phonons can affect the quantum properties of QDs; in particular, it has been demonstrated theoretically that the pure dephasing mechanism can strengthen transitions between bright and dark states [17] and be useful for quantum storage of information [18]. Other studies have revealed that it is possible to enhance quantum correlations through the coupling of QDs to the same phonon reservoir [11], and it has been demonstrated that phonon-assisted processes become important for spin relaxation in QD systems [19]. It is well known in the literature that the pure dephasing mechanism corresponds to a phonon-mediated coupling at low temperatures [20-22]. However, much less has been said about the phenomenon known as phonon-assisted tunneling, which originates from the Coulomb interaction between an excited QD molecule and its surrounding lattice. More precisely, this phenomenon occurs when the transfer of excitation involves an energy mismatch compensated by the emission or absorption of a phonon. Recently, phonon-assisted tunneling has been attracting considerable interest, and some theoretical studies have been devoted to understanding the role it plays in the optical and quantum properties of semiconductor quantum systems. Recent theoretical work has proved that phonon-assisted tunneling induces a dynamical phase transition in a double-QD system embedded in a photonic crystal cavity [23,24]. This work investigates the influence of phonon-mediated couplings, namely the pure dephasing and phonon-assisted tunneling mechanisms, on the optical and quantum properties of interacting QDs. The content of this paper is as follows: Section 2 presents the theoretical model and a brief discussion on how dissipation and decoherence are introduced. In Section 3, we present our numerical results and discussions. Finally, we summarize the main results of our work in Section 4.
Theoretical model
Our theoretical study considers a quantum system consisting of two vertically stacked QDs driven by an external laser field. The dynamics are therefore governed by direct excitons, which are considered spatially localized. Each direct exciton involves strong Coulombic coupling between an electron from the conduction band and a heavy hole from the valence band. Additionally, since it is possible to adjust the laser polarization to match the exciton eigenstates, the excitons will be considered to have fixed spin. Thus, the bare-state basis contains the state |00⟩ = |0⟩₁ ⊗ |0⟩₂, in which both QDs are empty; the state |01⟩ = |0⟩₁ ⊗ |1⟩₂, with the first QD empty and the second QD hosting a direct exciton; the state |10⟩ = |1⟩₁ ⊗ |0⟩₂, with a direct exciton in the first QD and the second QD empty; and finally the state |11⟩ = |1⟩₁ ⊗ |1⟩₂, with a direct exciton in each QD. Additionally, since the QDs are sufficiently close to each other, there is exciton transfer between the states |01⟩ and |10⟩ through the tunnel coupling term [25,26], so that the Hamiltonian describing the system is given by (ℏ = 1)
Ĥ = ω_1 σ̂_1†σ̂_1 + ω_2 σ̂_2†σ̂_2 + g (σ̂_1†σ̂_2 + σ̂_2†σ̂_1) + J σ̂_1†σ̂_1 σ̂_2†σ̂_2,
where ω_1 (ω_2) is the frequency of the first (second) QD, σ̂_1 (σ̂_2) is the lowering operator for the first (second) QD, Δ = ω_2 − ω_1 defines the detuning between both QDs, g is the coupling between the QDs, and J is the biexcitonic shift, which will be neglected hereafter for simplicity and without affecting the results of the present paper. In what follows, we incorporate the coupling to the environment through the following master equation in Lindblad form: dρ̂/dt = −i[Ĥ, ρ̂] + L_γ(σ̂_1)ρ̂ + L_P(σ̂_2†)ρ̂ + L_{γ_d}(σ̂_{z,2})ρ̂ + L_{γ_a}(σ̂_2†σ̂_1)ρ̂ + L_{γ_e}(σ̂_1†σ̂_2)ρ̂, where L_Γ(Ô)ρ̂ = (Γ/2)(2 Ô ρ̂ Ô† − Ô†Ô ρ̂ − ρ̂ Ô†Ô) is the well-known Lindblad superoperator. Notice that we have considered spontaneous emission of the first QD and incoherent pumping of the second QD through the decoherence rates γ and P, respectively. Here σ̂_{z,2} is the Pauli matrix for the second QD. In order to study the effect of phonon-assisted processes on the system, we consider two different mechanisms, namely pure dephasing of the second QD at rate γ_d and phonon-assisted tunneling through the rates γ_a and γ_e. In particular, the phonon-assisted tunneling mechanism describes the de-excitation of the first QD accompanied by the excitation of the second QD, and vice versa, together with the creation (or annihilation) of phonons that compensate the QD-QD frequency difference [23]. It is worth mentioning that an analog of this phonon-assisted tunneling mechanism has been studied within the framework of cavity quantum electrodynamics [27,28], and remarkably, it has been demonstrated that, contrary to what is widely established in the literature, the pure dephasing mechanism may be inadequate for understanding some anomalous quantum phenomena, in contrast to the phonon-mediated coupling, as has been shown more recently [29-32].
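A minimal numpy sketch of this model is given below: it builds the reconstructed Hamiltonian and the Lindblad dissipators, assembles the Liouvillian numerically, and extracts the steady state from its null eigenvector. The rate values are illustrative, and the assignment of γ_a and γ_e to the two tunneling directions follows our reading of the text:

```python
import numpy as np

# Two-QD Hilbert space: basis |g>, |e> per dot, ordered as |q1> (x) |q2>.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
I2 = np.eye(2, dtype=complex)
s1, s2 = np.kron(sm, I2), np.kron(I2, sm)
sz2 = np.kron(I2, sz)

w1, w2, g = 1.0, 1.0, 0.05                        # frequencies and coupling
gam, P, gd, ga, ge = 0.01, 0.02, 0.0, 0.0, 0.05   # illustrative rate values

H = (w1 * s1.conj().T @ s1 + w2 * s2.conj().T @ s2
     + g * (s1.conj().T @ s2 + s2.conj().T @ s1))  # biexcitonic shift neglected

def D(O, rho, rate):
    """Lindblad term (rate/2)(2 O rho O+ - O+O rho - rho O+O)."""
    Od = O.conj().T
    return 0.5 * rate * (2 * O @ rho @ Od - Od @ O @ rho - rho @ Od @ O)

def rhs(rho):
    out = -1j * (H @ rho - rho @ H)
    out += D(s1, rho, gam)                 # spontaneous emission of QD 1
    out += D(s2.conj().T, rho, P)          # incoherent pumping of QD 2
    out += D(sz2, rho, gd)                 # pure dephasing of QD 2
    out += D(s2.conj().T @ s1, rho, ga)    # phonon-assisted tunneling,
    out += D(s1.conj().T @ s2, rho, ge)    # two directions (assumed mapping)
    return out

# Assemble the 16x16 Liouvillian by applying rhs to matrix-unit basis elements.
dim = 4
L = np.zeros((dim * dim, dim * dim), dtype=complex)
for i in range(dim):
    for j in range(dim):
        E = np.zeros((dim, dim), dtype=complex)
        E[i, j] = 1.0
        L[:, i * dim + j] = rhs(E).reshape(-1)

vals, vecs = np.linalg.eig(L)
rho_ss = vecs[:, np.argmin(np.abs(vals))].reshape(dim, dim)  # null eigenvector
rho_ss = 0.5 * (rho_ss + rho_ss.conj().T)                    # Hermitize
rho_ss /= np.trace(rho_ss).real                              # unit trace
print(np.diag(rho_ss).real)   # steady-state populations of |gg>,|ge>,|eg>,|ee>
```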
Emission properties
In what follows, we are interested in analyzing the influence of the phonon-mediated coupling mechanisms on the emission spectrum of the system. For this task, we calculate the total emission spectrum S(ω) ∝ Re ∫_0^∞ dτ e^{iωτ} [⟨σ̂_1†(t+τ)σ̂_1(t)⟩ + ⟨σ̂_2†(t+τ)σ̂_2(t)⟩], where ⟨σ̂_i†(t+τ)σ̂_i(t)⟩ are the two-time correlation functions for the first and second QD, respectively. It is worth noting that calculating the two-time correlation functions requires the quantum regression formula (QRF) [33]. More precisely, the QRF states that if the single-time expectation values of a set of operators {Â_i}_{i=1}^n are governed by a system of differential equations of the form d⟨Â_i(t)⟩/dt = Σ_j M_{ij} ⟨Â_j(t)⟩, then the same set of operators also satisfies, for the two-time expectation values, d⟨Â_i(t+τ)B̂(t)⟩/dτ = Σ_j M_{ij} ⟨Â_j(t+τ)B̂(t)⟩,
with M the matrix that defines the optical transitions of the system. It is well known that, within the framework of cQED, several quantum regimes can be identified; in particular, the strong coupling regime is considered in studies of open quantum systems when g ≫ γ, P [34]. Assuming that the system operates in this quantum regime and considering the small size of the Hilbert space associated with the quantum system, the emission spectrum can be computed in closed form. More precisely, M defines the matrix associated with the one-photon optical transitions. Once its eigenvalues ℓ_i, with i = 1, ..., 4, are found, it is straightforward to identify the spectral peak positions as ω_i = Im[ℓ_i] and the corresponding linewidths as Γ_i = Re[ℓ_i]. We first consider the particular case when only the phonon-assisted tunneling mechanism is present and the pure dephasing mechanism is neglected (γ_d = 0). In this case, the spectral peak positions and linewidths take the form of Eqs. (7), with the phonon imbalance parameter between the QDs given by D = γ_e − γ_a. It is straightforward to see that for γ_a ≈ γ_e, Eqs. (7) reveal that the phonon-assisted tunneling has a trivial effect on the system: a simple broadening of the linewidths is observed, without affecting the spectral peak positions. Therefore, we address our attention to a more interesting scenario, namely D > 0, by assuming γ_e > 0, γ_a = 0 and γ_d = 0. Since we are interested in comparing both phonon-mediated mechanisms, we also consider the contrary case when only the pure dephasing mechanism is present and the phonon-assisted tunneling is neglected (γ_a = γ_e = 0). Notably, it is found that the corresponding spectral peak positions and linewidths become rather intractable analytically for any value of γ_d. Despite this, we have explored the influence of the pure dephasing mechanism numerically, as follows. Figs. 1(a)-(b) show the linewidths as a function of γ_e/g and γ_d/g, respectively. The vertical dashed line serves as a guide to the eye to see how, for a critical value γ_d/g = 2 of the pure dephasing rate, only two of the four possible linewidths contribute to the emission spectrum, as shown in panel (b) with the solid-green and dot-dashed-blue lines. The other two linewidths, which contribute to the background emission, are also shown, since they increase significantly, as shown by the solid-red and dashed-black lines. Notice that for the critical value γ_d ≈ 2g, the pure dephasing induces a larger broadening of the linewidths associated with the optical transitions than the phonon-assisted tunneling mechanism does, as shown in panel (a). In Figs. 1(c)-(d), the spectral peak positions are also shown as a function of γ_e/g and γ_d/g, respectively. The figure shows that the predicted spectral peak positions are very different for low values of these two phonon-mediated coupling rates, for example at the parameter value marked by the vertical dashed line. Notice that for the phonon-assisted tunneling mechanism there are two well-separated spectral peak positions, whereas for the pure dephasing mechanism the spectral peak positions are very close. Consequently, it can be expected that a doublet appears in the emission spectrum in the presence of phonon-assisted tunneling and a singlet in the presence of pure dephasing.
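Numerically, the peak positions and linewidths described above can be read off directly from the eigenvalues of the transition matrix; the sign convention below (decaying modes with negative real parts) is an assumption:

```python
import numpy as np

def peaks_and_linewidths(M):
    """Spectral peak positions and linewidths from the eigenvalues l_i of the
    one-photon transition matrix M (quantum regression formula): the peaks
    sit at Im[l_i], and the linewidths follow from the real parts; we take
    Gamma_i = -Re[l_i], assuming decaying modes with Re[l_i] < 0."""
    ell = np.linalg.eigvals(M)
    return ell.imag, -ell.real
```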
Interestingly, and in contrast with what was previously found, for high values of both phonon-mediated coupling rates the spectral peak positions are very close to each other, and it could be affirmed that both phonon-mediated coupling mechanisms produce a singlet in the emission spectrum. However, this must be interpreted with caution, and we therefore perform a detailed analysis of the optical transitions of the system. More precisely, we pay attention to the normalized eigenvectors of M, which are in one-to-one correspondence with the optical transitions that contribute to the emission spectrum; each eigenvector is expanded in the uncoupled basis of the two QDs with expansion coefficients α_i and β_i. As is well known within the quantum optics framework, the magnitude squared of these coefficients is recognized as the fractional composition of the eigenstate and is frequently used to understand how the optical transitions are composed in terms of the uncoupled basis. Figs. 1(e)-(f) show the fractional composition through |α_i|² and |β_i|², with i = 1, ..., 4, for T1 and T2 as a function of γ_e/g, respectively [35]. In particular, we observe that for values of γ_e < g the optical transition T1 has contributions from two bare-state transitions with equal weight, since |α_1|² = |α_2|² = 0.5; similarly, the optical transition T2 has contributions from the two complementary bare-state transitions, since |α_3|² = |α_4|² = 0.5. This implies that all states are involved in the optical transitions. Interestingly, for values of γ_e > g, the most remarkable result to emerge from the coefficients is that the system behaves as an effective three-level model involving only a ladder of three bare states, since |α_1|² = 1 and |α_3|² = 1. Knowledge of this fact can potentially be beneficial for exploring the quantum phenomenon referred to as electromagnetically induced transparency in quantum dots [36]. Similar numerical calculations of the fractional composition are shown in Figs. 1(g)-(h) for T1 and T2, but as a function of γ_d/g. Unlike phonon-assisted tunneling, the pure dephasing mechanism does not lead to an effective decoupling: in the case where γ_d exceeds g, the optical transitions T1 and T2 exhibit the characteristic relaxation process from the excited states to the ground state within each of the two quantum dots. On the other hand, in the absence of phonon-mediated coupling mechanisms (i.e., γ_e = γ_d = 0), the emission spectrum presents the well-known Rabi doublet at frequencies ω_1 ± g as a signature of the strong coupling regime, as can be seen in Fig. 1(i). If a low rate of the phonon-assisted tunneling mechanism is considered (with γ_d = 0), the emission spectrum shows a broader doublet at almost the same frequencies, as shown in Fig. 1(j). Unlike the phonon-assisted tunneling mechanism, when a low rate of the pure dephasing mechanism is considered (with γ_e = 0), the emission spectrum shows a singlet at the QD frequency, and it could be interpreted as if the system were operating in the weak coupling regime, as shown in Fig. 1(k). Notice that this apparent result contradicts the fact that the system is assumed to operate in the strong coupling regime. Similar numerical calculations are shown in Figs. 1(l)-(m), but for high values of the phonon-assisted tunneling and pure dephasing rates, respectively. Specifically, it is observed that even for large values of γ_e, a doublet remains distinguishable at frequencies ω_1 ± g.
However, when large values of the pure dephasing rate are considered, only a single emission peak is observed at the quantum dot frequency.
Quantum entanglement measurements
In what follows, we investigate the influence of both phonon-mediated coupling mechanisms on the degree of entanglement of the system. More precisely, we characterize the bipartite entanglement and the degree of mixedness of the quantum state through two well-known measures, the concurrence [37] and the von Neumann entropy [38], respectively. Figs. 2(a)-(b) show the concurrence and the von Neumann entropy at the steady state as a function of the detuning, respectively. The solid-red line represents the case in which no phonon-mediated coupling mechanisms are considered. The blue-dot-dashed line corresponds to the situation in which the pure dephasing mechanism acts on the first QD only, and the black-dashed line shows the numerical results for the situation in which only the phonon-assisted tunneling mechanism is present. From a general point of view, these two entanglement measures behave differently as a function of the detuning. In particular, the presence of the pure dephasing mechanism destroys the bipartite entanglement between the QDs over the whole range of the detuning parameter, in contrast to the phonon-assisted tunneling mechanism, which partially entangles the QDs, as shown by a maximum of approximately 0.3 in the concurrence. Notice, moreover, that the pure dephasing mechanism strongly entangles the QDs with the environment, driving the system towards a maximally mixed state with entropy S = ln(d) ≈ 1.3863 (d = 4 being the dimension of the Hilbert space). The von Neumann entropy also shows that the phonon-assisted tunneling mechanism partially entangles the QDs with the environment. Fig. 2(c) reveals that the entanglement dynamics follow the well-known pattern of entanglement sudden death and revivals, due to the unavoidable interaction with the environment, as known for an interacting two-qubit system. In particular, it is observed that the sudden death and revivals of the entanglement die out quickly under the pure dephasing mechanism, as shown by the blue-dot-dashed line. By contrast, the phonon-assisted tunneling mechanism entangles both QDs at all times, smoothing out the entanglement oscillations, as shown by the black-solid line. As can also be seen in Fig. 2(c), the pure dephasing mechanism induces nontrivial entanglement dynamics that drive the system to a highly mixed state, in contrast to the phonon-assisted tunneling, which delays the mixing of the quantum state (the same color convention as in the previous panels applies).
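As a sanity check on these two measures, the short QuTiP snippet below evaluates them on two reference states: the maximally mixed two-qubit state, whose entropy is ln(4), and a Bell state. It uses QuTiP's built-in concurrence and entropy_vn functions; the paper's own numerics may of course have been produced with different tools.

```python
from qutip import basis, qeye, tensor, ket2dm, concurrence, entropy_vn

# Maximally mixed two-qubit state: no entanglement, maximal mixedness
rho_mixed = tensor(qeye(2), qeye(2)) / 4
print(concurrence(rho_mixed), entropy_vn(rho_mixed))  # -> 0.0, ~1.3863 (= ln 4)

# Bell state: maximally entangled and pure, so C = 1 and S = 0
bell = (tensor(basis(2, 0), basis(2, 0)) + tensor(basis(2, 1), basis(2, 1))).unit()
rho_bell = ket2dm(bell)
print(concurrence(rho_bell), entropy_vn(rho_bell))    # -> 1.0, 0.0
```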
Quantum state tomography
While we have investigated the effect of the phonon-assisted tunneling and pure dephasing mechanisms on the entanglement properties of the system, it is also instructive to characterize the quantum state of the system in the long-time dynamics. For this task, we numerically calculate the population inversion probability for both QDs through the steady-state expectation values of the corresponding excited-state population operators, as shown in Fig. 3(a). In particular, the population inversion probabilities for the first and second QD as a function of the phonon-assisted tunneling rate are shown as solid-green and dashed-black lines, respectively. Similar numerical results are shown in the same figure as solid-red and blue-dot-dashed lines, but as a function of the pure dephasing rate. We observe that for very low values of either rate the occupations of the system have almost equal probability, and it can therefore be inferred that there are no effects attributable to phonons. As the phonon-assisted tunneling rate increases, the population inversion probability approaches 0.5, meaning that a substantial part of the probability resides in the excited states and in the coherences of the system. Similar results are found for the population inversion when the pure dephasing rate is varied. However, this result must be interpreted cautiously, since the pure dephasing rate needs to be large enough to bring the population inversion close to 0.5; moreover, it is well known in the context of QD systems that the pure dephasing mechanism is a process that destroys all quantum coherences. In order to gain insight into how phonon-mediated processes can transform quantum states, we characterize the density matrix at the steady state through quantum state tomography. Fig. 3(b) reveals that in the absence of both phonon-mediated coupling mechanisms the occupations have equal probability and the coherences related to the interaction are barely distinguishable. Interestingly, when only the phonon-assisted tunneling mechanism is considered (relative rate of 3), there is a redistribution of the probability in the quantum state such that the coherences associated with the interaction take a higher value, as shown in Fig. 3(c). This increase in the coherences explains the entanglement generation, and the redistribution of the occupations is strongly related to the resonant spectral crossing. In contrast, when only the pure dephasing mechanism is considered (relative rate of 3), the occupations remain unaffected and the coherences disappear, as shown in Fig. 3(d), confirming that the system reaches a maximally mixed state.
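A minimal sketch of such a steady-state calculation is given below. The Hamiltonian and the Lindblad operators are assumptions on our part (in particular the incoherent pump on the first QD and the collapse-operator form chosen for the phonon-assisted tunneling), not the paper's exact model; the point is only to show how steady-state occupations and coherences can be extracted.

```python
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

g = 1.0      # QD-QD coupling, taken as the reference scale (assumed)
gamma = 0.1  # radiative decay of each QD (assumed)
P = 0.05     # incoherent pump on QD 1 (assumption: some excitation source is needed)
lam = 3.0    # phonon-assisted tunneling rate (assumed Lindblad form below)
gphi = 0.5   # pure-dephasing rate on QD 1 (assumed)

sm1 = tensor(destroy(2), qeye(2))  # lowering operator of QD 1
sm2 = tensor(qeye(2), destroy(2))  # lowering operator of QD 2

H = g * (sm1.dag() * sm2 + sm2.dag() * sm1)  # resonant, coherently coupled QDs

c_ops = [np.sqrt(gamma) * sm1,
         np.sqrt(gamma) * sm2,
         np.sqrt(P) * sm1.dag(),
         np.sqrt(lam) * sm1.dag() * sm2,   # excitation transfer QD2 -> QD1
         np.sqrt(gphi) * sm1.dag() * sm1]  # pure dephasing on QD 1

rho_ss = steadystate(H, c_ops)
print("n1 =", expect(sm1.dag() * sm1, rho_ss))  # population inversion, QD 1
print("n2 =", expect(sm2.dag() * sm2, rho_ss))  # population inversion, QD 2
print(rho_ss.full().round(3))  # occupations (diagonal) and coherences (off-diagonal)
```

Printing the full density matrix is the numerical counterpart of the tomography panels: the diagonal shows the redistribution of occupations and the off-diagonal entries show whether coherences build up or are washed out.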
Fig. 3. Panel (a) shows the population inversion as a function of the phonon-assisted tunneling rate for the first and second QD (solid-green and dashed-black lines, respectively); similar numerical calculations as a function of the pure dephasing rate are shown as solid-red and blue-dot-dashed lines. Panels (b)-(d) show the numerical results for quantum state tomography of the density matrix with both phonon rates set to zero, with only phonon-assisted tunneling (relative rate of 3), and with only pure dephasing (relative rate of 3), respectively. The rest of the parameter values are as in Fig. 1.
Conclusions
In this paper, two interacting quantum dots are studied within the framework of the Lindblad master equation. In particular, the effects of the pure dephasing and phonon-assisted tunneling mechanisms on the emission spectrum and on the quantum properties of the system are discussed. We have found that the pure dephasing mechanism contributes significantly to the background emission and that a singlet appears in the emission spectrum already at a low dephasing rate. On the other hand, we have demonstrated that at a low phonon-assisted tunneling rate [39] there is a doublet in the emission spectrum, and that this phonon-mediated coupling mechanism plays an essential role in generating entanglement between the two QDs. Our work leads us to conclude that the emission spectrum of the system is not an adequate observable for characterizing the quantum regime of the system, since the strong coupling regime can appear disguised as a spectral crossing when the phonon-assisted tunneling rate becomes large enough.
Funding information
This research was funded through the national PhD program by COLCIENCIAS (Grant No. 727) and by the Centro de Excelencia en Tecnologías Cuánticas y sus Aplicaciones a Metrología (Grant HERMES No. 57522).
CRediT authorship contribution statement
Santiago Echeverri-Arteaga: Analyzed and interpreted the data; Contributed analysis tools; Wrote the paper. Herbert Vinck-Posada: Conceived and designed the analysis; Analyzed and interpreted the data. Edgar A. Gómez: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools; Wrote the paper.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article.
"year": 2023,
"sha1": "a0154f3762296faf256bd0d65a75a7c185d6b148",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e18451",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c14250a7c978d8b330634ad5d5e593838579f91",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Lix@C60: Calculations of the Encapsulation Energetics and Thermodynamics
Li@C60 and Li@C70 can be prepared, and thus calculations on them at higher levels of theory are also of interest. In this report, computations are carried out on Li@C60, Li2@C60 and Li3@C60 with the B3LYP density-functional theory treatment in the standard 3-21G and 6-31G* basis sets. The computed energetics suggests that Lix@C60 species may be produced for a few small values of x if the Li pressure is enhanced sufficiently. In order to check this suggestion, a deeper computational evaluation of the encapsulation thermodynamics is carried out.
Calculations
The geometry optimizations were carried out with Becke's three-parameter functional [25] combined with the non-local Lee-Yang-Parr correlation functional [26] (B3LYP) in the standard 3-21G basis set (B3LYP/3-21G). The geometry optimizations were performed with the analytically constructed energy gradient as implemented in the Gaussian program package [27].
In the optimized B3LYP/3-21G geometries, the harmonic vibrational analysis was carried out with the analytical force-constant matrix. In the same optimized geometries, higher-level single-point energy calculations were also performed using the standard 6-31G* basis set, i.e., at the B3LYP/6-31G* level (or, more precisely, B3LYP/6-31G*//B3LYP/3-21G). As Li@C60 and Li3@C60 are radicals, their computations were carried out using the unrestricted B3LYP treatment for open-shell systems (UB3LYP). The ultrafine integration grid was used for the DFT numerical integrations throughout.
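As an illustration of this optimize/frequency/single-point workflow on a system small enough to run quickly, the sketch below treats free Li2. It uses the open-source Psi4 package as a stand-in for the Gaussian code employed here, so the keywords are Psi4's, and the closed-shell reference applies only to this even-electron example (the odd-electron endohedrals would need the unrestricted "uks" reference).

```python
import psi4

# Free Li2 as a small, runnable stand-in for the much larger Li_x@C60 jobs;
# starting bond length near the expected B3LYP/3-21G minimum of ~2.7 A.
li2 = psi4.geometry("""
0 1
Li 0.0 0.0 0.0
Li 0.0 0.0 2.70
""")

psi4.set_options({"reference": "rks"})  # "uks" would be needed for the radicals

e_opt = psi4.optimize("b3lyp/3-21g", molecule=li2)   # geometry optimization
e_frq = psi4.frequency("b3lyp/3-21g", molecule=li2)  # harmonic analysis (~349 cm^-1 expected)
e_sp = psi4.energy("b3lyp/6-31g*", molecule=li2)     # single point in the larger basis
print(e_opt, e_sp)
```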
Results and discussion
The UB3LYP approach is preferred here over the restricted open-shell one (ROB3LYP), as the latter frequently exhibits slow SCF convergence or even divergence. Although the unrestricted Hartree-Fock (UHF) approach can be faster, it can also be affected by the so-called spin contamination [28]; indeed, this factor was an issue in our previous UHF SCF calculations [15], as the UHF/3-21G spin contamination in the expectation value of the S2 term (where S stands for the total spin) turned out to be higher than the recommended threshold [28]. As long as the deviations from the theoretical value are smaller than 10%, the unrestricted results are considered applicable [28]. This requirement is well satisfied for the Li@C60 and Li3@C60 species. Fig. 1 shows the computed structures of Li@C60, Li2@C60 and Li3@C60. In all three cases, the Li atoms in the optimized structures are shifted from the cage center towards its wall. In particular, in the Li@C60 species the shortest computed Li-C distance is 2.26 Å, while in a central location (optimized as a stationary point) the shortest Li-C distance at the UB3LYP/3-21G level is 3.49 Å. As for the energetics of the centric and off-centric structures, the central location is placed some 9.9 kcal/mol higher at the UB3LYP/3-21G level. This energy separation is further increased in the UB3LYP/6-31G*//UB3LYP/3-21G treatment, namely to 15.0 kcal/mol. The metal atom in the off-centric Li@C60 species is localized above a C-C bond shared by a pentagon and a hexagon (though an alternative description as lying above a hexagon would also be possible). The system, however, does not exhibit any symmetry. The distortion of the cage can be seen from the rotational constants: the icosahedral C60 cage at the B3LYP/3-21G level has one uniform rotational constant of 0.0833 GHz, whereas if the metal atom is removed from the UB3LYP/3-21G optimized Li@C60 species, the remaining distorted C60 cage has the rotational constants 0.0832, 0.0830 and 0.0829 GHz. The distorted cage is higher in energy than the icosahedral cage by about 2.5 kcal/mol at the B3LYP/3-21G level. In the Li2@C60 case (approximately described as location above a hexagon), the shortest Li-C distance is even a bit shorter, 2.14 Å. Interestingly enough, Li2@C60 exhibits a center of symmetry. The Li-Li separation is computed as 3.29 Å, i.e., substantially longer than the value observed in the free (neutral) Li2 molecule (2.67 Å, cf. refs. [29-31]), obviously an effect of the positive charges on the encapsulated atoms. In the Li3@C60 species (approximately described as localization above C-C bonds shared by pentagons and hexagons), the shortest computed Li-C contact is further reduced to 2.05 Å. The Li-Li distances in the encapsulated Li3 cluster are not equal: they are computed as 2.70, 2.76 and 2.84 Å. Incidentally, while the observed Li-Li distance for free Li2 is 2.67 Å [29-31], the B3LYP/3-21G computed value is 2.725 Å (changing to 2.723 Å at the B3LYP/6-31G* level). Similarly, observed values for the free Li3 cluster are also available [32,33], actually for two triangular forms: opened (2.73, 2.73, 3.21 Å) and closed (3.05, 3.05, 2.58 Å). The UB3LYP/3-21G computed distances in the free opened Li3 cluster are 2.78, 2.78 and 3.30 Å. Hence, there is good theory-experiment agreement.
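The rotational constants quoted above follow from the principal moments of inertia via B = h/(8π²I). The short sketch below implements this relation and, as a check, evaluates it for free Li2 at the computed 2.725 Å bond length; the C60 cage coordinates themselves are not reproduced here, so the cage values quoted in the text are not recomputed.

```python
import numpy as np

H_PLANCK = 6.62607015e-34  # J s
AMU = 1.66053906660e-27    # kg
ANG = 1.0e-10              # m

def rotational_constants_ghz(masses_amu, coords_ang):
    """Principal rotational constants B = h / (8 pi^2 I), in GHz."""
    m = np.asarray(masses_amu, dtype=float) * AMU
    r = np.asarray(coords_ang, dtype=float) * ANG
    r = r - np.average(r, axis=0, weights=m)  # shift centre of mass to the origin
    inertia = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        inertia += mi * (ri @ ri * np.eye(3) - np.outer(ri, ri))
    moments = np.linalg.eigvalsh(inertia)     # principal moments (kg m^2)
    moments = moments[moments > 1e-52]        # drop the null moment of a linear molecule
    return H_PLANCK / (8 * np.pi**2 * moments) / 1e9

# Demo on free Li2 at the computed B3LYP/3-21G bond length
print(rotational_constants_ghz([7.016, 7.016],
                               [[0, 0, 0], [0, 0, 2.725]]))  # two equal constants, ~19 GHz
```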
The B3LYP/3-21G formal Mulliken charge (the largest value) found on the Li atoms somewhat decreases along the Li@C60, Li2@C60 and Li3@C60 series, with the values of 1.16, 1.10 and 0.86, respectively (the charges are somewhat reduced at the B3LYP/6-31G* level). Nevertheless, the total charge transferred to the cage increases along the series: 1.16, 2.21 and 2.46.
The vibrational analysis makes it possible to test whether a true local energy minimum was found. All the computed frequencies for the structures in Fig. 1 are indeed real and none imaginary (though we could also locate some saddle points not discussed here). The lowest computed vibrational frequencies mostly represent motions of the Li atoms. Obviously, owing to the symmetry reductions upon encapsulation, the symmetry selection rules no longer operate in the way that simplifies the C60 vibrational spectra [34]. Hence, the vibrational spectra of Lix@C60 must be considerably more complex than those of the icosahedral (empty) C60 cage, with just four bands in its IR spectrum [34]. This increased spectral complexity has indeed been observed [13,14]. Incidentally, the observed harmonic frequency for free Li2 is 351 cm−1 [29-31], while the computed B3LYP/3-21G term is 349 cm−1 (and the B3LYP/6-31G* value 342 cm−1). For the endohedrals, larger-basis frequency calculations are not yet common.
There is a general stability problem related to fullerenes and metallofullerenes: either the absolute stability of the species or the relative stabilities of clusters with different stoichiometries. One can consider an overall stoichiometry of metallofullerene formation:

x Y + Cn → Yx@Cn. (1)

The encapsulation process is thermodynamically characterized by the standard changes of, for example, the enthalpy ΔH°(Yx@Cn) or the Gibbs energy ΔG°(Yx@Cn). In a first approximation, we can simply consider the encapsulation potential-energy changes ΔE(Yx@Cn). Table 1 presents their values for Lix@C60. The absolute values increase with the increasing number of encapsulated Li atoms. In order to have directly comparable relative terms, it is convenient to consider the reduced ΔE(Yx@Cn)/x terms related to one Li atom. The absolute values of the reduced term decrease with increasing Li content; nevertheless, the decrease is not particularly fast (so that a further increase of the number of encapsulated Li atoms could still be possible). These computational findings help rationalize why the Li2@C60 endohedral could also be observed [11]. Although the basis set superposition error is not estimated for the presented values (an application of the Boys-Bernardi counterpoise method may be somewhat questionable in this situation), the correction terms could be to some extent additive. Interestingly enough, the stabilization of metallofullerenes is mostly electrostatic, as documented [35,36] using the topological concept of 'atoms in molecules' (AIM) [37,38], which shows that the metal-cage interactions form ionic (and not covalent) bonds.
Let us further analyze the encapsulation series from eq. (1). As already mentioned, the encapsulation process is thermodynamically characterized by the standard changes of the enthalpy ΔH°(Yx@Cn) or the Gibbs energy ΔG°(Yx@Cn). The thermodynamic functions are calculated here using the standard partition functions available in the Gaussian program package [27], i.e., in the rigid-rotor and harmonic-oscillator approximation. The equilibrium composition of the reaction mixture is controlled by the encapsulation equilibrium constants K(Yx@Cn,p), expressed in terms of the partial pressures of the components. The encapsulation equilibrium constants are interrelated with the standard encapsulation Gibbs energy change:

ΔG°(Yx@Cn) = −RT ln K(Yx@Cn,p). (2)

The temperature dependence of the encapsulation equilibrium constant K(Yx@Cn,p) is then described by the van't Hoff equation:

d ln K(Yx@Cn,p)/dT = ΔH°(Yx@Cn)/(RT²), (4)

where the ΔH°(Yx@Cn) term is typically negative, so that the encapsulation equilibrium constants decrease with increasing temperature.

Table 2. The products of the encapsulation equilibrium constants K(Yx@Cn,p) with the related metal saturated-vapor pressures p(Y,sat) [39] for Li@C60, Li2@C60 and Li3@C60, computed for selected illustrative temperatures T. The potential-energy change is evaluated at the B3LYP/6-31G* level and the entropy part at the B3LYP/3-21G level; the standard state is the ideal gas phase at 101325 Pa pressure.

Let us further suppose that the metal pressure pY is actually close to the respective saturated pressure p(Y,sat). While the saturated pressures p(Y,sat) for various metals are known from observations [39], the partial pressure of Cn is less clear, as it is obviously influenced by a larger set of processes (though p(Cn) should exhibit a temperature maximum and then vanish). Therefore, we avoid the latter pressure in our considerations at this stage. As already mentioned, the computed equilibrium constants K(Yx@Cn,p) have to show a temperature decrease, in accordance with the van't Hoff equation (4). However, if we consider the combined p(Y,sat)^x K(Yx@Cn,p) terms that directly control the partial pressures of the Yx@Cn encapsulates in an encapsulation series (based on one common Cn fullerene), we get a different picture. The considered p(Y,sat)^x K(Yx@Cn,p) term can frequently (though not necessarily) increase with temperature, so that a temperature enhancement of metallofullerene formation in the electric-arc technique would still be possible. An optimal production temperature could be evaluated in a more complex model that also includes the temperature development of the empty-fullerene partial pressure.
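The interplay of eq. (2) with the saturated-pressure factor can be sketched in a few lines of Python; the ΔG° values and the saturated pressure below are purely hypothetical placeholders for illustration, not the paper's computed data.

```python
import numpy as np

R = 8.314462618  # gas constant, J mol^-1 K^-1

def k_p(delta_g_kjmol, T):
    """Encapsulation equilibrium constant from Delta G(T) via eq. (2)."""
    return np.exp(-delta_g_kjmol * 1e3 / (R * T))

def yield_term(p_sat_pa, x, delta_g_kjmol, T, p_std=101325.0):
    """The p_sat^x * K product (pressures in standard-state units) that
    controls the partial pressure of Y_x@C_n under saturated metal vapour."""
    return (p_sat_pa / p_std) ** x * k_p(delta_g_kjmol, T)

# Hypothetical illustration at one temperature: relative production yields
# in a Li@C60 / Li2@C60 / Li3@C60 series are ratios of these terms.
T = 1000.0       # K
p_sat = 1.0e2    # Pa, assumed Li saturated vapour pressure at T
for x, dG in [(1, -150.0), (2, -220.0), (3, -260.0)]:
    print(x, yield_term(p_sat, x, dG, T))
```

Because the pressure enters with the exponent x, a higher saturated metal pressure can partly over-compensate a less favourable equilibrium constant, which is exactly the effect discussed for term (5) below.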
If we, however, want to evaluate the production abundances in a series of metallofullerenes like Li@C60, Li2@C60 and Li3@C60, the p(Y,sat)^x K(Yx@Cn,p) product terms can be used straightforwardly. The rigid-rotor and harmonic-oscillator partition functions and the entropy terms are evaluated at the B3LYP/3-21G level, and the potential-energy change at the B3LYP/6-31G* level. The results in Table 2 show several interesting features. For all three members of the series, Li@C60, Li2@C60 and Li3@C60, the p(Y,sat)^x K(Yx@Cn,p) quotient increases with temperature. This behavior results from a competition between the decreasing encapsulation equilibrium constants and the increasing saturated metal pressure.
In order to allow for the cancellation of various factors introduced by the computational approximations involved, it is better to deal with the relative quotient of term (5). Table 2 shows that the production yield of Li2@C60 in the high-temperature synthesis should be at least four orders of magnitude smaller than that of Li@C60. The chances for production of Li3@C60 should be at least two further orders of magnitude worse compared to Li2@C60. Interestingly enough, an endohedral with a relatively lower value of the encapsulation equilibrium constant could, in principle, still be produced in larger yields if a convenient over-compensation by a higher saturated metal pressure takes place, owing to the exponent of the pressure in term (5). In fact, we are dealing with a special case of clustering under saturation conditions [40]. The saturation regime is a useful simplification: it is well defined, but it is not necessarily always achieved. Under some experimental arrangements, under-saturated or perhaps super-saturated metal vapors are also possible. This reservation applies not only to the electric-arc treatment but even more so to the low-energy ion implantation [11,13,14]. Still, eqs. (2) and (5) remain valid; however, the metal pressure has to be described by the actually relevant values. For some volatile metals, their critical temperature can even be exceeded and the saturation region thus abandoned.
Although the energy terms are likely still not precise enough, their errors could be comparable across the series and should thus cancel out in the relative terms. Therefore, the suggested relative terms should be rather reliable values. This cancellation could also apply to other terms involved, like the basis set superposition error important for the evaluation of the encapsulation potential-energy changes. Another term that should still be evaluated is the electronic partition function, as low-lying electronic excited states can make significant contributions to the thermodynamics at high temperatures [41]. Finally, a cancellation in the relative terms should also operate for the higher corrections to the rigid-rotor and harmonic-oscillator partition functions, including the motions of the encapsulate. The motion of the endohedral atom is highly anharmonic; however, its description is as yet possible only with simple potential functions. It has been known from computations and NMR observations [42] that the encapsulated atoms can undergo large-amplitude motions, especially at elevated temperatures (unless the motions are restricted by cage derivatizations [43]). Therefore, in NMR observations metallofullerenes usually exhibit the highest (topologically) possible symmetry, which reflects the averaging effects of the large-amplitude motions (for this reason, the symmetry numbers of the Li endohedrals in this paper were also taken as 60 [44]). As long as we are interested in the relative production yields, the anharmonic effects should at least to some extent cancel out in the relative quotient, as also demonstrated in some model calculations [19]. Thus, the calculated relative production yields suggested in this study should be reasonably applicable to a broader spectrum of endohedral systems [45].
Conclusions
Calculations of Li@C60, Li2@C60 and Li3@C60 with the B3LYP density-functional theory treatment in the standard 3-21G and 6-31G* basis sets have been combined with evaluations of the encapsulation thermodynamics. The production yield of Li2@C60 in the high-temperature synthesis should be at least four orders of magnitude smaller than that of Li@C60, while that of Li3@C60 should be at least two further orders of magnitude lower compared to Li2@C60. The suggested evaluation of the relative populations is actually applicable to endohedrals in general.
"year": 2008,
"sha1": "e69e83737c017367130d579721ec5948800efed3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/9/9/1841/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e69e83737c017367130d579721ec5948800efed3",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
(Nearly) Outside the Shadow of Hierarchy: An Enquiry into the Teleology of Municipal Paradiplomacy in the EU
Local authorities have increasingly deployed instruments and practices proper of the international diplomatic repertoire, a phenomenon referred to in the literature as paradiplomacy. These state-like diplomatic praxes, ranging from participating in transnational networks to signing international agreements, hint at an increased room for manoeuvre for local authorities, in that they occur without the intervention and control of the central state. While emulating states' international behaviour, municipal paradiplomacy displays distinctive features and purposes. Drawing on the scholarship on paradiplomacy, this article provides a theoretical reflection on the teleology of municipal paradiplomatic practices. By proceeding through parallelism with state diplomacy's objectives, this article identifies three functional equivalents that constitute the primary teloi of paradiplomacy; namely, municipal self-determination, influence and contention. By delving into the aims of municipal paradiplomacy, the article points out how municipal paradiplomacy constitutes a valuable means for municipalities to progressively free themselves from the grip of the central state.
Introduction
Thirty years ago, Ivo D. Duchacek peremptorily asserted that 'international activities of noncentral governments rarely make the first page of national dailies'.1 To date, this statement no longer reflects the political role of local authorities: once neglected actors in the Grand Politics of international affairs, local authorities appear to have made their way out of oblivion. Going beyond the loose relations of the 20th century's ententes amicales, local authorities now deploy instruments and practices proper of the international diplomatic repertoire. This is especially true in Europe, where cities have engaged in a swath of state-like activities, including the creation of and participation in town twinning, networks and 'multilateral organisations', as well as the signature of 'bilateral cooperation agreements'.2 The return of cities to the political scene has rekindled the scholarly interest in the urban dimension. Local authorities have come to be understood by scholars and commentators as fully-fledged governing units, distinct from nation states, endowed with their own decision-making capacity and embedded in a specific political, socio-economic and cultural context. This characterisation of cities is evident in the terminology employed in political science, geography and urban studies. Terms such as 'actors', 'players' and 'policymakers' (instead of 'policy-takers', as implicitly assumed by much of state-centred theory)3,4,5,6 are just a few examples of the epithets used to underscore the proactive political role of cities. Especially in the European Union (EU), local authorities have been able to restore their political influence, developing relations with other European cities and with the EU through interurban organisations such as the Council of European Municipalities and Regions, or networks such as Eurocities.
The increased capacity of cities to act freely in the international arena has elicited theorisations likening the international behaviours and practices of cities to those of states, leading to the coinage of the term paradiplomacy. This concept may be defined as a 'mature political practice, encompassing all aspects of a true diplomatic culture, including the search for agreement or consensus, but also the sometimes aggressive quest for self-interest at the detriment of others'.7 Following the seminal work of Duchacek8 and Panayotis Soldatos,9 the paradiplomatic activities of subnational authorities have been documented by a sizeable host of research.10 Nevertheless, the contribution of International Relations has been scant.11 The literature on paradiplomacy suggests that supranational municipal engagement is the outcome of an increased capacity of cities to act independently from the central state, engendered by top-down and bottom-up drivers. On the one hand, it is argued that decentralisation, internationalisation and, within the EU, European integration granted municipalities a greater capacity to act more freely at the international level. On the other hand, research contributions on the topic emphasise subnational authorities' political agency, highlighting how the direct and seemingly uncontrolled involvement of cities in international political arenas expands their room for manoeuvre, consolidating their capacity to act as autonomous political agents.
1 Duchacek 1987, 2. 2 La Porte 2013, 90. 3 Kern 2009, 1. 4 Le Galès and Harding 1998, 122. 5 See Le Galès 2002, defining the city as a 'collective actor'. 6 Schultze 2003, 121.
On closer examination, the paradiplomacy scholarship is primarily concerned with the diplomatic activity of substate entities; that is, regions and federated states.12 Surely, the prevalence of studies on regions is motivated by the fact that the latter are endowed with legal powers and have fairly homogeneous socio-economic and cultural conditions, drivers that may trigger separatist claims (as in the cases of Catalonia, Scotland and Quebec). Nonetheless, there is a growing scholarly interest in diplomatic-like practices undertaken by local authorities.13 Within this latter thread of research, the paradiplomatic practices and international presence of global cities have drawn a great deal of interest.14 However, comparatively less attention has been given to non-capital cities' activities, such as those of second- and third-tier localities. Although the lion's share of municipal actors engaged in international activities is constituted by the aforementioned types of local authorities, the international engagement of smaller localities should not be, at least theoretically, dismissed tout court.15 Endeavouring to differentiate and underscore the specificities of local actors' engagement in paradiplomatic activities, more accurate terms have been proposed, such as 'urban paradiplomacy',16 'informal diplomacy',17 'city diplomacy'18 and 'public diplomacy'.19 In this article, the notion of paradiplomacy is qualified by using the more comprehensive term municipal which, intuitively, refers to the type of diplomatic-like activities carried out by municipalities. Shying away from any attempt to either criticise the abovementioned terms or argue for the need for an alternative notion, this terminological choice hinges on the assumption that not only capital and large cities are paradiplomatic actors.
7 Duran 2016, 2. 8 See, inter alia, Duchacek 1987 and 1990. 9 See, inter alia, Soldatos 1990. 10 See Acuto 2013; Aldecoa and Keating 1999; Callanan and Tatham 2014; Duran 2011, 2016; La Porte 2013; Lecours 2002; Mamadouh 2016; Michelmann and Soldatos 1990; Tatham 2008, 2010, 2013; Gress 1996. 11 Curtis 2014. 12 See, for instance, Aldecoa and Keating 1999; Cornago 2010a; Duran 2011, 2016; Lecours 2002. 13 See, inter alia, Acuto and Rayner 2016; Acuto 2013; Van der Pluijm, with Melissen 2007; La Porte 2013; Mamadouh 2016. 14 See, for instance, Curtis 2014; Friedmann 1986; Ljungkvist 2015; Nijman 2016; Sassen 1991, 2006.
To factor in the specificities of local authorities, this article employs the following functional equivalents: self-determination, influence and latent contention (see Fig. 1). Selfdetermination is here understood as the functional equivalent of sovereignty for local authorities, a notion that indicates an attempt to expand municipalities' room for manoeuvre. Self-determination aptly describes the greater freedom of municipalities to act on the inter national stage with no or minimal control from the central state, while taking into account how the centrifugal aspirations of local authorities are limited to the existing national constitutional and legal frameworks. Second, while state diplomacy is about consolidating state power, municipal paradiplomacy is about expanding municipalities' influence. By mobilising at the supranational level, municipalities strengthen their international political agency and partly loosen their economic dependence from the central state, thanks to the possibility of liaising directly with the EU and building up partnerships to apply for EU funding-as pointed out by the literature on this topic discussed in the ensuing sections. Finally, this article defines latent contention as an attempt to challenge the central state through processes of 'bypassing' and-to some extent-'delinking' . Although not subverting the central-local power structure, which is defined by law, paradiplomacy helps cities to be recognised as credible negotiators, becoming the counterpart of the EU in supranational decision-making and of the central state in domestic negotiations. Drawing on these three notions, this article thus provides a throughout understanding of the teleological principles of municipal paradiplomacy. In so doing, the article aims at bringing the city back into international relations.
This article proceeds as follows. After this introduction, in Section 2 a historical excursus of the main institutional processes that have enabled the development of paradiplomacy is sketched out. In Sections 3, 4 and 5 the constitutive elements of municipal paradiplomacy are illustrated. The article draws to a close with Section 6.
The Loosening of the Central-local Relations
Figure 1. State and municipal diplomacy (Source: Author). State diplomacy: sovereignty, power, conflict management; municipal paradiplomacy: self-determination, influence, latent contention.
To better grasp the teleology of municipal paradiplomacy, a historical excursus of the institutional framework that enabled local authorities' development of paradiplomatic practices seems to be a worthy starting point. Historical research has found that embryonic forms of what is now termed paradiplomacy date back to Ancient Greece: accordingly, the first document referring to a type of non-official diplomacy appears to be Demosthenes's 'Περὶ τῆς παραπρεσβείας' (Perì tes parapresbeia, 'On the false embassy').20 Despite the fact that forms of paradiplomacy ante litteram were practised by ancient Greek city-states, as well as by city-states in the Italian territory at the end of the 1400s,21 it was in the 20th century that municipalities returned to the limelight after centuries of marginalisation. In particular, cities' political agency was consolidated by both EU-level and domestic processes that may be summarised as the unlocking of 'window[s] of opportunity' for municipalities at the EU level22 and the devolution of administrative powers. Let us discuss these processes in detail.
EU Opportunity Structures
The EU opened up new avenues for European local authorities to widen their wiggle room. In particular, the subsidiarity principle,23 set out in the 1992 Treaty of Maastricht, strengthened subnational authorities' participation in the process of European integration.24 The subsidiarity principle has supported national reforms to increase competencies at the local level. Accordingly, policy issues should be addressed, whenever possible, by lower levels of authority; only if these issues have a national relevance or cannot be tackled efficiently by subnational authorities do they have to be addressed by the central government. A recent attempt to consolidate the role of cities in EU policymaking is the Urban Agenda for the EU, which envisages 'a new form of multilevel and multi-stakeholder cooperation with the aim of strengthening the urban dimension in EU policy'.25 The participation of cities in the EU polity has also been fostered by initiatives funded by the European Regional Development Funds with an urban dimension, which incentivise the establishment of networks as means to collectively tackle social, economic and environmental problems. Underlying the EU's urban-friendly attitude has been the belief that, being the closest administrative level to the citizens, local authorities can help tackle the longstanding issue of the democratic deficit in the EU. In effect, cities 'seem to be potential local bases for the implementation of European programmes, for the mobilisation of citizens on behalf of the European integration project, and for the participation of coalitions that aim to advance this project'.26 Further, the willingness of the EU institutions to involve local authorities in policymaking has been instrumental to achieving specific objectives: on the one hand, the liaison with European local governments has enabled the EU institutions to enhance internal cohesion; on the other hand, it has made it easier to understand the perspective of local governments in order to improve the EU policymaking process.27 Thanks to the acquisition of more capacity to act without interference from the central state, local authorities have been able to hone their paradiplomatic skills, mimicking nation states' praxes and behaviours in foreign policy. Paradiplomacy may be conceived, at least partly, as a tool to address the political marginalisation and the financial cutbacks operated by the central state. Municipalities have thus turned to the international scale in search of valuable assets, such as capital, expert knowledge and connections, which the central state has not provided them with, to boost their economy and address local issues. As Soldatos stated, 'The development of urban paradiplomacy is an expression of "functional sovereignty" in the face of the urgent dilemma "internationalize or perish"'.28 Even more trenchant is Benjamin R. Barber's argument that 'cities have little choice: to survive and flourish they must remain hospitable to pragmatism and problem solving, to cooperation and networking, to creativity and innovation'.29 To this end, the development of an international profile, for example through participation in transnational urban organisations or by hosting international events, increases the visibility of the city, which is instrumental to securing investments to subsidise welfare services and improve local infrastructures.30 Hence, the creation of 'strategic cities alliances and networks' is crucial to acquiring the status of international city.31
20 As Duran 2013, 150 explains, the term parapresbeia precisely indicates a 'false embassy'. 21 Berridge 2015, 1. 22 Saunier 2008. 23 Now established by Article 5 of the Consolidated versions of the Treaty on European Union and the Treaty on the Functioning of the European Union (2010/C 83/01). 24 Ewen 2008, 108; Goldsmith 1993, 683. 25 Urban Agenda for the EU 2016, 4. 26 Le Galès 2002, 76. 27 Goldsmith 1993, 683. 28 Soldatos 1991, 346. 29 Barber 2013, 13. 30 Mocca 2017, 706. 31 Soldatos 1991.
Decentralisation Reforms and Municipal Agency
In addition to the opportunities provided by the EU, public administration reforms, which led to the delegation of state competencies to local authorities, incentivised the supranational engagement of the latter. The devolution of power was reinforced by economic, political and social issues such as mounting public expenditure, the crisis of the welfare state, increasing distrust of the national political class, the 'legitimacy crisis' of central power, and pressing requests for greater public involvement in decision-making.32 On this latter point, it has been noted that state reorganisation was also the outcome of 'bottom-up pressures' from social agents, less trustful towards the central state.33 The decentralisation process was undertaken to respond to the changing relationship between voters and the political class: in several European countries, where election turnouts were shrinking, administrative changes at the local level were pointed to as the solution to address requests for a more responsible political class and a broader political participation of citizens.34 While broadening the competencies of local governments, decentralisation was paralleled by substantial cuts to local budgets. Therefore, municipalities found themselves in need of additional funding to face mounting social and economic problems. For this reason, many local politicians tried 'to be local boosters, to lead development coalitions and to break free from the constraints of party hierarchies'.35 Furthermore, local political leaders, less bound by party politics, gained more room for manoeuvre to implement 'popular policies' and to make concessions to entrepreneurs and investors.36 Such 'enfranchisement', to borrow Panayotis Soldatos and Hans Michelmann's terminology,37 occurring in the political sphere was coupled with the attempt of local authorities to compete on their own in the market. As noted by William F. Lever, in the 1980s and 1990s urban and regional governments were trying to 'delink' themselves from the national economic system.38 Lever points out four main factors that led to the disconnection between centre and periphery: a widespread belief that local governments could be more successful than national governments in managing the local economy; the resurgence of the idea of 'the Europe of the regions'; the confidence of local administrators that they could economically 'outperform' the central government, which, in turn, would ensure their re-election; and, finally, the possibility for local governments, through strategies of 'urban marketing', to build an urban image different from the 'national identity'.39 As a result, many cities have sought to develop an international or European profile to establish their position within the global networks. The lack of intervention by central governments to address the problems faced by local authorities, and the belief that cities were economically and politically more advanced than the central state, soured national-local relations and prompted cities to strengthen their linkages outside the national boundaries. As Renaud Payre and Pierre-Yves Saunier observe in their analysis of the involvement of Lyon in the Union Internationale des Villes and Eurocities,40 'Lyon's municipal leaders tried to escape a centre-periphery relationship', in a context marked by the transfer of competencies from the central level to subnational governments and European institutions.41 Similarly, it has been noted that local authorities conceive of 'networks as an opportunity to subvert centralisation strategies of the state' and to better 'represent the interests of localities'.42 In the dedicated literature, it has been underscored how municipalities' subnational mobilisation, such as interurban networking, enables cities to 'bypass' central governments.43 In particular, it has been argued that in those countries where local politicians are endowed with a high 'political status', they have the room for manoeuvre to set out international strategies that may be in contrast with the national level.44 Furthermore, Harriet Bulkeley observes that the effectiveness of initiatives promoted within networks is influenced by 'the powers of municipalities, … central-local government relations' and the capacity to '"outflank" the nation-state in pursuit of their political aims and ambitions'.45
32 Borraz and John 2004, 108, 111. 33 Jobert 1999, in Le Galès 2002, 91. 34 Borraz and John 2004, 114-115. 35 Borraz and John 2004, 113-114. 36 Borraz and John 2004, 113-114. 37 Soldatos and Michelmann 1992, 132. 38 Lever 1997, 227. 39 Lever 1997, 230-232, indicates Barcelona, Frankfurt and Milan as 'delinked' European cities that fared better than the average national economic performance.
Paradiplomacy as a Means for Municipal Self-determination
As anticipated in the introductory section, one of the first goals of state diplomacy is to assert state sovereignty. In this sense, diplomacy is thus a means for the representation of the state's interests in the international arena. In effect, as G.R. Berridge and Alan James observe, diplomacy, although evolving over time, still involves 'relations between … proud and jealous sovereign states' and its 'essence' is still about 'promoting and justifying states' interests'.46
40 Founded in 1913, respectively. Payre and Saunier 2008, 70, 71. 41 Payre and Saunier 2008, 83. 42 Bulkeley et al. 2003, 237. 43 Ward and Williams 1997; Bulkeley 2005.
Local autonomy has been defined as 'the ability of local governments to have an independent impact on the well-being of their citizens' , the latter indicating both financial and nonfinancial resources available to individuals to fulfil their aspirations.48 A more accurate definition has been proposed by Gordon L. Clark, for which local autonomy is characterised by two types of power: 'initiation' and 'immunity' .49 While 'initiation' indicates 'the actions of local governments in carrying out their rightful duties' , 'immunity allows local governments to act however they wish within the limits imposed by their initiative powers' .50 As the definition provided by Clark highlights, autonomy is set out by law, which delimits local authorities' wiggle room. On closer examination, it is possible to distinguish two types of autonomy: administrative autonomy, meaning the scope and number of policy functions to which a city is entitled, and political autonomy, which indicates the possibility to directly elect political representatives.51 Therefore, due to different combinations of policy competencies and political capacity, the degree of autonomy changes both within and across countries.52 Further, more than an absolute concept (autonomous or not autonomous), autonomy can be understood as a relative concept, defined by a continuum where the two extremes are 'high autonomy' and 'low autonomy' . Consequently, autonomy is also a hierarchical concept, whereby specific cases may be positioned on the continuum for each of the types of autonomy (see Fig. 2).
In an attempt to summarise previous definitions of local autonomy, Lawrence Pratchett describes this concept as 'an issue of sovereignty-if not sovereignty over everything within a territory, then at least sovereignty over certain spheres of activity' .53 In fact, local autonomy emanates from the upper 47 Pratchett 2004, 362. 48 Wolman and Goldsmith 1992, 3. 49 Clark 1984, 198. 50 Clark 1984. 51 See on this point also Mocca 2017, 694-695. 52 Mocca 2017, 694 53 Pratchett 2004 level's decision to devolve competencies, which 'can be withdrawn or altered at the whim of the sovereign power' .54 To provide a comprehensive definition, Pratchett spells out local autonomy as 'freedom from higher authorities' , 'free dom to achieve particular outcomes' and 'reflection of local identity' .55 The first definition emphasises the liberty enjoyed by lower levels of government, while the second underscores the opportunities available to municipalities to fulfil local needs and the last one brings out the development of 'a sense of place through political and social interaction' within a local community.56 Taking the cue from this threefold definition of local autonomy, paradiplo macy may be described as a means to achieve 'freedom to' , inasmuch as it enables municipalities to deploy new instruments and methods to meet their localities' needs as they see fit. In this sense, municipalities engage in 'public diplomacy' to 'influence international political decisions that might affect city life' .57 This conceptualisation of municipal paradiplomacy implicitly characterises much of the literature on transnational municipal networks. Accordingly, this stream of research highlights how taking part in Transnational Municipal Networks enhances cities' political influence, lobby capacity and problemsolving attitude and allows for knowledge sharing and network building.58 By way of contrast, paradiplomacy is less about 'freedom from': it is true 54 Pratchett 2004, 362. 55 Pratchett 2004. 56 Pratchett 2004, 366. 57 La Porte 2013 58 See, inter alia, Andonova, Betsill and Bulkeley 2009, 63-65;Bulkeley and Betsill 2003, 26-27, 52;Ewen 2008, 103, 108, 110;Heinelt and Niederhafner, 2008, 174;Kern and Bulkeley 2009, 314-316, 319-320, 323-324, 327-328;Kübler and Piliutyte 2007, 367-370;Le Galès 2002, 106-107;Mocca 2018, 204, 217;Payre and Saunier 2008, 72, 80, 81.
Source: Author. Figure 2 The concept of autonomy that, by engaging at the supranational level, municipalities widen their political influence, but it does not make them more autonomous stricto sensu since it increases neither local authorities' policy competencies nor their legal powers. Likewise, municipal paradiplomacy does not entail the social and political relations braided within local communities since it is a prerogative of local political elites. As such, paradiplomacy is not so much about developing a truly community-driven 'sense of place' , as it is often about branding and marketing local specificities to make localities more attractive to investors. This brief rundown of the concept of local autonomy helps elucidate the first telos of municipal paradiplomacy and underscores that this practice should not be mistaken as a quest for more autonomy. In this respect, Noé Cornago points out that paradiplomacy 'suggests a contentious connection with diplomacy … while simultaneously affirming an ambition of separate existence or autonomy' .59 With this statement, Cornago restricts paradiplomacy to regions or federal states and meso-level administrative units that, being endowed of legal capacity and significant administrative competencies, may jeopardise the unity of nation states-while cities' supranational engagement cannot pose such a threat, 'neither territorially nor simply symbolically' .60 Inarguably, Cornago's point61 reveals some truth: as cities cannot separate from a country and become independent political entities, their paradiplomatic activities are not bound to expand their autonomy-which, as already mentioned, indicates the capacity of a political entity to cater for its citizens' needs without the intervention from the upper levels of authorities. For sure, the political freedom of local authorities is constrained by the very same existence of the central state: while the latter does not assume an interventionist role in local governments' international activities, it establishes by law the perimeters within which municipalities are free to move. Nevertheless, as documented in the dedicated literature, local authorities' international mobilisation contributes to broaden their political leverage in foreign and domestic policy, carving out a space for themselves to act outside the control of the central state.62 In light of the above discussion, the term selfdetermination appears to be more appropriate to describe local authorities' increased capacity to act without or with limited control of the central state provided by municipal paradiplomatic activities. The use of the concept of self-determination as a 'mild' form 59 Cornago 2010b, 94. 60 Cornago 2010a, 14. 61 Cornago 2010a 62 See, inter alia, Bulkeley and Betsill, 2003, 190;Kern and Bulkeley 2009, 329;Le Galès 2002, 95;Mocca 2018, 218;Payre and Saunier 2008, 83. of autonomy is ideally positioned between full independence and complete subordination, as illustrated in Fig. 3. Unlike autonomy, self-determination better frames the freedom of municipalities to act as autonomous agents in international affairs, while taking into account the national constraints within which they operate. 
The use of the concept of self-determination as a 'mild' form of autonomy is ideally positioned between full independence and complete subordination, as illustrated in Fig. 3. Such a conceptualisation is also in line with Manuel Duran's interpretation of paradiplomacy as a 'complex political reality, taking a middle position between attempts to break away from the state (or sometimes even the state system), and attempts to engage in the search for communalities, economic and cultural ties with other international actors, including the state'.63 Understood as an attempt by local authorities to distance themselves from the state and thread cross-level and transnational cooperative relations, paradiplomacy thus constitutes an instrument to broaden the self-determination of local governments. More precisely, self-determination may be understood in both economic and political terms. On the one hand, paradiplomacy may provide municipalities with more opportunities to attract external investments and funding. For instance, cities' international strategies are often aimed at increasing their international profile as exemplary cities in an attempt to attract investors.64 As such, self-determination acquires the meaning of self-reliance. In other words, the greater room for manoeuvre provided by paradiplomatic initiatives, such as engagement in interurban networks, helps municipalities to achieve greater (but not full) economic independence, often integrating domestic and subnational funding.65 On the other hand, paradiplomacy helps cities consolidate their political agency, by acting as the negotiating counterpart of the EU and, more indirectly, of the central state.66 In this sense, self-determination is understood as self-rule, inasmuch as, through paradiplomatic practices, municipalities act without consulting the central state, even signing binding agreements and covenants.
Figure 3. Illustration of self-determination (Source: Author): a continuum running from sovereignty through self-determination to subordination.
Ultimately, self-determination, just like autonomy and independence, is an asymmetric relational concept, inasmuch as it implies the emancipation of one agent from another whose freedom and powers are not equal. While toning down cities' runaway aspirations, the concept of self-determination implies a latent tension between the local and central levels of authority, as will be shown later in this article. Through their paradiplomatic repertoire, municipalities try to move away from the orbit of the state. Despite not being endowed with 'hard powers' and being restrained by state authority, municipalities mobilising at the supranational level wield political influence.
63 Duran 2016, 3. 64 Mocca 2017; 2018, 217. 65 Mocca 2018, 210. 66 Mocca 2018.
The Soft Power of Influence
Turning now to the second telos, the analogue of state power is influence. The conceptualisation of influence as an aim of paradiplomacy can be found in the literature. For instance, according to Teresa La Porte, the (para)diplomatic instruments used by local authorities in their European engagement are about 'persuasion', that is, the 'power of influence'.67 Seeing influence as the functional equivalent of power does not mean that local authorities have no powers at all. However, in foreign policy, nation states hold the hard power of coercion, while local authorities may only rely on the 'soft power' of influence which, as Joseph Nye observes, is about 'getting others to want the outcomes you want'.68 To draw the line between power and influence, this article borrows Peter Morriss's theorisation, which denotes power as the capacity to 'effect something', meaning 'to bring about or accomplish it', and influence as the ability to 'affect something', that is, 'to alter it or impinge on it in some ways'.69 Although this conceptual distinguo lends some theoretical help in clarifying the purpose of municipal paradiplomacy, it is possible to add another analytical layer to better identify this telos. As a result, power, understood as the capacity to 'effect', may be further distinguished into power as normativity and power as enactment. In its first articulation, power falls under the remit of the state's and, to some extent, regional governments' competencies, which have the power to legislate. In its second meaning, power conceived as enactment indicates the capability to intervene concretely on an issue, and rests with all levels of government, including local authorities. This latter conceptualisation of power captures the 'can-do thinking' proper of local authorities, underpinning subnational mobilisation.70 In this sense, albeit not expanding the range of powers to which municipalities are entitled, paradiplomatic activities strengthen cities' capacity to act, providing them with the opportunity to create networks of peers with which to exchange policy ideas and examples that could be implemented in their localities.71 Following this line of reasoning, it could be argued that municipalities may 'affect' the decision-making process of upper levels of authority on specific issues through the soft power of persuasion, for instance, on the content of EU legislation on urban matters or of EU funding calls.72 Therefore, municipalities exert an indirect 'effect' on power. In practical terms, municipalities wield influence by lobbying supranational organisations:73 in so doing, municipalities reproduce on a lower scale the negotiation practices proper of international diplomatic relations, thus asserting their agency as competent negotiators.
67 La Porte 2013, 86. 68 Nye 2008, 95. 69 Morriss 2002.
In more sophisticated terms, taking its cue from the psychoanalytically inflected readings put forward by some diplomacy scholars, Duran resorts to the 'Self-Other binary' to explain how, even in the realm of paradiplomacy, the acknowledgment of other entities leads to the affirmation of the existence of the self.74 By assuming a proactive and vocal role at the EU level, for example through interurban networks, cities seek to demonstrate that they are seriously committed to tackling social, economic and environmental urban problems, and are therefore reliable political actors.75 The divergence between power and influence as the teloi of state and municipal paradiplomacy, respectively, reflects the overall teleological difference between the two phenomena. In this respect, it is argued that while state diplomacy is primarily 'power play diplomacy', aimed at pursuing the state's objectives, city paradiplomacy can be described as 'humanist diplomacy', devoted to the establishment of linkages among individuals.76 Furthermore, sub-state, as well as municipal, paradiplomacy is geared towards the 'management of estrangement or the mediation of separateness', seeking to pursue territorial aims through the enhancement of connections with other agents.77 While municipalities try to delink themselves from the state, paradiplomacy becomes part and parcel of a politics of relinking, allowing them to forge crucial relationships with peers located in other countries, with the ultimate goal of expanding their sphere of influence beyond local and national boundaries.
70 Barber 2013, 6. 71 Mocca 2018, 210-211. 72 Mocca 2018, 210. 73 Mocca 2018, 204. 74 Duran 2011, 342-343. 75 Mocca 2018, 217. 76 Duran 2016.
To sum up, paradiplomacy may be conceived as a means for local authorities to broaden their political influence, acquiring more freedom to negotiate directly with EU institutions and to pursue their territorial interests independently of the central state. For this reason, paradiplomacy may engender a contentious relationship between municipalities and the state, a point that is examined in the next section.
Paradiplomacy as Latent Contention
Just like state diplomacy, paradiplomacy is about establishing relations with a third party. Such relations may be either cooperative or more or less overtly contentious.78 As Michaël Tatham points out, two types of paradiplomacy may be discerned: 'by-passing paradiplomacy', when subnational units act alone, with no relation to the central state, and 'co-operative paradiplomacy', indicating the concerted action of subnational and national governments.79 In particular, Tatham observes how bypassing paradiplomacy may engender two main reactions from the central government: on the one hand, the interaction of non-state actors at the EU level with no role for the state may kindle an oppositional relation; alternatively, autonomous subnational mobilisation may be accepted or even overlooked by the central government.80 Similarly, Mark Callanan and Michaël Tatham distinguish three forms of subnational-national relations: 'co-operation', 'by-passing' (that is, no relation is forged) and 'conflict'.81 They suggest that extant contributions indicate that 'stronger subnational authorities' are less likely to outflank their national government as they have more leeway, finding that single local authorities are more likely to adopt a 'bypassing' attitude towards the central government.82 Further, Callanan and Tatham note that this type of subnational-national relation is a '"fall back" option' put in place when a collaborative relation is difficult.83 By way of contrast, other authors conceptualise paradiplomacy as a device for local authorities to confront the state. The relation between central states and local authorities is hierarchical and thus inevitably asymmetric, making the two sides unequal players.84 Following Duchacek and Soldatos' analysis, paradiplomacy may be framed in terms of the dichotomy 'centralization/decentralization',85 inasmuch as municipal paradiplomacy puts, although indirectly, local governments and the central state in a contentious relation.
77 Duran 2016, 4-5. 78 Van der Pluijm with Melissen 2007, 12-13. 79 Tatham 2010, 78. 80 Tatham 2010, 90. 81 Callanan and Tatham 2014, 194. 82 Callanan and Tatham 2014.
The (latent) local-centre rivalry underlying paradiplomacy was already noted in the early 1990s. Soldatos and Michelmann argue that a 'growing international interdependence', engendered by globalisation, economic liberalisation, European integration and the improvement of communication networks, led to the 'perforation of the sovereignty of nation states' and, consequently, to the 'international enfranchisement' of subnational territorial entities, which resulted in the development of an 'active network of paradiplomacy'.86 In this view, the international engagement of subnational units represents the third option between surrendering to globalisation and facing economic recession.87 According to Cornago, subnational actors' paradiplomatic activities are the by-product of the '"pluralization" of the diplomatic realm', engendering what the author labels 'agonistic pluralism'.88 The 'perforation' of the international sphere by non-state actors thus transformed diplomacy, which ceased to be a 'just-state practice' and became 'a multi-actor phenomenon'.89 In this vein, 'diplomacy [was] no longer the exclusive monopoly of a monistic "self", i.e. the central state'.90 From a theoretical perspective, this change was captured by James Der Derian's 'post-modern' reading of paradiplomacy, thanks to which paradiplomacy came to be understood as an antagonistic instrument of subnational actors against the state, intended as 'a form of diplomacy, transgressing and provoking the traditional stance on diplomacy'.91 Duran conceptualises 'subnational diplomacy' as a dual strategic choice between cooperation with and conflict against the central state.92
83 Callanan and Tatham 2014, 202. 84 Elander 1991, 37. 85 Aguirre 1999. 86 Soldatos and Michelmann 1992, 129, 132. 87 Soldatos and Michelmann 1992, 129-134. 88 Cornago 2010b, 91-92. 89 Duran 2013, 147. 90 Aguirre 1999, 201. 91 Duran 2013.
Similarly, Barber raises the question as to whether the 'interest of cities' and those of the states will be 'in harmony or in conflict', and whether cities will be allowed to keep threading cooperation networks despite nation states being 'not merely indifferent but also hostile' to their activities.93 The answer to these questions is, according to Barber, that the ambitions of cities and states 'are often necessarily in tension', as the cross-border vocation of cities is contained by state power.94 As a third way between cooperative and contentious paradiplomacy, municipalities may also seek out exclusive international arenas where there is no or minimal involvement of nation states. For example, membership in transnational municipal networks, one of the most widespread forms of municipal paradiplomatic activity, enables member cities to make their voices heard without state intermediation. More precisely, wedged in between the more or less conflictual local-centre relation is the EU, which performs the role of a 'mediator' between the two levels of authority.95 As such, cities' international networks function as an indirect channel through which to influence the central state. Echoing Liesbet Hooghe, it can be argued that nation states do not act as 'gatekeepers' that mediate between the local and supranational levels.96 In this respect, it has been shown that cities are interested primarily in influencing EU policy-making; the impact on national governments is thus an indirect effect of lobbying at the EU level, since nation states have to implement European directives into national laws.97 Further, through peer-to-peer exchange of knowledge and experience, European cities come across effective solutions to urban problems implemented by other peers; in turn, these solutions give cities grounds to negotiate with the national state for more capacity to address local problems.98 This argument can also be found in Payre and Saunier's premise that local governments' supranational engagement has equipped cities with 'political, intellectual, and practical resources to enable a given municipality to adjust its presence on the intermunicipal map, develop its agency within national politics, and fulfil its search for local support'.99 Such willingness of many municipalities to enfranchise themselves from the central state is not symptomatic of the 'hollowing out' of the state. Taking forward Barber's theory of a world parliament of mayors, Simon Curtis argues that the cooperative networks in which many cities worldwide are engaged cannot be seen only as 'a challenge to the state', in that a growing influence of cities does not correspond to the decline of central states.100 For the author, this phenomenon does not prove the fall of the state per se, but the demise and restructuring of 'a particular historical iteration of the state - the nation-state', paralleled by the growing influence of the 'Global City' as a 'new historically distinctive form of the city'.101 While this holds true, recent changes in the political landscape in Europe, and even beyond, hint at an international order stirred up by the attempt of nation states to regain control of their dispersed power, showing how the decentralisation trend may be reversed.
92 Duran 2011, 342. 93 Barber 2013, 9. 94 Barber 2013, 9-10. 95 Mocca 2018, 210. 96 Hooghe 1995, 177. 97 Mocca 2018, 210, 217. 98 Mocca 2018, 211. 99 Payre and Saunier 2008.
Discussion and Conclusion
This article has sought to provide a theoretical contribution to the literature analysing the role of municipalities in International Relations by grappling with municipal paradiplomacy as a salient political phenomenon characterising contemporary international affairs. By employing functional equivalents of state diplomacy's aims, this article has endeavoured to examine the teleology of municipal paradiplomacy through the identification and excavation of its three main teloi: self-determination, influence and latent contention. As discussed previously, municipal paradiplomacy constitutes an instrument to widen cities' self-determination, inasmuch as it increases local authorities' opportunities to act with little or no control by the central state. Such greater freedom to act may be both political and economic. Municipalities acquire political self-rule whenever engaged in negotiations with upper levels of authority as well as with their peers. In this sense, municipalities seek to widen their sphere of influence, inasmuch as, by being recognised as credible 'broker[s]',102 they are capable of steering decision-making in a favourable way. This recognition of the international agency of cities by third parties may also heighten economic self-reliance. In effect, paradiplomacy helps cities to place themselves on the global market.
As a corollary of greater self-determination and influence, paradiplomacy also has implications for centre-local relations: cities develop direct linkages with other cities and the EU institutions without the interposition of nation states, stretching the relationship of subordination to the central state.103 On the one hand, local authorities, embedded in a growingly pluralistic system of international relations, have acquired a significant international standing and established themselves as paradiplomatic actors. As such, it appears that state authority is being worn away by an ever-growing plethora of non-state agents.104 Even diplomacy, as discussed before, has become a realm where the state is not the only actor. For this reason, some authors have theorised the concept of 'network diplomacy', describing the creation by non-state agents of a 'diplomatic community' parallel to the one formed by states.105,106 On the other hand, the revamped wave of nationalism, witnessed in the recent rise of nationalist parties in Europe (as well as elsewhere), is reasserting the centrality of the nation state as the primary actor in international relations. In this view, the nation state is, borrowing James A. Caporaso's107 words, 'a distinctive form of organization based on carving up the world into territorially exclusive enclaves. Sovereignty, in its modern form, is the right to exclude-people, capital, ideas, foreign powers, and so on'.
Against this backdrop, paradiplomacy, albeit reducing state sovereignty, may be seen as entrenched in the Westphalian form of state, as some scholars argue.108 In effect, despite threading cooperative networks that may 'influence the global economy and bypass the rules and regulations of states', local authorities' capacity to act freely is legally circumscribed.109 In this respect, Barber prophesies that 'The interdependence of cities may erode their ties to nation-states and draw them toward collaboration with one another, but no state worth its salt, as measured by its sovereignty, will stand still and watch cities annul subsidiarity and escape the gravitational pull of their sovereign mother ship.'110 Therefore, European local authorities' influence, capacity to 'outflank' the central state111 and self-determination do not prove their progressive delinking from the national level. As Barber puts it, 'Legislative sovereignty and budget authority give states plenty of ways to block run-away towns'.112 Similarly, it is true that, as Tanja Börzel and Thomas Risse point out, even non-top-down relations do not occur completely outside of hierarchy.113 Likewise, in the multi-tiered EU architecture, municipalities do not escape the weight of hierarchy. Notably, the multi-level governance framework emphasises the importance of a new mode of governing characterised by different actors across different scales interwoven in a complex web of relations. Liesbet Hooghe and Gary Marks liken multi-level governance to Escher's famous web of stairs, describing it as a system where 'there is no up or under, no lower or higher, no dominant class of actor; rather, a wide range of public and private actors compete or collaborate in shifting coalitions'.114 However, the wiggle room of such a swath of actors is actually limited to those competences enshrined in national legislation.115 Therefore, rather than an entanglement of endless staircases, the EU architecture appears to be a much more neatly designed structure with a clear and recognisable summit. This picture resonates with Duchacek's rendition of paradiplomacy, whereby: 'Diagramatically, the nation state qua multivocal actor could be illustrated in the form of a stepped Saqqara pyramid, with its separate yet interconnected points of entry on the international scene, in contrast to the neat, single apex of the Cheops pyramid'.116 Hence, the coexistence of state diplomacy and municipal paradiplomacy witnesses the struggle between two 'forms of states':117,118,119 the first type, labelled 'post-modern', is networked and decentralised, one where power is devolved and dispersed; the second is the 'Westphalian state', the Leviathan, where power is centralised and sovereignty is not shared with any sub- or supranational entities.120 While forecasting whether and which of these two forms will prevail would be a daunting task, this article submits that paradiplomacy, as a long-established practice, signals the enduring endeavour of several municipalities to progressively free themselves from the grip of the nation state.
103 Mocca 2018, 217. 104 Metzl 2001, 78. 105 Duran 2011, 354. 106 Metzl 2001, 77. 107 Caporaso 1996, 45. 108 Acuto and Rayner 2016, 1151. 109 Barber 2013, 9. 110 Barber 2013, 11. 111 Fairbrass and Jordan 2001, cited in Bulkeley 2005. 112 Barber 2013, 8. 113 Börzel and Risse 2010, 116. 114 Hooghe and Marks 2001, 7, cited in Bulkeley and Betsill 2003, 28. 115 Mocca 2018, 211. 116 Duchacek 1986, cited in Aguirre 1999, 189.
117 Caporaso 1996, 30-31. 118 Cox 1983, cited in Caporaso 1996, 31. 119 Cox 1986, cited in Caporaso 1996, 31. 120 Caporaso 1996. | 2020-07-16T09:04:06.536Z | 2020-07-06T00:00:00.000 | {
"year": 2020,
"sha1": "249acd2837846da336daa534f363eebd3ea3ad75",
"oa_license": "CCBYNC",
"oa_url": "https://brill.com/downloadpdf/journals/hjd/15/3/article-p303_4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "297de192caabe2360cb92aa387b91a78e5a86876",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
16856073 | pes2o/s2orc | v3-fos-license | A Selfish Genetic Element Influencing Longevity Correlates with Reactive Behavioural Traits in Female House Mice (Mus domesticus)
According to theory in life-history and animal personality, individuals with high fitness expectations should be risk-averse, while individuals with low fitness expectations should be more bold. In female house mice, a selfish genetic element, the t haplotype, is associated with increased longevity under natural conditions, representing an appropriate case study to investigate this recent theory empirically. Following theory, females heterozygous for the t haplotype (+/t) are hypothesised to express more reactive personality traits and to be more shy, less explorative and less active compared to the shorter-lived homozygous wildtype females (+/+). As males of different haplotypes do not differ in survival, no similar pattern is expected. We tested these predictions by quantifying boldness, exploration, activity, and energetic intake in both +/t and +/+ mice. +/t females, unlike +/+ ones, expressed some reactive-like personality traits: +/t females were less active, less prone to form an exploratory routine and tended to ingest less food. Taken together, these results suggest that differences in animal personality may contribute to the survival advantage observed in +/t females but fail to provide full empirical support for recent theory.
Introduction
In a wide range of taxa, it has been shown that individuals from the same population differ consistently in their behaviour. The concept of animal personality applies to behavioural differences that are consistent through time and across situations [1,2,3]. Often, these behavioural traits are correlated within or across contexts and are referred to as behavioural syndromes [4,5,6]. For instance, "proactive" individuals, in contrast to "reactive" individuals, have higher activity levels and a higher metabolic rate, are more exploratory and risk-prone (or bold), and are faster to establish routines [7,8,9,10,11]. How animal personalities evolved within populations remains unclear, especially because behavioural plasticity could be seen as an optimal way to cope with fluctuating environments [12].
Life-history theory provides a framework for investigating the evolution of animal personalities [13,14,15]. Animal personality can have a profound influence on life-history traits like growth, fecundity and survival [15,16,17]. Using evolutionary models, Wolf and co-workers [14] demonstrated that life-history trade-offs promote the evolution of animal personalities. In their model, individuals varying in exploration behaviour inhabited a low-quality resource habitat for a year, at the end of which they could stay for a second year or move to a high-quality resource habitat. Superficial explorers, which evolved high levels of boldness in risky games (= proactive), invested more in current reproduction. Conversely, those that invested more in future reproduction were careful explorers, which evolved low levels of boldness in the same risky games (= reactive). These models therefore predict that individuals with different fitness expectations express different personality traits, here exploratory behaviour. The authors concluded that individuals with high expectations of future fitness, who have much to lose and for whom life is valuable, should be more cautious than individuals with low expectations.
Concurring with model predictions, recent evidence shows that individuals expressing reactive personality traits have a lower basal metabolic rate and therefore lower energetic needs [10,11]. The metabolism of reactive individuals could allow them to survive longer by saving more energy than proactive individuals, especially when foraging involves risk-taking. For instance, a personality implying less risk-taking behaviour and conserving energy would favour survival [16,18]. Thus, long-lived individuals should express a reactive-like personality, whereas individuals characterized by a low life expectancy should express a proactive-like personality [14].
The t haplotype, also called the "t complex", a naturally occurring genetic variant in the house mouse (Mus domesticus), provides an appropriate case study to investigate this hypothesis and hence fill the gap in empirical data. The t haplotype is a selfish genetic element, consisting of many linked genes, showing drive [19]. Its main known fitness effect is a reduction in litter size in matings between heterozygotes due to a recessive lethal allele [20]. Recently, t-related effects on life-history have been documented. In a free-living population of house mice, heterozygous females (+/t) live longer than homozygous wildtype females (+/+), with a 30% viability advantage [21]. No difference in survival was found between +/+ and +/t males. Although no information is yet available on whether life expectancy positively correlates with fitness in wild house mice, mean life expectancy has been reported to be 100-150 days [22,23], whereas generation time is about 270 days [24]. This indicates that many mice die before they successfully reproduce, suggesting that a higher life expectancy could improve the chance to reproduce.
Following theory on the evolution of life-history and personality [13,14], we hypothesize that reactive personality traits co-evolved with the t haplotype. We therefore assessed personality traits in mice of both sexes and genetic backgrounds. We predicted that +/t females, characterized by a high survival rate, should express "reactive-like" personality traits and therefore be more shy, less active and less explorative than +/+ females, characterized by a lower survival rate. Moreover, we compared the propensity of +/+ and +/t mice to form routines, as this has been shown to reflect individuals' ability to use information on their environment and then adapt to its potential changes [7,9,25]. House mice travel their territory daily, covering and marking the same routes repeatedly. Through these routines, mice acquire highly habitual responses, which they can perform rapidly and with minimal sensory input [26]. As proactive individuals form routines faster than reactive individuals, we expected +/+ females to form such routines faster than +/t females. Finally, as an index of energy intake we monitored food consumption, expecting that reactive individuals, here +/t females, would ingest less food than proactive individuals, here +/+ females [10,11]. No differences were expected between males of different haplotypes, as they have similar survival rates.
Study Subjects
We used 82 sexually mature but non-breeding house mice (more than six weeks old; mean age ± SE = 184 ± 10 days), which were laboratory-born F2 and F3 descendants of wild-caught individuals from the same population in the vicinity of Zürich as the one in which longevity differences were reported [21]. We tested a total of 41 females (20 +/+ and 21 +/t) and 41 males (20 +/+ and 21 +/t) randomly selected from offspring of our breeding stock. No significant difference in age was observed between +/+ and +/t mice of the same sex (females: t(39) = 0.03, p = 0.973; males: t(39) = 0.84, p = 0.408). Males were younger than females (t(80) = 4.02, p < 0.001), because high aggression among males meant that they could not be housed in groups for long. All individuals were in good condition for the entire duration of the study.
Housing
All mice were singly housed in Macrolon Type II cages (267 × 207 × 140 mm), beginning 5 days before the first behavioural test. Each cage contained standard animal bedding (Lignocel Hygienic Animal Bedding, JRS), an empty toilet paper roll and some paper towel as hides and nest-building material. Food (laboratory animal diet for mice, Provimi Kliba SA, Kaiseraugst, Switzerland) and water were provided ad libitum. Animals were kept under standardized laboratory conditions at a temperature of 22 °C ± 3 °C with a relative humidity of 50-60% and on a 14:10 light:dark cycle with a 1 h sunrise and dusk phase at the beginning and end of the light phase.
Body Weight
Mice were weighed twice at a 7-day interval, with the first measurement taken the day before the first behavioural test and the second on the day following the end of the first series of behavioural tests. We did not observe significant changes in body weight (t(81) = 1.69, p = 0.095). As the two measurements were highly repeatable (R = 0.95, F(81,82) = 40.52, p < 0.001), we used the mean.
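For readers who want to recompute this kind of repeatability estimate, the sketch below shows an ANOVA-based intra-class correlation for two measurements per individual, in the spirit of the methods cited later under Statistical Analyses. It is a minimal illustration: the function is ours and the simulated weights are invented, not the study's data.

```python
import numpy as np

def repeatability(m1, m2):
    """ANOVA-based intra-class correlation for two measurements per
    individual (one-way ANOVA with individual as the grouping factor).
    Returns (R, F); for n individuals and k = 2 measurements the F
    statistic has (n - 1, n) degrees of freedom."""
    x = np.column_stack([np.asarray(m1, float), np.asarray(m2, float)])
    n, k = x.shape
    ind_means = x.mean(axis=1)
    ms_among = k * np.sum((ind_means - x.mean()) ** 2) / (n - 1)
    ms_within = np.sum((x - ind_means[:, None]) ** 2) / (n * (k - 1))
    s2_among = (ms_among - ms_within) / k   # among-individual variance
    return s2_among / (s2_among + ms_within), ms_among / ms_within

# Illustrative only: 82 simulated mice weighed twice
rng = np.random.default_rng(1)
true_weight = rng.normal(20.0, 3.0, size=82)          # grams
week1 = true_weight + rng.normal(0.0, 0.7, size=82)   # measurement error
week2 = true_weight + rng.normal(0.0, 0.7, size=82)
print(repeatability(week1, week2))                    # R close to 1
```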
Genotype Determination
An individual ear tissue sample was collected from all males and females at least one week before testing. DNA was isolated and amplified at the Hba-ps4 locus, a marker containing a 16-bp t haplotype-specific insertion [27]. PCR products were electrophoresed on an ABI 3730xl and visualized using GeneMapper 4.0 software (Applied Biosystems) to determine genotype at the t locus.
Schedule for the Assessment of Personality Traits
For breeding convenience, this study was carried out in two sessions. The first session took place in February-March, the second in July-August. Each behavioural test was performed twice at a seven-day interval to check for individual consistency through time [28,29,30]. Exploration tests were, however, replicated after nine days because of a time constraint. Activity and boldness tests were performed in the morning (from 8:00 to 11:00), whereas the first assessment of exploratory behaviour was performed in the afternoon (15:00 to 18:00) and the replicate in the morning. All behavioural tests lasted ten minutes, with the observer standing immobile at a one-meter distance. As the activity and boldness tests were performed using the home cage of the mice, the stress induced by the procedures was very limited. Within a three-minute acclimation period the mice were very calm and were observed grooming themselves. A single mouse was involved in only one experiment per day and had one day free after each behavioural test. The behavioural tests were run blind with regard to the genotype of the mice.
Activity
To measure individual activity, we removed nest material and the paper roll from the home cage to facilitate observations. We replaced the cage lid with a Plexiglas lid with a grid drawn on it to split the cage widthwise into three equal parts. After a three-minute acclimation period, the observer recorded for ten minutes the number of times a mouse crossed the lines with all four paws. We then calculated an activity score following common previous procedures [31,32].
Exploration
Exploratory behaviour was assessed in a concentric square field cage, an arena composed of nine compartments: a central part surrounded by four corridors joined alternately by covered and uncovered corners [33,34] (Figure 1). After each trial, the apparatus was cleaned with acetone to remove scent marks [35]. A focal mouse was transferred in a small dark box from its home cage to the apparatus to reduce stress before the beginning of the test. The door of the box was aimed in the direction of a covered corner in the first trial and in the direction of an uncovered corner in the replicate. The sliding door of the box was opened by remote control (using a string), and the latency to leave the box, the time needed to enter each compartment, and the total number of visits to compartments were recorded. For convenience, latencies were subtracted from the total duration of the test (600 seconds), such that highly explorative individuals, characterized by short latencies, received a high value.

Table 1. Individual consistency of the behavioural variables assessed twice at a one-week interval, estimated firstly from mixed model analysis accounting for genetic background, body weight, sex, session, trial and interactions, and secondly from ANOVA-based intra-class correlation coefficients.
Boldness
Boldness was assessed in a classical olfactory test with three Macrolon Type II cages connected by tubes [36,37]. We connected a central cage to two cages, one at each side. The central cage was filled with bedding from the home cage of the individual tested. The two other cages were filled with either unused cat bedding or soiled cat bedding (Cat's Best Öko Plus, Qualipet). The soiled cat bedding had been used by a domestic cat for one week before the experiment. Cats represent a natural predator against which mice should have evolved avoidance mechanisms [38,39]. Following Dickman & Doncaster [40], mice should be able to assess the presence of predators indirectly through olfactory cues and avoid areas with predators' faeces or urine. Our setting thus represents two identical areas, one of which has apparently been visited by a natural predator, allowing a test of boldness in the face of predator cues [41,42,43]. This procedure avoids repeated exposure to a real predator, known to be highly stressful for mice [34].
Focal individuals were released in the central cage and kept there for a three-minute acclimation period. Removable wire mesh partitions closed the tubes, allowing odour identification of the neighbouring cages. At the start of the trial, the partitions were removed, and the time spent in and the number of visits to each cage containing each type of cat bedding were recorded for ten minutes. The mice made significantly more visits to (t(81) = −3.25, p = 0.002) and spent significantly more time (t(81) = −2.88, p = 0.005) in the cage filled with unused cat bedding than in the cage filled with soiled cat bedding.
Propensity to Form Routine
Routine formation is usually measured by changing a familiar environment that has been experienced repeatedly and subsequently testing how quickly individuals react to this environmental change [7,9,44]. The propensity to form a routine can be indirectly measured by the magnitude of the increase in the performance of a given behaviour between the replicated trials of the same test. Following this idea, we quantified the propensity to form a routine as the difference between the performance measured at the second trial and the performance measured at the first trial.
Food Consumption
Food consumption was only recorded for the 48 mice taking part in the second session because of a time constraint at the end of the first session. This sub-sample was composed of 23 females (12 +/t and 11 +/+) and 25 males (9 +/t and 16 +/+). During two consecutive weeks, one month after all behavioural experiments were carried out, the quantity of pellets eaten by the mice was recorded at the same time of day. On day 1, the food holder was cleaned and filled with new pellets of known quantity (weighed on an electronic balance, Sartorius BL 1500 S, with 0.01 g precision). On days 7 and 14, uneaten pellets were removed for weighing and, on day 7, replaced with new pellets. We checked daily whether pieces of pellets had fallen through the feeder grid into the bedding. When found, they were removed and weighed. Food consumption was repeatable between the two weeks (intra-class correlation coefficient: R = 0.44, F(47,48) = 2.55, p < 0.001).
Statistical Analyses
Statistical tests were carried out using R version 2.13.1 (R Development Core Team 2011). The number of visits to the cage containing soiled cat bedding, the number of visits to the cage containing clean cat bedding, and the total number of visits to all compartments in the exploration test were square-root transformed, while activity scores, the time needed to explore all the compartments in the exploration test, and the quantity of food eaten were log-transformed to satisfy normality.
We tested the influence of individual identity, genetic background, sex, body weight, session and trial on the measured variables using linear mixed effect models. Interactions between genetic background and sex, trial and sex, trial and genetic background, and between trial, genetic background and sex were also included. Individual identity was defined as a random effect to assess individual consistency (repeatability), while all other variables were defined as fixed effects. Significance of the random effect was determined by likelihood ratio tests, while fixed effects were tested using F tests [45]. We also used ANOVA-based intra-class correlation coefficients (R) to quantify individual consistency between the two trials of each behavioural test [46,47]. A significant effect of trial in the mixed effect models described above revealed a propensity to form a routine. Potential effects of genetic background, sex or their interaction on routine formation were therefore assessed by the effect of the interactions involving trial in the same mixed effect models.
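As an illustration of this modelling approach, the sketch below fits a comparable mixed model in Python with statsmodels rather than the R workflow used in the paper. The input file and column names are hypothetical, the fixed-effects formula is abbreviated relative to the full set of interactions listed above, and halving the likelihood-ratio p-value is one common boundary correction for testing a variance component, not necessarily the exact procedure of [45].

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: one row per individual per trial, with
# columns 'score', 'genotype', 'sex', 'weight', 'session', 'trial', 'id'.
df = pd.read_csv("behaviour.csv")

# Mixed model: fixed effects plus a random intercept per individual
mixed = smf.mixedlm("score ~ genotype * sex + weight + session + trial",
                    data=df, groups=df["id"]).fit(reml=False)

# Likelihood ratio test of the random effect: compare against the same
# fixed-effects model without the individual intercept (plain OLS)
ols = smf.ols("score ~ genotype * sex + weight + session + trial",
              data=df).fit()
lr = 2 * (mixed.llf - ols.llf)
p_value = stats.chi2.sf(lr, df=1) / 2  # variance tested on its boundary
print(mixed.summary())
print(f"LRT for individual identity: chi2 = {lr:.2f}, p = {p_value:.4f}")
```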
Multiple correlations between the personality traits showing individual consistency enabled us to check for behavioural syndromes. To avoid type I errors, we followed the Benjamini-Hochberg procedure, which controls the false discovery rate and thereby also reduces type II errors [48,49]. Beforehand, the number of movements in the activity test, the total number of visits to compartments in the exploration test, and the number of visits to the cages containing clean and soiled cat bedding were averaged and then standardized (for each session separately) to control for the "session" effect found in the mixed effect models. For each trial, the standardized variables are thus defined by an identical mean (equal to 0) and standard deviation (equal to 1).
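A minimal sketch of the Benjamini-Hochberg step-up rule applied to a set of correlation p-values follows; the example p-values are invented, and statsmodels' multipletests(pvals, method='fdr_bh') would return the same accept/reject decisions.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of tests that remain significant after
    Benjamini-Hochberg false-discovery-rate control at level q."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    # Step-up rule: find the largest i with p_(i) <= (i / m) * q
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    keep = np.zeros(m, dtype=bool)
    if below.any():
        keep[order[: below.nonzero()[0].max() + 1]] = True
    return keep

# e.g., p-values from pairwise trait correlations (invented numbers)
print(benjamini_hochberg([0.001, 0.012, 0.030, 0.200, 0.800]))
# -> [ True  True  True False False]
```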
Food consumption (total food consumed over two weeks) was normally distributed and was analysed using a general linear model to determine the influence of genetic background, sex, body weight and their interactions. Non-significant interactions (p > 0.05) were dropped from the full model by a backwards stepwise procedure, following Crawley [45].
Individual Consistency
The number of movements during the activity test, the total number of visits to and the time needed to explore all the compartments in the exploration test, and the number of visits to the cage containing soiled cat bedding during the boldness test were consistent within an individual through time (Table 1). These variables were therefore used to test for behavioural syndromes.
Personality Traits
The analyses of the influence of genetic background, sex, and body weight on the personality traits showed that the t haplotype, sex and their interaction each had a significant effect on basic activity (Table 2). +/t females were less active than +/+ females, and females were in general more active than males (Figure 2). None of the personality traits measured in the boldness and exploration tests were influenced by genetic background, sex or their interaction (Table 2). Body weight did not have a significant effect on any of the personality traits except the total number of visits in the exploration test (Table 2).
Propensity to Form Routine
No propensity to form a routine was observed in the activity test, as mice showed similar activity scores in the first and the second trial (Table 2). However, during the boldness test the number of visits increased in the second trial both to the cage with soiled cat bedding (1st trial (mean ± SE): 5.5 ± 0.6; 2nd trial: 8.4 ± 0.6) and to the cage with clean cat bedding (1st trial: 6.6 ± 0.6; 2nd trial: 9.0 ± 0.6) (Table 2). During the exploration test, the total number of visits to the compartments increased between the first and the second trial (1st trial: 23.1 ± 2.7; 2nd trial: 45.9 ± 4.8), whereas the time needed to explore all the compartments decreased (1st trial: 560 ± 12 s; 2nd trial: 468 ± 22 s), both suggesting a propensity to form an exploratory routine (Table 2). Genetic background, sex or their interaction did not have any significant influence on the propensity to form a routine observed in the boldness test, as measured by the number of visits to the cage containing soiled cat bedding or the number of visits to the cage containing clean cat bedding (Table 2). The analysis of the propensity to form an exploratory routine, as measured by the increase in the total number of visits in the exploration test, did not show an overall influence of sex or genetic background but a significant effect of the interaction of genetic background with sex (Table 2). Heterozygous +/t females were less prone to form an exploratory routine than +/+ females, as they showed a smaller increase in their number of visits, whereas there was no significant difference between +/t and +/+ males (Figure 3). When analysing the decrease in the time needed to visit all the compartments between the two replicates, sex, genetic background or their interaction did not show any significant effect on the formation of an exploratory routine (Table 2).
Correlations between Personality Traits
The positive relationship between boldness and activity allowed us to define a behavioural syndrome in females but not in males (Table 3). More precisely, this relationship was significant in +/+ females, whereas +/t females only showed a non-significant tendency to express it (Figure 4; Table 3).
Discussion
Our study demonstrated that laboratory-reared female house mice of a genotype conferring a survival advantage under natural conditions expressed reactive-like behavioural traits favouring cautiousness and energy conservation. The longer-living +/t females were less active, less prone to form an exploratory routine, and tended to ingest less food than the shorter-living +/+ females.
Having a low activity level could have various positive effects on survival. First, decreased activity can be beneficial for small rodents when facing predators that rely on hearing or sight to detect prey [50]. Second, organismal maintenance requires partitioning of the available energy budget among different biological functions, of which effector organs like skeletal muscles are responsible for much of the daily energy expenditure [51]. Within a given energy budget, an individual with reduced activity can allocate a larger part of its energy budget to other functions that could improve survival.
Our results on food consumption supported our energy-saving interpretation, as +/t females showed a tendency towards a lower food intake than +/+ females. This could reflect a lower need for energy and/or a better capacity to save energy, both of which could favour survival when access to food is restricted or risky. Our results suggest that reactive individuals could decrease the frequency of their visits to feeding places compared to proactive individuals and may decrease the risk of being caught by predators when feeding. Moreover, research on the rate of aging in rodents showed that mice fed a 65% reduced diet improved their maximum life span by 51% compared to mice fed ad libitum [52]. Caloric restriction extends life span through mechanisms such as reduced oxidative damage [53]. This could also apply to +/t females and hence would partly explain their survival advantage over +/+ females, which have a higher food consumption.
Moreover, +/t females were less prone to form an exploratory routine. Although reactive and proactive individuals have similar learning abilities, at least in birds, reactive individuals form routines more slowly than proactive individuals [9,25]. This particularity, seen as a higher attentiveness to the environment, confers an advantage on reactive individuals, as they can adjust better to sudden environmental changes than proactive individuals [7,54,55].
In contrast to other personality studies, we did not observe behavioural syndromes among most of the personality traits we assessed [5,6]. We found one syndrome, defined by a positive correlation between activity and boldness, such that the less active females were also the more cautious. However, this relationship was significant in +/+ females, whereas +/t females only showed a tendency. Some studies have shown that behavioural syndromes are not ubiquitous, even within the same species. In three-spined sticklebacks (Gasterosteus aculeatus), the presence of behavioural syndromes depends on whether population characteristics favour suites of correlated behaviours [32,56,57]. The absence of behavioural syndromes in male house mice could thus be due to sex-specific behavioural optima.
The differences observed in the activity test are consistent with expected differences in energy demands due to milk production. Costs of lactation are very high in small rodents and increase with litter size [58]. Litter size is influenced by the t haplotype: on average, +/t females have smaller litters than +/+ females, as whenever +/t females mate with +/t males their litter sizes are nearly halved due to the lethal homozygous effect of the t haplotype [59]. Thus a female's expected average litter size should correlate with her activity level, since higher activity levels help to gather information about food to cover energetic needs during lactation. Consistent with this, we showed for non-breeding mice that +/t females had lower activity levels than +/+ females. Fitness of +/t and +/+ females will on average be equal if +/t females compensate for smaller litters by producing more litters, which greater longevity would permit. This would contribute to maintaining the polymorphism in the population. Perrigo [60] showed that lactation strongly influences the activity patterns of females, and that males were less active than females. We also found that males were less active than females.
The lack of difference in exploration and boldness between mice of different sexes and genotypes suggests that balancing selection has resulted in a single optimal behavioural level for each trait, with no correlation between individual values for the traits. House mice in western Europe live commensally with humans, are nearly always found close to easily accessible food resources, and often live in dense populations [26,38], suggesting that exploration to find new food patches may often be secondary to exploration to monitor social situations. Both males and females monitor the presence of conspecifics and defend their territories against intruders [61]. Similarly, boldness might be under strong balancing selection pressure reducing inter-individual variability, the raw material needed for personalities to evolve.
Although our study provides interesting insights into personality traits associated with +/t females and survival differences, the causal relationship is unclear. The t haplotype, consisting of a third of chromosome 17, has had an independent evolutionary history from its wildtype counterpart for more than two million years [62]. Major Histocompatibility Complex genes are located within the four inversions comprising the t haplotype [63] and there is evidence that a gene influencing both male and female mate choice is also located within the t haplotype [20]. Genes influencing other traits, such as personality and/or survival, either additively or epistatically or through dominance, could be located within this region.
Behavioural studies like ours not only help in understanding the t haplotype but also raise new questions related to life-history trade-offs and the evolution of animal personalities [13,14,64]. The rate-of-living theory postulates a negative association between life span and the rate of energy expenditure [65]. Thus, two opposite strategies, "live fast and die young" or "live slowly and die old", define a fast-slow life-history continuum along which individuals can be ranked [66,67,68]. Our results give evidence that these two life-history strategies apply to the t complex, with +/+ females living extravagantly and +/t females living frugally. Wolf et al. [14], however, predicted an association between residual reproductive value and risk-related behaviours like exploration or boldness, so that we could expect +/t females to be shyer and less explorative than +/+ females. Our results fail to provide full empirical support for this theory, as only activity showed a clear association with the t haplotype and we did not find a strong relationship between activity and boldness.
The literature provides few examples reporting the influence of personality traits like activity, aggressiveness, and sociality on reproductive success or longevity [69,70,71] (see [72] for a review). Our study indicates that longer-living house mice express reactive personality traits, demonstrating that longevity correlates with personality. However, as studies focusing on life-history productivity and personality are still missing in this species, we do not know whether the expression of specific personality traits could also influence their reproductive success and/or tactics [13,64].
"year": 2013,
"sha1": "7fdbef91724a53536310a9b0f4758b1ede8c4492",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0067130&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fdbef91724a53536310a9b0f4758b1ede8c4492",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
256546879 | pes2o/s2orc | v3-fos-license | Anabolic Androgenic Steroid Use Patterns and Steroid Use Disorders in a Sample of Male Gym Visitors
Introduction: The use of anabolic androgenic steroids (AAS) and other image- and performance-enhancing drugs is a growing public health concern. AAS use is associated with various physical and mental harms, including cardiovascular risks, cognitive deficiencies, and dependence. The aim of this study was to determine whether patterns of AAS use and other variables are associated with the presence of an AAS use disorder (AASUD). Methods: An online survey was completed by 103 male AAS consumers visiting gyms. The association of different patterns of AAS consumption (cycling vs. continuous forms of AAS use), psychoactive substance use, mental health disorders, and sociodemographic variables with moderate-severe AASUD (≥4 criteria of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders) was investigated. The associations of the duration of AAS use and the AAS dose with moderate-severe AASUD were investigated using logistic regression analysis with moderate-severe AASUD as the dependent variable. Results: Moderate-severe AASUD was present in 25 (24.3%) of the participants. AAS consumers meeting criteria for moderate-severe AASUD, compared to those who did not, reported a longer duration of AAS use (in weeks), a higher average AAS dose (mg/week), and a greater number of AAS side effects in the last 12 months. Duration of AAS use and the AAS dose were the only independent predictors, with an increase of 3.4% in the probability of moderate-severe AASUD with every one-week increase in the duration of AAS use in the last year (p < 0.05) and an increase of 0.1% with every 10 mg increase in the average AAS dose per week (p < 0.05), respectively. Conclusion: Our findings show that moderate-severe AASUD is relatively frequent among male AAS consumers and is positively associated with the duration and average dose of AAS use in the last 12 months.
Introduction
Anabolic androgenic steroids (AAS) can be medically prescribed for the treatment of delayed puberty and other medical problems caused by testosterone deficiency. Traditionally, non-medically prescribed AAS were used by competitive weightlifters, powerlifters, and bodybuilders to gain muscle mass and strength and to increase their performance. However, since the 1970s, AAS have increasingly been used by recreational athletes to enhance physical appearance. Since then, in addition to AAS, a wide range of other substances, together termed "image- and performance-enhancing drugs" (IPEDs), including growth hormones and fat loss drugs, have been used to alter physical performance or appearance [1,2].
There is increasing evidence that AAS use is a growing public health concern [3], with an estimated global lifetime prevalence in the general population of 3.3% (95% CI: 2.8-3.8), significantly higher in males than in females (6.4% vs. 1.6%) [4]. The prevalence of AAS use is relatively high in Europe, North America, the Middle East, Oceania (Australia and New Zealand), and South America (Brazil) and relatively low in Africa and Asia [5]. The prevalence of AAS use is probably underestimated because these estimates are generally based on self-report data and because of the illegal nature of the supply and the secretive nature of their use [6]. In 2012/2013, about two-thirds of new clients of Needle and Syringe Programmes (NSP) in the UK were AAS consumers [7]. In 2009, a Dutch study concluded that 8.2% of 718 members of fitness centers used IPEDs, mainly AAS and stimulants [8], whereas more recently, another study among 2269 male gym visitors reported that 9.0% used AAS [9].
AAS consumers with a high cumulative history of AAS exposure are at risk of various physical problems [10], including hypogonadism [11], cardiovascular conditions [12,13], and cognitive deficiencies [14]. In addition, AAS use is associated with the use of (other) illicit substances and with AAS dependence [15]. Although this disorder has been proposed and recognized by some [15,16], it is important to note that psychiatric classification systems do not recognize AAS dependence as a mental disorder. For this reason, Kanayama et al. [17] decided to use a slightly adapted version of the existing criteria for dependence according to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-4), with the restriction that AAS dependence is only present if there is a maladaptive pattern of AAS use associated with clinically significant impairment or distress, manifested by three or more of the seven DSM-4 criteria. According to Kanayama et al. [17], AAS dependence may arise when AAS use is continued despite prominent adverse medical and psychiatric effects.
Based on seven different studies, Kanayama et al. [17] concluded that about 30% of illicit AAS consumers developed AAS dependence. In an online survey among regular US visitors of Internet discussion boards about fitness, bodybuilding, weightlifting, and steroid use, 23.4% of 479 AAS consumers met criteria for AAS dependence [18]. A follow-up study confirmed these findings and showed higher AAS doses, a higher number of agents used, shorter periods without AAS consumption, and a longer lifetime duration of AAS use in AAS-dependent consumers compared to non-dependent consumers [19].
AAS dependence might be related to specific patterns of AAS use or to the doses used. To avoid negative health effects associated with continuous AAS consumption, some AAS consumers apply "cycles": periods of AAS use interrupted by regular breaks with no AAS use that generally last at least as long as the periods of use [20]. Other AAS consumers apply some form of continued use which, to varying degrees, may be sustained by the positive effects during use, such as the desired increase in muscle mass and feelings of confidence and well-being, and by the avoidance of negative side effects, such as the hypogonadal symptoms [21] that occur after AAS use and that may withhold many AAS consumers from cessation [22].
In the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [23], substance abuse and dependence are merged into one substance use disorder (SUD) with different levels of severity according to the number of criteria present, ranging from a mild (2-3 of 11 criteria) through a moderate (4-5 of 11 criteria) to a severe (≥6 of 11 criteria) SUD. In this paper, we use the DSM-5 diagnostic criteria for SUD, specifically adjusted for AAS use, to assess the presence of a moderate or severe "AAS use disorder" (AASUD), i.e., participants meeting ≥4 (adjusted) DSM-5 SUD criteria.
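Stated as a rule, this severity banding maps the number of criteria met onto a label; the short helper below (the function name is ours) simply restates the thresholds given in the text, with moderate and severe (≥4 criteria) counting as clinically relevant AASUD in this study.

```python
def aasud_severity(n_criteria: int) -> str:
    """Map the number of (adjusted) DSM-5 SUD criteria met (0-11)
    onto the severity bands used in this study."""
    if n_criteria >= 6:
        return "severe"
    if n_criteria >= 4:
        return "moderate"  # moderate-severe = clinically relevant here
    if n_criteria >= 2:
        return "mild"
    return "none"

assert aasud_severity(4) == "moderate" and aasud_severity(7) == "severe"
```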
Currently, there are no data in the literature about the relationship between AAS use patterns, duration of use, and the presence of AAS dependence or AASUD. Regarding the AAS dose, several studies have noted that dependent consumers took significantly more AAS than non-dependent consumers in terms of total dose [19], total number or length of AAS cycles, and cumulative duration of AAS use [24]. We conducted an online survey among AAS-using male gym visitors to investigate the prevalence of moderate-severe AASUD (≥4 DSM-5 criteria) and to detect variables associated with AASUD, including the pattern of AAS use, the dose of AAS used (in mg/week in the last 12 months), the duration of AAS use (in weeks in the last 12 months), AAS side effects, psychoactive substance use, and the presence of mental disorders [19,25]. Sociodemographic variables and the use of supplements were also investigated.
Study Population
This study used a convenience sample of IPED consumers recruited in the Netherlands. Between December 12, 2019, and April 1, 2020, participants were contacted through social media (Facebook, Twitter), the harm reduction agency "Mainline" (www.mainline.nl), and the most visited Dutch forum for strength sports and bodybuilding (www.bodybuilding.nl). In addition, participants were recruited by posters and flyers in fitness centers and during the biggest strength sports and bodybuilding event in Belgium and the Netherlands (S.A.P. Cup: www.muscletotaal.nl/sapcup). Participation in the survey was open to men and women aged 18 years or older. In the present study, participants were included if they were 18 years or older, had used AAS in the past 12 months, and had answered the 11 questions assessing the presence of an SUD (which equals completion of at least 70% of the survey questions).
The survey was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Participants provided informed consent before participation in the survey. Responses were not traceable to a specific person, and no personal data were collected. Optionally, participants could leave an e-mail address after the survey to participate in the raffle of ten books. The collected data were stored in a secure folder encrypted with AES-256 encryption and maintained by Mainline with the encryption key solely in the possession of the principal investigators.
Measurements
The online survey was designed in SurveyMonkey® (www.surveymonkey.com) and consisted of 76 items in total. Survey items referred either to the participant's behavior and experiences in the past 12 months or to the present situation, unless explicitly stated differently, and measured the following variables: demographics (age, gender, educational level, working and partner status), sports practice (main sport practiced, intensity and frequency of training, competitive involvement, main sporting goal), supplement use, AAS and other IPED use (types of AAS used, frequency of use, route of administration, dose, pattern of AAS use, types of IPEDs used), side effects experienced, and psychoactive substance use. In addition, for mental health, participants were asked about their body image satisfaction and whether they ever or currently had one or more of the following conditions: ADHD, anxiety disorder, depression, eating disorder, psychosis, and/or substance dependence. The age of first occurrence of the mental condition and the age of first AAS use were additionally asked. Participants who had a cyclic pattern of AAS use were asked to rate their mental well-being "on" and "off cycle" on a scale ranging from 0 to 10. A copy of the survey is included in online supplement S1 (for all online suppl. material, see www.karger.com/doi/10.1159/000528256). AASUD was assessed by 11 yes/no questions regarding the presence of the DSM-5 diagnostic criteria for SUD, adjusted for AAS use [17]. In addition, the new "craving" criterion of DSM-5 (item 4) was split to make the item applicable in the context of both intermittent and continuous patterns of AAS use. In the current study, only participants with a moderate-severe DSM-5 AASUD are regarded as having a clinically relevant disorder, i.e., participants meeting ≥4 DSM-5 SUD criteria with AAS as the substance of abuse. Participants with a positive response on only 2 or 3 DSM-5 SUD criteria are not regarded as having a clinically relevant AASUD, because this may result in a high number of false positives, i.e., the inclusion of AAS consumers reporting some problems with their AAS use without meeting clinically significant levels of a mental disorder.
Statistical Analysis
Cases with a missing value on a variable were omitted from analyses that included that variable. Adjusted sample sizes for these cases are reported in the tables displaying the results. For the description of the study population, normally distributed variables were expressed as means with standard deviations, and for variables with skewed distributions, medians with interquartile ranges were calculated. The relationships of AASUD (yes vs. no) with demographic factors, sports characteristics, AAS use patterns, duration and quantity of use (AAS and other IPEDs), mental and physical health, and lifestyle factors were explored using the Fisher-Freeman-Halton exact test, Fisher's exact test, and the independent-samples t test or Mann-Whitney U test, as appropriate. A correlation analysis was done to explore the relations of the presence or absence of moderate-severe AASUD (DSM-5 ≥4) and the number of AASUD criteria with age, substance use, (non-AAS) substance dependence, mental health disorders, IPED use other than AAS, and past physical or sexual abuse, based on the literature [19, 25-27]. Variables that were significantly associated with moderate-severe AASUD were investigated further in a logistic regression analysis with moderate-severe AASUD as the dependent variable. Significance was set at p < 0.05. Statistical analyses were performed with SPSS software, version 28.
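As a hedged illustration of this pipeline, the sketch below reproduces the main steps (Fisher's exact test for a 2x2 comparison, Mann-Whitney U for a skewed continuous variable, and logistic regression with moderate-severe AASUD as the dependent variable) in Python with SciPy and statsmodels in place of SPSS 28. The data are synthetic; none of the values are the study data, and the assumed effect of duration is illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import fisher_exact, mannwhitneyu

rng = np.random.default_rng(0)
n = 60
weeks = rng.integers(4, 52, size=n).astype(float)   # weeks of AAS use, last 12 months
age = rng.integers(20, 45, size=n).astype(float)
competed = rng.integers(0, 2, size=n)               # ever competed as an athlete
p = 1.0 / (1.0 + np.exp(-(-3.0 + 0.08 * weeks)))    # assumed true effect of duration
aasud = (rng.random(n) < p).astype(int)             # moderate-severe AASUD (>=4 criteria)

# 2x2 categorical comparison (Fisher's exact test)
table = [[int(np.sum((aasud == a) & (competed == c))) for c in (0, 1)] for a in (0, 1)]
print("Fisher p =", fisher_exact(table)[1])

# Skewed continuous variable (Mann-Whitney U test)
print("Mann-Whitney p =", mannwhitneyu(weeks[aasud == 1], weeks[aasud == 0]).pvalue)

# Logistic regression with moderate-severe AASUD as the dependent variable;
# exp(coef) is the odds ratio per one-unit increase of the predictor.
X = sm.add_constant(np.column_stack([weeks, age]))
fit = sm.Logit(aasud, X).fit(disp=0)
print("odds ratios:", np.exp(fit.params[1:]))
```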
Sample
Of the 189 participants, 133 (70.4%) reported ever use of AAS. Three women, 16 men who had not used AAS in the past 12 months, and 11 men who completed less than 70% of the questions were excluded. The final sample consisted of 103 men.
Characteristics of the Participants
The first column of Table 1 summarizes the demographic and physical training characteristics, AAS use patterns, psychoactive substance use, and mental health status of the 103 AAS consumers. Participants were all male with a mean age of 31 years, were generally well educated (intermediate/higher education; 78.6%, n = 81), and bodybuilding was the sport most frequently practiced (69.9%, n = 72).
Patterns of AAS Use
Participants were classified according to three patterns of AAS use over the past 12 months: a continuous stable dose of AAS (n = 9, 8.7%), a continuous use pattern in which "blasts" and "cruises" of high and lower AAS doses were alternated (n = 44, 42.7%), and a third pattern in which periods of AAS use and no AAS use were "cycled" (n = 50, 48.5%). Online supplementary Table S2 shows the specific doses and durations of AAS use for each of the three patterns.

Notes to Table 1: Numbers in the table are number (percentage), mean (standard deviation), or median (interquartile range); significance of group differences was tested with the Fisher-Freeman-Halton exact test, Fisher's exact test, unpaired t test, and Mann-Whitney U test. DSM-5, Diagnostic and Statistical Manual of Mental Disorders (5th ed.); DNP, 2,4-dinitrophenol; mg, milligram. (a) Items refer to the last 12 months, unless stated otherwise. (b) For AAS consumers with a "blast-and-cruise" use pattern, this is the average of the dose during a "cruise" and during a "blast," taking into account their respective durations. (c) N = 100 (no AASUD = 76, AASUD = 24). (d) IPEDs, image- and performance-enhancing drugs. (e) Without doctor's prescription. (f) N = 102 (no AASUD = 77, AASUD = 25). (g) "Fairly" and "very" satisfied combined. (h) N = 98 (no AASUD = 74, AASUD = 24). (i) N = 97 (no AASUD = 74, AASUD = 23); # continuous same dose versus "blast and cruise"; ¥ continuous same dose versus "cycling." *Significant at p < 0.05. **Significant at p < 0.01.

Notes to the AASUD questionnaire: All items were formulated to accommodate the specific features of AAS use. Item 3 was rephrased compared to previous questionnaires measuring AAS dependence to match the characteristics of AAS use. The fourth of the eleven diagnostic criteria in DSM-5 was unsuitable for use with continuous AAS users as it measures the desire to use again; therefore, item 4b was added to the questionnaire. For item 8, situations were chosen that may plausibly result from the mental effects of AAS.
Differences between Participants without and with AASUD
The second and third columns of Table 1 show the characteristics of the subgroups without (N = 78) and with (N = 25) moderate-severe AASUD. Compared to those without moderate-severe AASUD, participants with moderate-severe AASUD spent less time (in minutes a week) on training (p = 0.035) and less often competed as an athlete (currently or in the past) (p = 0.015). AAS consumers with and without moderate-severe AASUD in the last 12 months also significantly differed in their duration of AAS use in weeks (p = 0.019) and in their AAS dose in mg/week (p = 0.034). In the last 12 months, both oral and injectable AAS were taken for more weeks by AAS consumers with moderate-severe AASUD than by those without (p = 0.003 and p = 0.039, respectively). Furthermore, AAS consumers with moderate-severe AASUD more frequently (p = 0.038) used (non-prescribed) insulin and/or DNP, two IPEDs with a high-risk profile [28,29]. AAS consumers with moderate-severe AASUD also reported having experienced significantly more side effects in the last 12 months than those without (p = 0.017). The pattern of AAS use was not associated with moderate-severe AASUD (p = 0.120). The higher prevalence of a "blast-and-cruise" pattern of AAS use compared with a "cycling" pattern (60.0%, n = 15 vs. 32.0%, n = 8) among those with moderate-severe AASUD was not significant (p = 0.055 in the post hoc test).
Psychoactive substance use in the last 12 months and a current or past mental disorder were reported by, respectively, 44 (44.0%) and 40 (40.8%) participants. About three-quarters of participants (76.0%) were fairly or very satisfied with their physical appearance. No differences were reported by AAS consumers with and without moderate-severe AASUD in the use of psychoactive substances or alcohol and in mental health parameters. In online supplementary Tables S3 and S4, additional information can be found on participants' use of substances and mental health status.
Logistic Regression Analyses to Identify the Correlates of AASUD
Variables were tested to examine their association with moderate-severe AASUD. Online supplementary Table S5 shows the correlation matrix of these variables. AAS dose in the last 12 months, duration of AAS use, AAS side effects, and lifetime mental disorder were significantly correlated with moderate-severe AASUD and/or the total number of AASUD criteria. Logistic regression analyses were then performed to identify the variables that were independently associated with the diagnosis of moderate-severe AASUD. The variables linked to AAS use (AAS dose in the last 12 months in mg/week and duration of AAS use in weeks) showed considerable overlap with each other and with AAS side effects. To prevent collinearity problems, we excluded AAS side effects from the logistic models and tested separate logistic regression models for the relationship of the average AAS dose in the last 12 months and of AAS use duration with moderate-severe AASUD. Calculated power for these analyses at this sample size was 0.826. Age and the lifetime presence of a mental disorder were included in the logistic models. Results are shown in Tables 3 and 4. The first logistic model (Table 3), with a total explained variance of 10.0%, showed that duration of AAS use over the last 12 months, but not age or mental health disorder, independently predicted moderate-severe AASUD, with the odds of moderate-severe AASUD increasing by 3.4% for every additional week of AAS use (p < 0.05). The second logistic regression model (Table 4), with a total explained variance of 13.8%, showed that the AAS dose, but not age or mental health disorder, independently predicted moderate-severe AASUD, with the odds increasing by 0.1% for every additional 10 mg of AAS per week (p < 0.05).
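As a reading aid, the per-unit percentage changes quoted above map onto logistic regression output as follows: exp(beta) is the odds ratio per one-unit increase of the predictor, and 100 * (exp(beta) - 1) is the percent change in the odds. The sketch below back-calculates the coefficients from the reported percentages (3.4% per week of use; 0.1% per 10 mg/week of dose), so the numbers are illustrative rather than the fitted model itself.

```python
# Converting logistic regression coefficients to the per-unit percentage
# changes quoted in the text; coefficients are back-calculated from the
# reported percentages and are illustrative only.
import math

beta_weeks = math.log(1.034)        # duration of AAS use, per week
beta_dose = math.log(1.001) / 10.0  # AAS dose, per mg/week

for name, beta, unit in [("duration", beta_weeks, "per week"),
                         ("dose", beta_dose * 10, "per 10 mg/week")]:
    odds_ratio = math.exp(beta)
    print(f"{name}: OR = {odds_ratio:.3f} {unit}, "
          f"{100 * (odds_ratio - 1):+.1f}% change in odds")
```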
Additional Analyses to Test the Stability of the Associations
Additional explorative analyses showed that the average dose used during AAS consumption was not associated with moderate-severe AASUD (p > 0.05; online suppl. Table S6A). In a model with the average dose used during AAS consumption, age, and lifetime mental disorder, the duration of AAS use remained positively associated with moderate-severe AASUD (odds ratio = 1.032, p < 0.05). The explained variance of this model was 14.9% (online suppl. Table S6B). The strength and significance of the association (odds ratio = 1.032, p < 0.05) were not affected when continuous AAS consumers on non-supraphysiological doses (<200 mg/week in the last 12 months, n = 4) were excluded from the model. To test whether the exclusion of 11 participants with less than 70% of the questions answered may have influenced the results, this group was compared with the included participants (N = 103) on demographic variables, supplement use, age of first AAS use, and duration of AAS use in the last 12 months (missing values precluded further comparisons); no group differences were found (results not shown).
Discussion
The main findings of the current study are that 24.3% of our study sample of male gym goers meet (self-reported) criteria for a moderate-severe AASUD (≥4 DSM-5 criteria) and that the duration of AAS use in the last 12 months and the AAS dose in mg/week in the last 12 months are independently associated with moderate-severe AASUD. Every additional week of AAS use is associated with a 3.4% increase in the odds of moderate-severe AASUD, and every 10 mg increase of the weekly AAS dose is associated with a 0.1% increase in those odds. The association between duration of AAS use and moderate-severe AASUD was independent of the average (supraphysiological) dose used during AAS consumption.
The prevalence of moderate-severe AASUD of 24% is in line with earlier results indicating that about 30% of all AAS consumers develop AAS dependence, based on the DSM-3-R/DSM-4 criteria for dependence (≥3 of 7 criteria) [16], and contrasts with some previous reviews stating that the dependence liability of AAS is probably low [30]. A recent review, including data from 10 studies (total N = 1,247 AAS consumers), found a mean prevalence of AAS dependence across all studies of 32.5% (95% CI: 25.4-39.7), with a median of 29.5% [15]. The relatively lower prevalence of moderate-severe AASUD in the current study is mainly caused by the low prevalence of moderate-severe AASUD in the group of "cycling" AAS consumers (16.0%, n = 8), whereas moderate-severe AASUD is numerically overrepresented in the group of continuous "blast-and-cruise" AAS consumers (34.1%, n = 15); within the AASUD group, the "cycling" and "blast-and-cruise" patterns accounted for 32.0% and 60.0%, respectively. This calls for more attention to the potential role of chronic "blast-and-cruise" patterns of AAS use in the etiology of AASUD, which is especially important given the recent increase in this pattern of use. In 2007, in an online survey among 1,955 male AAS consumers, 5.0% indicated that they had taken AAS continuously for the entire year [26], whereas little more than a decade later, a similar study found that nearly half (47.32%) of 2,385 AAS consumers reported a continuous "blast-and-cruise" use pattern [22]. This matters because, with long-term continuous AAS use, it may become increasingly difficult to stop the use of AAS, since the consumer will be confronted with the consequences of a prolonged disruption of endogenous testosterone production [11,21].
To the best of our knowledge, this is the first study in which (slightly adjusted) DSM-5 criteria were used to assess the presence of AASUD. It should be noted, however, that AAS differ from traditional addictive substances, like cocaine and heroin, in that they produce little immediate reward or acute intoxication. The high rate of positive responses to the criterion "spending a lot of time planning anabolic steroid use and/or obtaining anabolic steroids" may indicate the presence of a deliberate and regulated use pattern rather than a loss of control over use. This pattern, unlike psychoactive substance use, which is driven more by impulses, is regarded as a typical feature of AAS use [26] and is not in itself necessarily maladaptive or a cause of distress. This finding is also in line with a recent study showing that time spent on activities related to the use of AAS was the symptom with the smallest effect in a DSM-4 symptom network of AAS dependence, whereas continuing AAS use despite physical and/or mental problems was the most central symptom [25]. Psychometric assessments like these are, therefore, essential to evaluate the criteria for AASUD, including the identification of the most central criteria that relate to the most typical and relevant symptoms of AASUD [25,31].
Despite these definitional issues, animal studies suggest that AAS can induce AAS dependence [32], that AAS are critical modulators of executive functions [33], and that AAS impair behavioral flexibility and increase compulsivity [34], findings that were corroborated in recent human studies [25,35]. However, unlike many addictive substances, AAS do not acutely stimulate dopamine release in the nucleus accumbens [36]. One of the alternative explanations given for chronic and compulsive AAS consumption in conjunction with high-intensity bodybuilding is social physique anxiety and a negative perception of physical appearance [37]. However, in the current study, anxiety and dissatisfaction with one's physical appearance were infrequent among AAS consumers both with and without moderate-severe AASUD and are thus unlikely to explain the compulsive consumption of AAS.
The current study has both strengths and limitations. The main strengths of the study are the detailed measurement of the different patterns of AAS use and the use of a broad range of variables as potential confounders for the development of moderate-severe AASUD. The main limitations are the unstructured sampling frame and the exclusive use of self-reported information and thus the risk of overreporting of AASUD criteria and mental disorders. However, the prevalence data in the current study are generally consistent with similar data from other studies using different recruitment and assessment procedures.
Furthermore, it is worth noting that the exclusion from the study of 16 AAS consumers who had not used AAS in the past 12 months may have influenced some of the observed differences between AAS consumers with and without moderate-severe AASUD. Excluding records, instead of imputing missing values, for the participants who answered less than 70% of the questions (n = 11) may also have affected these differences by decreasing statistical power.
In the present study, AAS doses were estimated by adding up the self-reported doses in milligrams of different types of oral and injectable AAS. It should be stressed that this procedure does not acknowledge differences between AAS types in negative and positive effects and does not consider that the declared and the actual content or concentration of AAS from the black market often differ considerably [11,27].
A potential reason for concern is the somewhat "circular" nature of the associations of moderate-severe AASUD with duration of AAS use and AAS dose, as the definition of moderate-severe AASUD includes criteria that relate to AAS use duration and dose. However, the key elements in these criteria are loss of control, craving, and AAS use to avert withdrawal symptoms, and neither duration of use nor dose is in itself a criterion that determines AASUD. As pointed out earlier, the cross-sectional design of the present study prevents inferences about causality, and future studies with a prospective design are needed to explore the hypotheses brought forward in this paper.
Conclusion
Taken together, we found that the duration of AAS use (in weeks in the past 12 months) and AAS dose (in mg/week in the past 12 months) are associated with the presence of moderate-severe AASUD and that these effects remained significant after controlling for age and lifetime mental disorders. We conclude that AASUD could be a frequent complication in chronic, high-dose AAS consumers. Prospective research is needed to follow mental and physical changes over time in chronic AAS consumers and to identify those AAS consumers who make the transition from intermittent to chronic AAS use and, possibly, (moderate-severe) AASUD. Additionally, more research is required to aid chronic AAS consumers, as the effectiveness of treatments for (moderate-severe) AASUD is at present undetermined and there is only scarce evidence for the benefits of interventions to reduce or stop the consumption of AAS [38]. Furthermore, to aid those with increased health risks due to chronic AAS use, healthcare should accommodate the specific needs of chronic AAS consumers [39]. However, at present, expertise and clinical guidelines in this domain are largely absent [40].
"year": 2023,
"sha1": "0a749c064e6b3259f8c04d8ba624e66a0e33669f",
"oa_license": "CCBY",
"oa_url": "https://www.karger.com/Article/Pdf/528256",
"oa_status": "HYBRID",
"pdf_src": "Karger",
"pdf_hash": "7a52027c1ccedb1f26dcc89583222a1c4c658a90",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Glucose-6 Phosphate Dehydrogenase Deficiency in terms of hemolysis indicators and management
Background: Glucose-6 Phosphate Dehydrogenase (G6PD) deficiency is one of the commonest inherited enzyme abnormalities in humans, caused by many mutations that reduce the stability of the enzyme and its level as red cells age. Objectives: To determine useful hematologic indicators of hemolysis, to observe whether G6PD enzyme deficiency can be detected early (if at all), and to review the available therapeutic measures. Patients and Methods: 123 patients with G6PD deficiency and hemolysis after exposure to fava beans who visited AL-Elwiya Pediatric Teaching Hospital from 1 February 2016 to 31 May 2016 were entered into this study retrospectively. Laboratory indicators of hemolysis were recorded, and supportive management measures were also considered. Results: We found that hematocrit levels of 10-20% and normochromic normocytic anemia were the most frequent findings on presentation, while reticulocyte counts in the range of 15.1-20% were the most common, with lower counts in the female group. Hyperbilirubinemia was seen, while no patient had abnormal renal function tests. About three quarters (76.4%) of the involved cases had glucose-6-phosphate dehydrogenase (G6PD) deficiency on enzyme assay. Only 4 patients required no blood transfusion, 102 patients (82.9%) needed transfusion once, and the remaining 17 (13.8%) had more than one blood transfusion. Most cases (91.1%) recovered within the first 3 days, and all cases had recovered by the fourth day of admission. Conclusion: Hemoglobin and blood morphology, together with hyperbilirubinemia, were useful hematologic indicators of the hemolytic process; blood transfusion was the most used therapeutic measure, and recovery was expected within 2-3 days.
Introduction:
Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a common genetic abnormality known to predispose to acute hemolytic anemia (AHA), which can be triggered by certain drugs or infection. However, the commonest trigger is fava bean (Vicia faba) ingestion, causing AHA (favism), which may be life-threatening, especially in children. (1) A clinical manifestation of G6PD deficiency closely linked to drug-induced hemolysis is the hemolytic anemia resulting from ingestion of the fava bean. (2,3) Patients with favism are always G6PD deficient, but not all G6PD-deficient individuals develop hemolysis when they ingest fava beans. It is assumed that other factors are involved, such as genetics and the metabolism of the active ingredients in the beans, which cause oxidative damage in red blood cells. (3) During the second half of the twentieth century, severe hemolytic anemia in individuals ingesting fava beans, more commonly in children, was reported. (4)

In the Mediterranean area, the major allele among the population is the "Mediterranean" variant, whereas in other countries, such as Japan, a different variant with a different type of mutation is prevalent, called the "Japan" variant. The Mediterranean variant, found in Southern Europe, the Middle East and India, is characterized by very low enzyme activity (0-10%) in RBCs as measured by spectrophotometric and potentiometric methods. (5) Deficiency of glucose-6-phosphate dehydrogenase (G6PD), an X-linked recessive enzymatic defect in the hexose monophosphate shunt that protects cells from damage by oxidative stress, is a common haematological problem worldwide. (6) At a public health level, preventive measures may be useful; subjects with G6PD deficiency may not even know what fava beans are until they collapse with favism. (7) Nevertheless, ingestion of fava beans, certain drugs, infections, and metabolic conditions can cause hemolysis, and inadequate management of G6PD-deficient individuals who develop acute hemolytic anemia can lead to permanent neurologic damage or death. (8,9) A number of factors can precipitate hemolysis in G6PD-deficient subjects, such as certain drugs, infections, and some metabolic conditions like diabetic ketoacidosis. (10-17) Clinical signs and symptoms of hemolysis typically arise within 24 to 72 hours of drug dosing, and anemia worsens until about day 7. This makes it difficult for the health practitioner to identify a hemolytic crisis in patients who undergo outpatient or short hospital stay (less than 24 hours) procedures. Therefore, the practitioner should inform the high-risk patient and his or her caretaker to look for signs and symptoms of a hemolytic crisis (headache, dyspnea, fatigue, lumbar/substernal pain, jaundice, scleral icterus, and dark urine). (4) Treatment consists of discontinuation of the offending agent and maintenance of urine output by infusion of crystalloid solutions and diuretics such as mannitol and/or furosemide, with blood transfusion when needed. (4,18) Treatment of hyperbilirubinemia in G6PD-deficient neonates, when indicated, is with phototherapy and exchange transfusion; prophylactic oral phenobarbital does not decrease the need for phototherapy or exchange transfusion in G6PD-deficient neonates. (19,20)
Aim of the study:
To determine useful hematologic indicators of hemolysis, to observe whether G6PD enzyme deficiency can be detected early (if at all), and to review the available therapeutic measures.
Patients and Methods:
Cases with a history of fava bean ingestion reaching the emergency room of AL-Elwiya Pediatric Teaching Hospital during the period from 1 February 2016 to 31 May 2016 were targeted retrospectively in this study, and their total number was 238 patients. The important investigations tracked from medical records included: complete blood count with blood film and reticulocyte count, direct Coombs test, renal function tests, serum bilirubin (total and fractionated), general urine examination (with urobilinogen), and G6PD enzyme assay screening test. Supportive measures in the form of intravenous fluids with or without blood transfusion were recorded from hospital files. Only fully recovered patients who were discharged in good condition were allowed to enter this study. Any patient file that did not contain any of the above information was excluded.
Accordingly, only 123 of the 238 patients were involved: 101 males and 22 females, all between 1 and 13 years old at the time of diagnosis. The z-test between two proportions was used for statistical evaluations.
Results:
The most frequent range of hematocrit levels on presentation at the hospital was 10-20%, while levels below 10% were next in frequency, as shown in Figure 1.
Figure 1. The range of hematocrit levels during the patients' first presentation at the emergency department.
Based on simple calculations from Figure 1, 86 male patients (69.9% of the total 123) had hematocrit levels ≤20%, compared with 17 females (13.9%); z-test between two proportions = 3.54, P-value = 0.022.

B. Type of anemia: Most patients had normochromic normocytic anemia (118 patients; 95.9%), and only 5 patients (4.1%) had hypochromic microcytic red blood cells (1 male and 4 females). Nevertheless, 100 males (81.3% of the total 123) had mean cell volume (MCV) levels >70 femtoliters (fl), compared with 18 females (14.6%); 70 fl was the cut-off for microcytic red blood cells, while a mean cell hemoglobin (MCH) level below 27 picograms (pg) was considered to indicate hypochromic red blood cells.

C. Reticulocyte count and RPI (reticulocyte production index): We found that reticulocyte counts of 15.1-20% were generally the most common, but counts were lower within the female group, as illustrated in Table 1, with a similar trend for the reticulocyte production index (RPI) (21). An RPI value of 2-3 was the most frequent in both sex groups, accounting for 65% of all patients (49.6% males and 15.5% females); corrected counts assume a normal hemoglobin (Hb) of 12 g/dl, and a worked example of these index calculations is sketched below.
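The sketch below is a hedged worked example of the indices above: the reticulocyte correction follows the footnote (normal Hb = 12 g/dl), while the maturation factors, and keying them to hemoglobin rather than hematocrit, are a common textbook convention assumed here rather than taken from this paper; the MCV and MCH cut-offs are those stated in the text.

```python
# Hematologic index calculations; maturation-factor binning is an assumption.

def rpi(retic_percent: float, hb: float, normal_hb: float = 12.0) -> float:
    """Reticulocyte production index from retic % and hemoglobin (g/dl)."""
    corrected = retic_percent * (hb / normal_hb)
    # Reticulocyte maturation time lengthens as anemia deepens (assumed bins).
    if hb >= 10:
        maturation = 1.5
    elif hb >= 7:
        maturation = 2.0
    else:
        maturation = 2.5
    return corrected / maturation

def red_cell_morphology(mcv_fl: float, mch_pg: float) -> str:
    # Cut-offs as used in the text: MCV <= 70 fl microcytic, MCH < 27 pg hypochromic.
    size = "microcytic" if mcv_fl <= 70 else "normocytic"
    chromia = "hypochromic" if mch_pg < 27 else "normochromic"
    return f"{chromia} {size}"

# Example: retic 17%, Hb 6 g/dl -> corrected 8.5%, RPI 3.4
print(round(rpi(17, 6), 1))
print(red_cell_morphology(85, 29))  # -> normochromic normocytic
```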
D. The shape of the erythrocytes: Blister cells appeared in most blood films in both sexes, as shown in Figure 3.
Figure 3. Distribution of the various shapes of red blood cells (RBCs) in favism.
E. Serum bilirubin level: Hyperbilirubinemia was found in all patients, a usual consequence of hemolysis, with the highest readings at 25-30 mmol/L, as seen in Table 2.

F. Blood urea nitrogen and serum creatinine: Renal function (blood urea nitrogen and serum creatinine) was normal in all patients (100%), so renal function was not significantly impaired.

G. G6PD enzyme assay: Deficiency was screened through decoloration of methylene blue. Around 94 patients (76.4%) of the total cases had a deficient G6PD enzyme; most of them were males (90 patients, 73.4%) compared with 4 female patients (3.3%), as is evident in Figure 4; z-test between two proportions = 2.56, P-value = 0.02.
Figure 4. G6PD enzyme assay results.
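For reference, the z-test between two proportions used above can be sketched as below (pooled standard error, two-sided p). Which pairs of proportions were entered to obtain the reported z values is not fully specified in the text, so the example call, which compares the deficient fractions among males (90/101) and females (4/22), is only one plausible reading.

```python
# Standard two-proportion z-test with a pooled standard error.
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p

print(two_proportion_z(90, 101, 4, 22))  # deficient males vs. females (illustrative)
```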
Treatment: Nearly all patients required blood transfusion; only 4 were considered not to need blood in the opinion of the treating physician, and some patients required more than one blood transfusion, as shown in Table 3. This might be due to continuing hemolysis even in patients with high initial hemoglobin levels.
Recovery: Recovery was defined as regaining a near-normal hematocrit level (around 30%) for at least 12-24 hours. As stated in Table 4 below, all cases (100%) recovered from the disease within the first 4 days, and 91.1% recovered within the first 3 days.
Discussion:
Hematocrit level was ≤20% in the majority of cases; this level may be nearly similar to the results of a study from Mosul (22), but lower than that mentioned by Segel GB (23). This may suggest that hematocrit levels fall lower in Iraqi patients than in Western populations. In 118 cases (95.9%), the anemia was of the normochromic normocytic type, while the rest (5 cases) had hypochromic microcytic anemia. Nearly similar results were reported by Sawsan S. Abbas (24), Nelson DA (25), and Erbagcl AB (26). The reticulocyte count was generally high; more precisely, the RPI (reticulocyte production index) was between 2 and 3 in most of the involved cases. This indicates that hemolysis exceeded the ability of the bone marrow to compensate, so the true reticulocyte count may be low for the first 3-4 days; this is consistent with Hassan MK (27) and Erbagcl AB (26).
Blister erythrocytes were seen in the blood film of most cases, followed by fragmented cells, with spherocytes in the last rank. A nearly similar picture was reported by Luzzatto L (28), Yilmaz N (29) and Frank JE (30). Hyperbilirubinemia was found in all patients, which is considered a usual result of hemolysis (22). The peak level of total serum bilirubin was between 25 and 100 mmol/L (>1.5-6 mg/dl). This agrees with many other studies, such as those by Omar SK (22), Sawsan S. Abbas (24), and Beutler E (31). Renal function (blood urea nitrogen and serum creatinine) was normal in all patients (100%), and clinically they were well. This is consistent with Belsy MA (32) and Ibid S (33). The G6PD enzyme assay was done early on the first day of admission and showed early detection of the deficient enzyme in 94 cases (76.4%). This is consistent with other authors such as Erbagcl AB (26), Abbas SS (34), Mehta A (35), and S ML (36). Since most patients had moderate to severe anemia, and a continuing hemolytic process was suspected even with relatively mild anemia, 119 patients (96.7%) were given blood transfusion as a major constituent of therapy. This relatively aggressive approach might point to the severity of the variant(s) existing in our locality; only 4 cases (3.3%) did not need blood transfusion. Similar results were also reported by Omar SK (22), Erbagcl AB (26) and S ML (36). Finally, all patients recovered well. Recovery in most of them (91.1%) occurred within the first 3 days. This period was also reported by Frank JE (30), Ibid S (33), and Hilmi FA (37).
Conclusion:
Hemoglobin and blood morphology, together with hyperbilirubinemia, were useful hematologic indicators of the hemolytic process, but renal function was not affected, especially in patients with early presentation followed by early treatment. A significant number of patients showed an early reduction of the G6PD enzyme within the course of the disease. Blood transfusion was the most used therapeutic measure, and recovery was expected within 2-3 days.
"year": 2017,
"sha1": "82cfe254faf691f542d45a4a351add67331325e5",
"oa_license": "CCBYNC",
"oa_url": "https://iqjmc.uobaghdad.edu.iq/index.php/19JFacMedBaghdad36/article/download/120/80",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "82cfe254faf691f542d45a4a351add67331325e5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Regulation of mTORC1 by lysosomal calcium and calmodulin
Blockade of lysosomal calcium release due to lysosomal lipid accumulation has been shown to inhibit mTORC1 signaling. However, the mechanism by which lysosomal calcium regulates mTORC1 has remained undefined. Herein we report that proper lysosomal calcium release through the calcium channel TRPML1 is required for mTORC1 activation. TRPML1 depletion inhibits mTORC1 activity, while overexpression or pharmacologic activation of TRPML1 has the opposite effect. Lysosomal calcium activates mTORC1 by inducing association of calmodulin (CaM) with mTOR. Blocking the interaction between mTOR and CaM by antagonists of CaM significantly inhibits mTORC1 activity. Moreover, CaM is capable of stimulating the kinase activity of mTORC1 in a calcium-dependent manner in vitro. These results reveal that mTOR is a new type of CaM-dependent kinase, and TRPML1, lysosomal calcium and CaM play essential regulatory roles in the mTORC1 signaling pathway. DOI: http://dx.doi.org/10.7554/eLife.19360.001
Introduction
Mechanistic target of rapamycin (mTOR) plays an essential role in sensing a myriad of environmental cues, including nutrients and growth factor stimulation, to regulate cell growth and proliferation (Wullschleger et al., 2006). mTOR independently associates with raptor or rictor to form two distinct complexes, mTORC1 and mTORC2, respectively. The two complexes share several common subunits, including the catalytic mTOR subunit, mLST8, DEPTOR, and the Tti1/Tel2 complex (Laplante and Sabatini, 2012). Among the remaining components, PRAS40 is specific to mTORC1, whereas rictor, mSin1 and protor1/2 are unique to mTORC2 (Laplante and Sabatini, 2012). These two complexes differ in their sensitivity to rapamycin, their upstream signals and their downstream outputs (Laplante and Sabatini, 2012). The mTORC1 complex integrates different extracellular and intracellular signal inputs, such as growth factors, amino acids, stress and energy status, to regulate cellular processes such as protein and lipid synthesis and autophagy, by phosphorylating and activating p70 S6 kinase (p70S6K) (Chung et al., 1992; Price et al., 1992) and eukaryotic translation initiation factor 4E-binding protein 1 (4E-BP1) (Lin et al., 1995; von Manteuffel et al., 1996). In contrast, mTORC2 is involved in Akt phosphorylation and regulation of the cellular cytoskeleton (Bhaskar and Hay, 2007). Activation of mTORC1 by amino acids requires the translocation of mTORC1 from the cytosol to the surface of lysosomes, which is dependent on the Rag GTPase heterodimers RagA/B and RagC/D (Kim et al., 2008; Sancak et al., 2008).
The second messenger calcium has been shown to play an important role in the regulation of mTOR signaling. Earlier hints that calcium might be involved in mTOR signaling came from observations that calcium was required for the activation of p70S6K (Conus et al., 1998;Graves et al., 1997;Hannan et al., 2003). But the underlying mechanism was attributed to upstream regulators such as PI3K or isoforms of PKC. More definitive roles of calcium and its signaling mediator calmodulin (CaM) in mTORC1 signaling were demonstrated in the context of amino acid activation of the pathway (Gulati et al., 2008). It was shown that the phosphorylation of S6K1 in response to amino acids was inhibited by the cell permeable calcium chelator BAPTA-AM while thapsigargin, which releases intracellular calcium, activated mTORC1 activity. Moreover, it was shown that the activation of mTORC1 by amino acids was inhibited by antagonists of CaM or its knockdown using siRNA, suggesting that CaM is required for mTORC1 activity. The underlying mechanism by which calcium and CaM regulate mTORC1 was attributed to the binding of calcium-activated CaM to the hVps34, leading to the activation of its kinase activity. While the sensitivity of mTORC1 to BAPTA-AM and CaM antagonists have been reproducibly observed, ensuing studies have cast some doubt on the notion that hVps34 is a key mediator of calcium and CaM in the regulation of mTORC1 in similar and other cellular systems (Mercan et al., 2013;Yan et al., 2009).
In a previous study, we found that small molecules known to induce the Niemann-Pick Disease Type C (NPC) phenotype inhibited mTOR. Independently, it has also been reported that NPC cells show significant defects in lysosomal calcium homeostasis (Lloyd-Evans et al., 2008; Shen et al., 2012). Cells that carry mutations in, or are deficient in, mucolipin transient receptor potential (TRP) channel 1 (TRPML1) display altered Ca2+ homoeostasis similar to that seen in NPC cells (Dong et al., 2010; Shen et al., 2011). Cells treated with chemical NPC inducers exhibited reduced TRPML1-mediated lysosomal Ca2+ release in response to a TRPML1 agonist, indicating dysfunction of this calcium channel. Furthermore, it has been shown that the TRPML1 homolog in the fly is required for TORC1 activation and for fusion of amphisomes with lysosomes, and that the inhibition of TORC1 can be rescued by feeding fly larvae a high-protein diet (Wong et al., 2012; Venkatachalam et al., 2013). TORC1 also exerts reciprocal control over TRPML function, establishing a connection between TRPML and the TORC1 signaling pathway in fly cells. Whether TRPML1 regulates the mTORC1 signaling pathway in mammalian cells has remained unknown. Putting these findings together, we hypothesized that a defect in lysosomal calcium homeostasis in NPC cells might be responsible for the observed inhibition of the mTOR signaling pathway.
We validated our hypothesis by demonstrating that depletion of TRPML1 inhibits mTORC1 while overexpression or pharmacologic activation of TRPML1 activates mTORC1. We traced the likely site of regulation of mTORC1 pathway by calcium and CaM by determining the sensitivity of mTORC1 activity to BAPTA-AM and CaM antagonists in response to various upstream activators of the kinase complex and narrowed it to mTORC1 itself. We found that CaM interacted with mTORC1 and activated its kinase activity. Together, these findings shed significant new light on mTORC1 signaling pathway and offer a unifying mechanism that accounts for most, if not all, earlier observations implicating calcium and CaM in the regulation of mTORC1 by different upstream activators in distinct cellular context.
TRPML1 is required for the activation of mTORC1
To determine whether TRPML1 is required for mTORC1 signaling, HEK293T cells were transduced with lentiviral shRNA targeting human TRPML1 (Sh1 and Sh2) or a scrambled shRNA (Scr). Owing to the lack of reliable hTRPML1 antibodies, the knockdown efficiency was assessed by RT-qPCR (Figure 1a, bottom panel) as well as indirectly by the expression level of ectopically expressed EGFP-TRPML1. The activity of mTORC1, as judged by the phosphorylation of S6K, was significantly inhibited upon TRPML1 knockdown, while the phosphorylation of Akt (T308) was not affected (Figure 1a). To determine whether the mTOR inhibition caused by TRPML1 knockdown was due to blockade of lysosomal calcium release, we performed a rescue experiment in Human Umbilical Vein Endothelial Cells (HUVEC) (Figure 1b) and HEK293T cells (Figure 1-figure supplement 1a) using thapsigargin, a sarco/endoplasmic reticulum Ca2+-ATPase inhibitor that increases cytosolic calcium concentrations (Lytton et al., 1991). Indeed, the inhibition of mTORC1 activity by TRPML1 knockdown was rescued by thapsigargin, suggesting that mTORC1 inhibition was due, in large part, to the lack of lysosomal calcium release. Moreover, knocking down TRPML1 also attenuated the activation of mTORC1 by insulin (Figure 1c), leucine (Figure 1d), as well as by overexpression of constitutively active RagA or RagC (Figure 1e).

Furthermore, we determined the phosphorylation of S6K in normal human fibroblasts (TRPML1 +/+) and fibroblasts from a mucolipidosis IV patient (TRPML1 -/-). Compared with TRPML1 +/+ human fibroblasts, TRPML1 -/- cells showed decreased phosphorylation of S6K (Figure 1-figure supplement 1b). Interestingly, this inhibition was only partially reversed by leucine compared with that in wild-type cells (Figure 1-figure supplement 1c), whereas treatment with thapsigargin fully restored the phosphorylation of S6K (Figure 1-figure supplement 1b), suggesting that in mammalian cells the decrease in mTORC1 activity in TRPML1 mutant cells is not only due to the incomplete autophagy that has been reported in Drosophila (Wong et al., 2012). In addition, knockdown of other lysosomal channels, such as TPC2 and P2X4, did not significantly inhibit mTORC1 signaling (Figure 1-figure supplement 1d), indicating that the decreased mTOR activity upon TRPML1 knockdown was not due to dysregulation of the structure of the endolysosomal system, and that, as one of the lysosomal calcium channels, TRPML1 may play a more dominant role in the regulation of mTORC1 signaling.

Figure 1. TRPML1 is required for the full activation of mTORC1. (a) HEK293T cells were transduced with lentiviral scrambled shRNA (Scr) or shRNA targeting human TRPML1 (Sh1 and Sh2). To assess the knockdown efficiency, a fraction of transduced cells were transfected with EGFP-TRPML1. After 24 hr, transfected or untransfected cells were lysed and subjected to immunoblotting. Untransfected cells were used to detect p-S6K, S6K and p-Akt, and transfected cells were used to detect GFP and GAPDH. RT-qPCR was also performed to evaluate the knockdown efficiency (bottom panel) (mean ± s.d., n = 2 independent experiments). (b) Scrambled shRNA- or TRPML1 shRNA-transduced HUVEC were treated with vehicle control or thapsigargin (5 µM) for an additional 2 hr. Cells were lysed and subjected to immunoblotting. Knockdown efficiency was assessed by RT-qPCR (right panel). The bottom panel of plots shows the percentage of p-S6K and p-4EBP1 levels compared with scrambled shRNA-transduced, vehicle-treated HUVEC, normalized to the GAPDH loading control (mean ± s.d., n = 2 independent experiments). (c and d) Scrambled shRNA- or TRPML1 shRNA-transduced HEK293T cells were deprived of serum for 24 hr (c) or of leucine for 3 hr (d) and, where indicated, were stimulated with 600 nM insulin or 52 µg/ml leucine for 10 min. Simultaneously, another fraction of scrambled shRNA- or TRPML1 shRNA-transduced cells were transfected with EGFP-TRPML1 for 24 hr. Cells were lysed and subjected to immunoblotting. The plots show the percentage of p-S6K levels compared with scrambled shRNA-transduced, serum-starved (c) or leucine-starved (d) HEK293T cells, normalized to the total S6K control (mean ± s.d., n = 2 independent experiments each). (e) Scrambled shRNA- or TRPML1 shRNA-transduced HEK293T cells were transfected with RagA Q66A or RagC S75L for 24 hr. Cells were lysed and subjected to immunoblotting. The plot shows the percentage of p-S6K level compared with scrambled shRNA-transduced, empty vector-transfected HEK293T cells, normalized to the total S6K control (mean ± s.d., n = 3 independent experiments). DOI: 10.7554/eLife.19360.002
Having shown that TRPML1-mediated lysosomal calcium release is necessary for mTORC1 activity, we then turned to the reciprocal question of whether an increase in lysosomal calcium release through TRPML1 could stimulate mTORC1. Thus, HEK293T cells were transfected with expression plasmids for EGFP-TRPML1 and its non-conducting pore mutant (D471K/D472K), EGFP-TRPML1 (KK), respectively. The phosphorylation of S6K was slightly but significantly increased by overexpression of wild-type TRPML1 but not of the non-conducting pore mutant TRPML1 (KK) (Figure 2a). Next, we treated HEK293T cells with the TRPML1 agonist MLSA1 (Shen et al., 2012; Feng et al., 2014). The phosphorylation of S6K was increased by MLSA1 in a dose-dependent manner (Figure 2b). In contrast, MLSA1 failed to increase the phosphorylation of S6K in cells transduced with lentiviral TRPML1 shRNA or pretreated with bafilomycin A1 or Glycyl-L-phenylalanine 2-naphthylamide (GPN), suggesting that the increase in S6K phosphorylation induced by MLSA1 was mediated by calcium released through TRPML1 (Figure 2c,d). Upon amino acid stimulation, mTOR translocated from the cytosol to the lysosome, colocalizing with EGFP-TRPML1 (Figure 2-figure supplement 1), indicating that the activation of TRPML1 acted independently of the translocation of mTORC1 induced by amino acids. Upregulated TRPML1 has been reported to promote autophagy (Medina et al., 2015; Wong et al., 2012). To determine whether the activation of mTORC1 in response to TRPML1 overexpression was due to upregulated autophagy, we overexpressed constitutively active Rab7A (Q67L) and dominant-negative Rab7A (T22N) in HEK293T cells (Jager et al., 2004; Hyttinen et al., 2013). As shown in Figure 2-figure supplement 2, overexpression of neither constitutively active nor dominant-negative Rab7A affected mTORC1 signaling, while overexpression of TRPML1 plus MLSA1 treatment stimulated the phosphorylation of S6K, suggesting that the activation of mTORC1 by TRPML1 stimulation was not mediated through autophagy.
Calcium and CaM are required for activation of mTORC1
Both intracellular calcium and CaM have been reported to be required for mTORC1 activity (Conus et al., 1998; Graves et al., 1997; Gulati et al., 2008; Hannan et al., 2003; Mercan et al., 2013). We thus treated HEK293T cells with the cytosolic Ca2+ chelator BAPTA-AM (BAPTA) or the CaM antagonists W-7 and calmidazolium (CMDZ). In agreement with previous studies (Gulati et al., 2008; Ke et al., 2013; Graves et al., 1997; Zhou et al., 2010), we observed that BAPTA, W-7 and CMDZ inhibited phosphorylation of S6K in a dose-dependent manner, with IC50 values of 3.96 ± 1.30 µM, 21.59 ± 1.81 µM and 10.36 ± 0.59 µM, respectively (Figure 3-figure supplement 1). In comparison with S6K, phosphorylation of Akt (S473), the substrate of mTORC2, was also inhibited by CMDZ and W-7, but at much higher concentrations (EC50 values of 27.21 ± 9.82 µM and 45.91 ± 9.61 µM, respectively) than for p-S6K, while BAPTA did not show appreciable inhibition of p-Akt (S473) (Figure 3-figure supplement 1c). In addition, CMDZ and BAPTA also showed potent inhibitory effects on the mTORC1 signaling pathway in HUVEC and A549 cells (Figure 3-figure supplement 2a,b), suggesting that mTORC1 is also regulated by Ca2+/CaM in primary cells and cancer cells.
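For readers unfamiliar with how IC50/EC50 values such as these are typically obtained, the sketch below fits a four-parameter logistic (Hill) curve to dose-response data. The data points are synthetic stand-ins for the densitometry readouts, so the fitted numbers are illustrative only.

```python
# Four-parameter logistic (Hill) fit for IC50 estimation; synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ic50, slope):
    # Decreasing sigmoid: response falls from `top` toward `bottom` as dose rises.
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

dose = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])    # uM
resp = np.array([98, 95, 88, 70, 45, 22, 10, 5])  # % p-S6K vs. vehicle control

popt, _ = curve_fit(hill, dose, resp, p0=[0, 100, 5, 1])
print(f"IC50 ~ {popt[2]:.2f} uM, Hill slope {popt[3]:.2f}")
```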
Next, we investigated how quickly mTORC1 and mTORC2 responded to BAPTA or CMDZ. As shown in Figure 3a, both CMDZ and BAPTA caused appreciable inhibition of mTORC1 activity within 0.5-1 hr, as judged by the phosphorylation of S6K and 4EBP1. In contrast, the phosphorylation of Akt (T308) and of its substrate mTOR (S2448) was not significantly affected by CMDZ until 6 hr post treatment. In addition, CMDZ did not cause significant inhibition of phosphorylation of Akt (S473) until 3 hr after treatment, indicating that the response of mTORC2 to the CaM antagonist has a much slower onset than that of mTORC1 (Figure 3a, left panel). Although the onset of the effect of BAPTA on 4EBP1 phosphorylation was slightly slower than that of CMDZ, BAPTA did not significantly affect the phosphorylation of either Akt (S473, T308) or mTOR even after 6 hr (Figure 3a, right panel). These results suggest that CaM regulates both mTORC1 and mTORC2, but that the two complexes differ in their sensitivity to CaM and calcium. Interestingly, after a 6-hr treatment with BAPTA, the inhibition of phosphorylation of S6K and 4EBP1 was partially reversed, which is consistent with a previous report that, over time upon treatment with BAPTA, a gradual increase in intracellular Ca2+ is seen (Wei et al., 1998). The relatively short time required for CMDZ and BAPTA to exert their effects on mTORC1, and the faster onset of these effects compared with those on mTORC2, suggested that the inhibition of mTORC1 likely occurred independently of upstream signaling events, such as phosphorylation of Akt (T308).
Regulation of mTORC1 by cytosolic calcium and CaM occurs proximal to mTORC1 itself
To further explore the level at which Ca2+ and CaM regulate mTORC1 signaling, we determined the effects of CMDZ and BAPTA on mTORC1 activation in response to various upstream activating stimuli. Similar to previous observations (Gulati et al., 2008), we found that leucine-stimulated mTORC1 activation was inhibited by BAPTA and CMDZ (Figure 3b, Lanes 4 vs. 2 and 6 vs. 2). The activation of mTORC1 by leucine has been shown to be mediated by the small GTPases RagA/B and RagC/D (Kim et al., 2008), and overexpression of constitutively active RagA Q66L/RagC S75N can bypass leucine to activate mTORC1. We found that activation of mTORC1 by either RagA Q66L or RagC S75N remained sensitive to CMDZ (Figure 3c). Next, we determined whether activation of mTORC1 by insulin was also sensitive to CaM blockade. Although insulin strongly increased the phosphorylation of Akt (T308) (Figure 3d, Lane 2), mTORC1 activity remained sensitive to CMDZ as well as to BAPTA-AM (Figure 3d, Lanes 4 and 6). It has been reported that mTOR is directly bound to and activated by Rheb-GTP (Long et al., 2005). Thus, we used HEK293T, HUVEC and A549 cells to produce stable cell lines overexpressing constitutively active Rheb N153T, as previously described (Yan et al., 2006), and determined their sensitivity to BAPTA and CMDZ. Rheb N153T-induced phosphorylation of S6K remained sensitive to inhibition by BAPTA and CMDZ in HEK293T, HUVEC and A549 cells (Figure 3e,f, Figure 3-figure supplement 2c,d). Together, these results suggested that the site of regulation of mTORC1 by Ca2+ and CaM lies proximal to mTORC1 itself.
CaM interacts with mTOR
CaM has been previously reported to interact indirectly with mTORC1, with human vacuolar protein sorting 34 (hVps34) shown to mediate the interaction between CaM and mTORC1 in HeLa cells (Gulati et al., 2008). To our surprise, when hVps34 was knocked down in HEK293T cells, binding of CaM to mTOR was not affected (Figure 4a), and neither was the sensitivity of mTORC1 to CaM (Figure 4-figure supplement 1), ruling out hVps34 as a mediator of the CaM-mTOR interaction in HEK293T cells. These results raised the possibility that CaM may directly interact with a subunit of the mTORC1 complex, thereby regulating its kinase activity. Indeed, CaM sepharose could pull down mTOR and raptor, but not PRAS40, in a Ca2+-dependent manner (Figure 4b). The interaction between mTOR and CaM was sensitive to detergents and to the CaM antagonist W-7 (Figure 4b and Figure 4-figure supplement 2). However, Ca2+ did not affect the assembly of the mTORC1 complex (Figure 4-figure supplement 3), suggesting that one of the interactions of CaM sepharose with mTOR and raptor could be indirect. To identify the subunit in mTORC1 that interacts with CaM, we knocked down raptor and mTOR, respectively, and determined the remaining interaction between CaM and mTORC1 components (Figure 4a). Knockdown of raptor had no effect on the pulldown of mTOR by CaM sepharose. In contrast, knockdown of mTOR significantly reduced the binding of raptor as well as mTOR to CaM (Figure 4a), suggesting that the interaction between mTOR and CaM is independent of raptor.

Figure 4. (a) Cell lysates were prepared from HEK293T cells transduced with lentiviral shRNAs targeting human mTOR, raptor, hVps34 or scrambled shRNA, followed by CaM sepharose precipitation in the presence of CaCl2 (1 mM) or EGTA (5 mM). The cell lysates and precipitates were analyzed by immunoblotting to detect the indicated proteins. (b) Endogenous mTORC1 was pulled down by CaM sepharose in a Ca2+-dependent manner. HEK293T cells were lysed in CHAPS buffer, and the lysates were incubated with CaM sepharose in the presence of CaCl2 (1 mM) or EGTA (5 mM). The precipitates were analyzed by immunoblotting. (c and d) Cell lysates were prepared from HEK293T cells in CHAPS buffer, and endogenous mTORC1 was immunoprecipitated with a raptor antibody. ATP (250 µM), Torin1 (100 nM), CMDZ (8 µM), CaM (2 µM) and/or CaCl2 (1 mM) were added to the kinase reaction as indicated. Phosphorylation of 4EBP1 (c) and S6K (d) was detected by immunoblotting. The plots show the fold change of phosphorylation of 4EBP1 (c) or S6K (d) compared with the control group (first lane), normalized to the total GST-tagged protein control (mean ± s.d., n = 6 and 5 independent experiments, respectively). DOI: 10.7554/eLife.19360.010
Ca2+ and CaM activate the kinase activity of the isolated mTORC1 complex in vitro
Having shown that CaM binds to mTORC1, we asked whether CaM and Ca2+ had a direct effect on the intrinsic kinase activity of the isolated mTORC1 complex in vitro. Thus, the endogenous mTORC1 complex was immunoprecipitated with an anti-raptor antibody, and an in vitro kinase assay was performed using purified recombinant 4EBP1 as a substrate (Sarbassov et al., 2004). As shown in Figure 4c, the phosphorylation of 4EBP1 by the immunoprecipitated mTORC1 complex was significantly increased in the presence of both CaCl2 (1 mM) and CaM (2 µM), but not CaM alone, indicating that CaM activates mTORC1 kinase activity in vitro in a Ca2+-dependent manner (Figure 4c, top left and top right panels, Lanes 1-3, respectively). Importantly, the activation of mTORC1 by Ca2+/CaM was inhibited by Torin 1 (Figure 4c, top left panel, Lane 4), a TOR kinase inhibitor, and by CMDZ (Figure 4c, top right panel, Lane 4), indicating that the phosphorylation of 4EBP1 was dependent on TOR kinase activity and on CaM. Similar results were obtained from the in vitro kinase assay using purified recombinant S6K as the substrate (Figure 4d). Together, these results demonstrated that binding of CaM to mTORC1 leads to the stimulation of the kinase activity of the mTORC1 complex.
Discussion
The work described in this manuscript reveals a novel mechanism of regulation of mTORC1 by lysosomal calcium and CaM (Figure 4-figure supplement 4), shedding new light on the mTOR signaling pathway. In the current model of mTORC1 activation (Dibble and Cantley, 2015; Buerger et al., 2006; Efeyan and Sabatini, 2013; Saito et al., 2005), growth factors, energy, and other inputs signal to mTORC1 primarily through the TSC-Rheb axis; amino acids act by regulating the nucleotide state of the heterodimeric Rag GTPases and promoting the translocation of mTORC1 onto lysosomes, where it interacts with and becomes activated by lysosomally localized, GTP-bound Rheb (Sancak et al., 2008). Our results have uncovered another role of the lysosomal localization of mTORC1, i.e., to receive localized lysosomal calcium stimulation. Integrating our previous observations with the results of the present study, we propose an addition to the current model of the mTOR signaling pathway: upon translocation of mTORC1 onto the lysosome, properly released lysosomal calcium raises the local Ca2+ concentration, prompting Ca2+ binding to a local population of CaM, which in turn binds mTORC1 and stimulates the kinase activity of the mTORC1 complex.
The depletion of the TRPML1 homolog in Drosophila, TRPML, results in decreased TORC1 signaling, which was attributed to incomplete autophagy and was completely reversed by feeding fly larvae a high-protein diet (Wong et al., 2012; Venkatachalam et al., 2013). However, we showed that in TRPML1-knockdown mammalian cells or mucolipidosis IV human fibroblasts, the inhibited mTORC1 signaling was only partially reversed by leucine or by overexpression of constitutively active Rag GTPase, suggesting that the mechanisms of regulation of mTOR by Ca2+/CaM differ between mammalian and fly cells. Interestingly, thapsigargin, which increases cytosolic Ca2+, could completely restore phosphorylation of S6K in TRPML1-deficient cells to the control level. Given that increased cytosolic Ca2+ also positively regulates the Ca2+-dependent fusion of late endosomes and autophagosomes with lysosomes (Grotemeier et al., 2010; Lloyd-Evans et al., 2008; Wong et al., 2012), the rescue effect of thapsigargin might be due to the combined effects of autophagy and the direct stimulation of mTORC1 by Ca2+/CaM in mammalian cells. On the other hand, TRPML1 is significantly upregulated under amino acid starvation, when mTORC1 dissociates from the lysosomal surface and becomes inactive, indicating that mTORC1 and TRPML1 may form a reciprocal regulatory loop. In addition, it has recently been reported that under starvation, lysosomal Ca2+ release through TRPML1 activates local calcineurin, a Ca2+/CaM-dependent protein phosphatase, which dephosphorylates TFEB and promotes its nuclear translocation as well as lysosomal biogenesis (Medina et al., 2015), suggesting another local function of lysosomal Ca2+.
Our model of regulation of mTORC1 by Ca2+ and CaM differs from that proposed in a previous report (Gulati et al., 2008), even though some of the experimental observations are in agreement. Similar to previous reports (Gulati et al., 2008; Ke et al., 2013; Graves et al., 1997), we found that mTORC1 activity is sensitive to inhibition by BAPTA-AM and the CaM antagonist CMDZ (Figure 3), suggesting that both intracellular calcium and CaM are required for mTORC1 activation. However, the precise mechanism of regulation of mTORC1 by calcium and CaM is distinct in our new model. First, we demonstrated that the lysosomal pool of calcium plays a unique and critical role in mTORC1 activation in mammalian cells. In earlier studies, however, the sources of calcium were suggested to be only extracellular (Conus et al., 1998; Gulati et al., 2008) or conventional intracellular calcium stores such as the ER (Ke et al., 2013; Graves et al., 1997; Zhou et al., 2010). Second, a previous study suggested that CaM associates with the mTORC1 complex through hVps34, and that calcium and CaM activate mTORC1 via hVps34 activation (Gulati et al., 2008). In an independent study, it was shown that hVps15, but not Ca2+/CaM, activates hVps34 (Yan et al., 2009). Similarly, we found that knockdown of hVps34 had no effect on the interaction between CaM and mTORC1 in HEK293T cells, ruling out involvement of hVps34 in the regulation of mTORC1 via Ca2+/CaM, at least in this cell type. We surmise that most of the previous results implicating calcium or CaM in the regulation of mTORC1 may be explained by our current model.
In previous studies, in vitro kinase assays of mTORC1 used EDTA in the immunoprecipitation buffer (Kim et al., 2003), which precluded the detection of any regulatory effect of calcium and CaM. By performing the mTOR kinase assay in the absence or presence of calcium and CaM in vitro, we were able to observe a dramatic activation of mTORC1 by calcium and CaM, revealing the functional consequence of the binding of CaM to mTOR: activation of its intrinsic kinase activity. As such, mTOR is a new type of atypical CaM-dependent kinase. The newly uncovered roles of lysosomal calcium and CaM in the regulation of mTOR signaling not only fill a gap in our understanding of this fundamental signaling pathway, but also offer new molecular targets for discovering and developing novel mTOR inhibitors.
Materials and methods
Cell lines and tissue culture
HEK293T (RRID: CVCL_0063, purchased from ATCC; identity authenticated using STR profiling) and A549 (RRID: CVCL_0023, purchased from ATCC; identity authenticated using STR profiling) cells were cultured in low-glucose DMEM (Life Technology) supplemented with 10% FBS (Life Technology). Healthy human fibroblasts (Coriell Institute, GM03440) and mucolipidosis IV human fibroblasts (Coriell Institute, GM02048) were cultured in EMEM (ATCC) supplemented with 15% FBS. HUVEC (purchased from Lonza) were cultured in EGM media (Lonza). All cells were cultured at 37°C in the presence of 5% CO2. All cell lines were tested for mycoplasma contamination and showed negative results. HEK293T and A549 cells were authenticated using STR profiling at the Johns Hopkins Genetic Resources Core Facility, and match percentages were compared against the American Type Culture Collection (ATCC) database. HEK293T cells showed 100% matching to the ATCC HEK293T reference profile (ATCC number CRL-3216), and A549 cells showed 93% matching to the ATCC A549 reference profile (ATCC number CCL-185). Given that a matching level of ≥80% indicates that cell lines are related, we concluded that both the HEK293T and A549 cell lines are authenticated.
Leucine starvation and stimulation of the cells
Almost confluent cultures in 6-well plates were washed once with leucine-free low-glucose DMEM (US Biological), incubated in leucine-free DMEM for 3 hr, and stimulated with 52 µg/ml leucine for 10 min. For cells treated with calmidazolium (CMDZ, Cayman Chemical) or BAPTA-AM (Cayman Chemical), the compounds were added 1 hr prior to cell harvesting. Cells were processed for biochemical assays as described below.
Growth factor starvation and insulin stimulation of the cells
Almost confluent cultures in 6-well plates were washed once with FBS-free DMEM, incubated in FBS-free DMEM for 24 hr, and stimulated with 600 nM insulin (Life Technology) for 10 min.

Preparation of p70S6K1, GST-4EBP1 and FLAG-CaM for use in mTORC1 kinase assays
HA-GST-PreScission-p70 S6K1 was transfected into HEK293T cells as described above, and after 48 hr the cells were treated with 20 µM LY294002 for 1 hr prior to cell harvesting and lysis. HA-GST-PreScission-S6K1 was purified as described (Burnett et al., 1998). The purified protein was stored at −20°C in 20% glycerol.
GST-fused 4EBP1 protein was expressed in and purified from BL21 (DE3) Escherichia coli. Bacteria were grown to an OD of 0.8 and induced for 16 hr at 18°C with 0.5 mM IPTG (American Bioanalytical). Bacteria were pelleted and lysed by sonication in ice-cold PBS containing 1% Triton X-100, 1 mg/ml lysozyme (Sigma-Aldrich) and protease inhibitor cocktail. Cell debris was cleared by centrifugation. The supernatant was mixed with pre-equilibrated glutathione sepharose 4B resin for 1 hr at 4°C with rotation. After gentle centrifugation, GST-4EBP1 was eluted with 10 mM reduced glutathione, and the protein sample was desalted on PD-10 desalting columns and then eluted with elution buffer (150 mM NaCl, 40 mM HEPES [pH 7.4]). The purified protein was stored at −20°C in 20% glycerol.
GST-FLAG-CaM protein was expressed and purified as described for GST-4EBP1. The GST tag was removed with PreScission protease (GE Healthcare) according to the manufacturer's instructions. The purified protein was stored at −20°C in 20% glycerol.
Mammalian lentiviral shRNAs
TRC lentiviral shRNAs targeting hTRPML1, mTOR, hVps34 and raptor were obtained from Sigma. The TRC number for each shRNA is as follows:
"year": 2016,
"sha1": "889331cdffc72f4742faee36a1a9fd14044604d2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.19360",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "889331cdffc72f4742faee36a1a9fd14044604d2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Molecular Marker Development for the Rapid Differentiation of Black Rot Causing Xanthomonas campestris pv. campestris Race 7
Xanthomonas campestris pv. campestris (Xcc) is a plant pathogen of Brassica crops that causes black rot disease throughout the world. At present, 11 physiological races of Xcc (races 1–11) have been reported. The conventional method of using differential cultivars for Xcc race detection is not accurate, and it is laborious and time-consuming. Therefore, the development of specific molecular markers has been used as a substitute tool because it offers accurate and reliable results, particularly for the quick diagnosis of Xcc races. Previously, our laboratory successfully developed race-specific molecular markers for Xcc races 1–6. In this study, specific molecular markers to identify Xcc race 7 have been developed. In the course of the study, whole genome sequences of several Xcc races, X. campestris pv. incanae, X. campestris pv. raphani, and X. campestris pv. vesicatoria were aligned to identify variable regions, such as sequence-characterized amplified regions and insertions and deletions, specific to race 7. Primer pairs were designed targeting these regions and validated against 22 samples. The polymerase chain reaction analysis revealed that three primer pairs specifically amplified DNA fragments corresponding to race 7, demonstrating that the newly developed markers can accurately and rapidly distinguish Xcc race 7 from the other races. This study represents the first report of the successful development of specific molecular markers for Xcc race 7.
Keywords: Brassicaceae, black rot, marker, PCR, Xanthomonas campestris pv. campestris

Brassicas, part of the Brassicaceae family, are globally important crop species that generate high economic and export value, such as edible oils, protein meals, vegetables, and condiments (Friedt et al., 2018). Cabbage production worldwide reached a reported 10.6 million metric tons (Food and Agriculture Organization of the United Nations, 2021). Black rot, which is primarily induced by the bacterium Xanthomonas campestris pv. campestris (Xcc), is the most significant and destructive disease of cabbage (Vicente and Holub, 2013; Williams, 1980). The most significant host for Xcc is Brassica oleracea, which includes the economically important crops cabbage, cauliflower, broccoli, brussels sprouts, and kale (Vicente and Holub, 2013). Black rot disease has emerged as a major constraint on cabbage production, impacting cabbage-growing areas in Korea since the 1970s (Kim, 1986; Park, 2006). The disease affects production, notably of vegetable Brassicas, by inducing lesions on the leaves that diminish market value (Lema et al., 2012). Necrotic, darkening leaf veins and a chlorotic, V-shaped lesion that begins at the leaf margin are typical signs of black rot (Cook et al., 1952; Lee et al., 2015; Vicente and Holub, 2013). The vascular pathogen Xcc is seed-borne and infects plants through wounds, seeds, and insect transmission (Cook et al., 1952; Williams, 1980).
Based on the gene-for-gene model, interactions between Xcc strains and various Brassica cultivars have led to the identification of 11 races of the Xcc pathogen (Fargier and Manceau, 2007; Kamoun et al., 1992; Vicente et al., 2001). So far, these 11 Xcc races have been identified around the world using a set of differential cultivars. For example, in the United Kingdom, six races (races 1–6) were reported by Vicente et al. (2001). In northwestern Spain, seven races (races 1, 4, 5, 6, 7, 8, and 9) were identified by Lema et al. (2012). Jensen et al. (2010) detected five races (races 1, 4, 5, 6, and 7) in Nepal. In South Africa, four races of Xcc (races 1, 3, 4, and 6) were isolated by Chidamba and Bezuidenhout (2012). The newly identified races 10 and 11 have been reported in Portugal (Cruz et al., 2017). Among the Xcc races, races 1 and 4 are predominant in Brassica oleracea crops throughout the world, whereas race 6 is common in Brassica rapa (Lema et al., 2012; Vicente and Holub, 2013). To date, the occurrence of Xcc race 7 has been reported in Nepal, Spain, and Portugal along with other Xcc races (Cruz et al., 2017; Jensen et al., 2010; Lema et al., 2012). Despite its constrained distribution, it is important to discriminate Xcc race 7 from the remaining races, since the frequent long-distance trade of contaminated seeds around the world can contribute to the successful dissemination of the pathogen (Lema et al., 2012). Xcc exhibits diversity across nations and across regions of the same nation (Popović et al., 2013). Management of the disease includes the cultivation of disease-free seeds, and the phytopathogen most often spreads through the import and export of seed materials (Lema et al., 2012). Therefore, many quarantine policies exist to prevent the movement of pests and pathogens during import and export (Martin et al., 2000).
The cultivation of resistant cultivars remains the most effective, economical, and ecologically sustainable approach for reducing the damage caused by biotic stresses (Yerasu et al., 2019). Accordingly, the development and application of resistant cultivars has long been accepted as an alternative strategy, but in practice it has had only modest success (Taylor et al., 2002). Various Brassica cultivars have shown resistance to black rot disease, but none has proven resistant to all races of Xcc (Afrin et al., 2019; Vicente et al., 2001). It is therefore essential to consider the presence of different races and the genetic diversity of the pathogen (Jensen et al., 2010). Differentiating races of a pathogen is important for disease management because it allows targeted and effective control strategies. By identifying and understanding the specific races of a pathogen, such as Xcc in the case of black rot in Brassica oleracea, researchers can develop and implement control measures tailored to the races present (Ignatov et al., 1998). It also allows monitoring of the prevalence and distribution of specific races and tracking of their spread, so that appropriate management strategies can be implemented. Therefore, accurate and immediate identification of Xcc is necessary for effective control of black rot. To facilitate disease control and breeding programs, it is essential to establish a quick and accurate method for detecting Xcc in Brassica seeds and plants (Eichmeier et al., 2019). A real-time polymerase chain reaction (PCR)-based method using the hrpF gene has been developed to distinguish Xcc-infected Brassica seeds from other bacteria (Berg et al., 2006). Additionally, the Xcc genomic structure has been classified and distinguished using repetitive DNA PCR (rep-PCR) based on repetitive DNA sequence elements, as in Xcc, X. campestris pv. vesicatoria (Xcv), X. oryzae pv. oryzae (Xoo), and X. campestris pv. musacearum (Aritua et al., 2007; Louws et al., 1995; Vera Cruz et al., 1996). However, these approaches are time-consuming and demanding (Singh et al., 2014). PCR is highly effective for diagnosing plant diseases (Zaccardelli et al., 2007), and PCR-based techniques have been described as a powerful alternative tool for the rapid detection and identification of bacterial strains causing plant disease (Song et al., 2014).
Our research team has made significant advances in the development of molecular marker-based PCR for race-specific identification of Xcc races 1–6. These markers offer rapid detection within a few hours, require less labor, and provide higher reliability and sensitivity than the traditional method of determining Xcc races with differential cultivars of the Brassicaceae family (Afrin et al., 2018, 2019, 2020; Rubel et al., 2017, 2019b). In continuation of this work, our objective was to develop a robust and specific molecular marker, designed through the alignment and re-sequencing of whole genome sequences of different Xcc races, capable of accurately detecting Xcc race 7 using a PCR-based approach, thereby distinguishing it from other bacterial strains.
Materials and Methods
Bacterial strains and media. A group of 22 bacterial strains was employed for the analysis. This included nine standard reference races of Xcc (races 1–9), three X. campestris (Xc) pathovars, two Xc subspecies, two plant-infecting bacteria, and another six Xcc strains from Korea (race undetermined) (Table 1). The bacterial strains were cultured on King's Medium B for 48 h at 30°C (King et al., 1954).
Extraction of genomic DNA. Genomic DNA of all bacterial isolates was extracted using the QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. The concentration and purity of the extracted DNA were measured using a NanoDrop ND-1000 spectrophotometer (NanoDrop, Wilmington, DE, USA), and the DNA was stored at -20°C until analysis.
PCR amplification and primer development.
A set of seven Xcc race-specific molecular markers previously reported from our laboratory was used to amplify DNA fragments (Table 2). Race 7-specific primers were designed using the Primer3 online program (https://primer3.ut.ee/), and the BLAST tool was used to check the specificity of the primers. In this study, eight primer pairs were designed for the detection of race 7 (Table 3). Thereafter, all of these primer pairs were validated against all the bacteria listed in Table 1. Using a TaKaRa TP600 thermal cycler (TaKaRa, Tokyo, Japan), amplification with the race 7 primers was performed under the following conditions: denaturation at 95°C for 2 min, followed by 30 cycles (95°C for 15 s; 70°C for 15 s or 30 s; 72°C for 15 s), and a final elongation at 72°C for 2 min (Table 3). Each 10 µl PCR reaction contained 1 µl (50 ng) of DNA, 5 µl of 2× Prime Taq Premix (GenetBio, Daejeon, Korea), 0.5 µl of each forward and reverse primer (0.2 mM), and 3 µl of sterile distilled water. The PCR products were resolved on a 2.5% agarose gel for 30 min on an Agaro-Power electrophoresis unit (Bioneer, Daejeon, Korea) and visualized with a GD-1000 gel documentation system (Axygen, Union City, CA, USA) under UV light at 320 nm. All PCRs were conducted with two biological replicates and repeated three times independently.
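As a convenience for readers who want to reproduce the amplification, the sketch below encodes the thermal profile and the 10 µl reaction recipe given above in plain Python, with a small helper for scaling a master mix. The dictionary keys and the 10% pipetting overage are our own illustrative choices, not part of the published protocol.

```python
# Thermal profile and 10-ul reaction recipe, as listed in the text.
CYCLING = {
    "initial_denaturation": ("95 C", "2 min"),
    "cycles": 30,
    "per_cycle": [("95 C", "15 s"), ("70 C", "15-30 s"), ("72 C", "15 s")],
    "final_elongation": ("72 C", "2 min"),
}

RECIPE_UL = {  # per-reaction volumes in microliters
    "template DNA (50 ng)": 1.0,
    "2x Prime Taq Premix": 5.0,
    "forward primer (0.2 mM)": 0.5,
    "reverse primer (0.2 mM)": 0.5,
    "sterile distilled water": 3.0,
}

def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
    """Scale the per-reaction recipe to n reactions plus pipetting overage."""
    factor = n_reactions * (1.0 + overage)
    return {reagent: round(vol * factor, 2) for reagent, vol in RECIPE_UL.items()}

print(master_mix(22))  # e.g., one reaction per strain in Table 1
```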
Cloning and sequencing. The PCR amplicons of race 1 (801 bp) and race 7 (469 bp) from the 'Race 7-1F-1R' primer were excised from the agarose gel and purified using the Wizard SV Gel and PCR Cleanup System (Promega, Madison, WI, USA). The purified fragments were cloned using the TOPcloner Blunt Kit (Enzynomics, Daejeon, Korea) according to the protocol provided by the manufacturer. The positive clones were selected and further cultured in liquid LB medium containing ampicillin. Plasmid DNA was purified using the QIAprep Spin Miniprep Kit (Qiagen), and the cloned sequences were aligned with the ClustalW multiple sequence alignment program (https://www.genome.jp/tools-bin/clustalw).

Sensitivity test. The sensitivity and specificity of the three race 7 primer pairs were tested to determine the detection limit of the PCR amplification described above. Purified race 7 DNA at 20 ng/μl was serially diluted up to a 10^-4 dilution, and 1 μl of DNA from each dilution (20 ng/μl, 2 ng/μl, 0.2 ng/μl, 0.02 ng/μl, and 0.002 ng/μl) was used per PCR reaction. Sterile distilled water was used as a negative control.
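The dilution arithmetic behind the sensitivity test is simply a tenfold series; a minimal sketch, assuming the starting concentration quoted above:

```python
# Tenfold serial dilution of race 7 DNA, from 20 ng/ul down to the 10^-4 dilution.
start_ng_per_ul = 20.0
series = [start_ng_per_ul * 10 ** -i for i in range(5)]
print(series)  # [20.0, 2.0, 0.2, 0.02, 0.002] ng/ul; 1 ul of each per PCR
```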
Results
Xcc race 7-specific marker development and specificity. To establish a PCR-based marker system that specifically detects Xcc race 7, we compared the complete genome sequences of Xcc races 1, 3, 4, and 9 and the Xci, Xcr, and Xcv sequences obtained from the NCBI database. Additionally, we sequenced DNA fragments from races 2, 5, 6, 7, and 8 and aligned them with the aforementioned genomes (Fig. 1), facilitating the design of primers specific to race 7.
By realigning and comparing the whole-genome sequences, we identified variable regions present across the strains. In particular, we identified specific InDel regions that served as targets for developing unique molecular markers specific to race 7. Ultimately, we designed eight primer pairs to differentiate race 7, of which three primer sets, namely one sequence-characterized amplified region (SCAR) marker (30-7-F-R2) and two InDel markers (Race 7-1F-1R and Race 7-5F-5R) (Table 3), were able to specifically detect race 7. To validate the efficacy of the newly developed primers, we performed PCR reactions using all the bacterial strains listed in Table 1.
The PCR products showed that the SCAR marker '30-7-F-R2' amplified a fragment of approximately 320 bp specific to Xcc race 7 only, while the other samples were not amplified (Fig. 2A). The InDel marker 'Race 7-1F-1R' showed polymorphic amplification, with amplicon sizes of 600 bp for race 7 and 850 bp for the other Xcc races and two Xc pathovars (Xci and Xcr) (Fig. 2B). The other InDel primer pair, 'Race 7-5F-5R', also gave polymorphic amplification, with an amplicon of 700 bp for race 7 only and of 800 bp for the other Xcc races and Xci, while the remaining samples were not detected (Fig. 2C). Therefore, these three primer pairs can successfully and rapidly distinguish Xcc race 7 from other Xcc strains.
Race determination for race-unknown Xcc KACC strains.
In this study, the three primer pairs were also effective in determining the race of unknown Xcc strains obtained from the Korean Agricultural Culture Collection (KACC), namely KACC19132, KACC19133, KACC19134, KACC19135, and KACC19136. These strains exhibited the same specific amplicons as Xcc race 7, confirming their classification as race 7, with one exception, strain KACC10377 (Table 1, Fig. 3A-J). Notably, previous research by Rubel et al. (2019b) identified KACC10377 as race 1. Thus, our findings demonstrate that the race-unknown strains obtained from KACC are predominantly race 7, except for KACC10377, which corresponds to race 1 based on the aforementioned study.
Sensitivity of the developed markers.
To assess the detection sensitivity of the developed markers, PCR amplifications were conducted using five concentrations of race 7 genomic DNA: 20 ng/μl, 2 ng/μl, 0.2 ng/μl, 0.02 ng/μl, and 0.002 ng/μl. Sterile distilled water was used as a negative control. All three markers successfully amplified DNA at a minimum concentration of 2 ng/μl (Fig. 4A-C).
In contrast, the 'Race 7-5F-5R' marker was able to amplify DNA at even lower concentrations, down to 0.02 ng/μl (Fig. 4B).

Cloning and sequencing analysis. Additionally, the PCR amplicons from race 1 and race 7 obtained with the 'Race 7-1F-1R' primer were verified by cloning and sequencing. The size of the race 1 amplicon was 801 bp (Supplementary Fig. 1A), whereas the sequencing result showed an amplicon size of 753 bp (Supplementary Fig. 1B); this discrepancy may be due to partial sequencing. The size of the race 7 amplicon was 469 bp (Supplementary Fig. 1C). These sequences were then aligned with the race 7 amplicon sequence using ClustalW. The alignment revealed a deletion of 284 bp in race 7 compared to race 1 (Supplementary Fig. 2).
Discussion
The pathogen Xcc causes devastating black rot disease in the Brassicaceae family, particularly affecting cabbage crops worldwide. Accurate diagnosis is crucial for effectively managing this disease. Traditionally, race identification has relied on the use of differential cultivars of the Brassicaceae family, a time-consuming process requiring fieldwork over at least one cropping season. It is therefore essential to employ proper and reliable techniques for the detection of infected crops. PCR-based markers have been successfully utilized for identifying leaf and seed infections caused by bacterial and fungal pathogens (Thangavelu et al., 2022; Wang et al., 2010). Molecular markers have also been developed for the detection of Xcc races 1 to 6 using whole genome realignment (Afrin et al., 2018, 2019, 2020; Rubel et al., 2017, 2019b). These markers have proven effective in rapidly and accurately distinguishing Xcc races within a few hours.
Similar PCR-based marker approaches have been applied in the diagnosis of other plant diseases. For instance, Thangavelu et al. (2022) developed race-specific markers for the differentiation of Indian Fusarium oxysporum f. sp. cubense, the causal agent of Fusarium wilt in bananas. Wang et al. (2010) utilized SCAR markers to reliably detect two races (CYR32 and CYR33) of wheat stripe rust caused by Puccinia striiformis f. sp. tritici. Pasquali et al. (2007) successfully employed an inter-retrotransposon sequence-characterized amplified region marker to identify race 1 of F. oxysporum f. sp. lactucae on lettuce. Cho et al. (2011) developed sensitive and specific primers for the detection of bacterial leaf blight caused by Xoo in rice, utilizing real-time and conventional PCR approaches based on an rhs family gene.
In this study, we developed novel PCR primers for the specific detection of Xcc race 7. The availability of genome sequences of Xcc races (1, 3, 4, and 9) and of the other pathovars Xci, Xcr, and Xcv, together with their alignment, facilitated the discovery of highly variable genomic regions and the development of unique markers specific to Xcc race 7. The SCAR marker (30-7-F-R2) and two InDel markers (Race 7-1F-1R and Race 7-5F-5R) developed in this study reliably and effectively detect Xcc race 7, differentiating it from other Xcc races, Xc subspecies, and other plant-infecting bacteria. Additionally, five race-unknown Xcc strains were amplified with the newly developed race 7-specific markers, indicating that these KACC strains may belong to race 7 (Fig. 3H-J). This study therefore provides a reliable diagnostic tool for the accurate and rapid detection of Xcc race 7, offering an alternative to the time-consuming and laborious method of using differential cultivars for race determination.
Fig. 1. Alignment of whole genome sequences and identification of variable regions. (A) Line diagram representation of the whole genome length of Xanthomonas campestris pv. campestris (Xcc) race 7. (B) Whole genome alignment of Xcc races 1 to 9 and subspecies to identify the variant regions using Integrative Genomics Viewer (IGV) software, with two parallel lines indicating the region where variants were identified. (C) Line diagram representation of the variable region identified; red and blue arrows represent forward and reverse primers, respectively. Xci, X. campestris pv. incanae; Xcr, X. campestris pv. raphani; Xcv, X. campestris pv. vesicatoria.
Table 1. List of bacterial strains of Xcc races, Xc pathovars and other bacteria used in this study. Xcc, Xanthomonas campestris pv. campestris; Xc, X. campestris; NCPPB, The National Collection of Plant Pathogenic Bacteria; KACC, Korean Agricultural Culture Collection, Jeollabuk-do, Korea; ICMP, International Collection of Microorganisms from Plants, Auckland, New Zealand; HRI-W, Horticulture Research International, Wellesbourne, UK.
Table 2. List of race-specific primers used in this study.
Table 3. List of primer pairs designed and validated for the specific detection of race 7.
"year": 2023,
"sha1": "f48c72ca79458b0db0d1b0b9fcdb2ef991ac8a1d",
"oa_license": null,
"oa_url": "http://www.ppjonline.org/upload/pdf/PPJ-OA-07-2023-0102.pdf",
"oa_status": "BRONZE",
"pdf_src": "Anansi",
"pdf_hash": "bd53c1670b9be99a853187de73a91e82d4b12bcf",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Both symbionts and environmental factors contribute to shape the microbiota in a pest insect, Sogatella furcifera
Introduction
Bacterial symbionts are prevalent in arthropods globally and play a vital role in the fitness and resistance of hosts. While several symbiont infections have been identified in the white-backed planthopper Sogatella furcifera, the impact of environmental factors on the microbiota within S. furcifera remains elusive.

Methods
In this study, a total of 142 S. furcifera individuals from 18 populations were collected from 14 locations across six countries (China, Thailand, Myanmar, Cambodia, Vietnam, and Laos) and analyzed with 2bRAD-M sequencing, to examine the effects of symbionts on the microbiota of the S. furcifera populations, as well as the effects of environmental factors on the bacterial communities.

Results and discussion
Based on the results, in S. furcifera, the presence of the symbionts Wolbachia and Cardinium negatively influenced the abundance of other bacteria, including Enterobacter, Acinetobacter, and Lysinibacillus, while Wolbachia infection significantly decreased the diversity of the microbial community. Moreover, several environmental factors, including longitude, latitude, temperature, and precipitation, affected the abundance of symbionts and the microbiota diversity in S. furcifera. These results collectively highlight the vital role of Wolbachia in shaping the S. furcifera microbiota, as well as the intricate effects of environmental factors on the bacterial communities of S. furcifera.
Introduction
As a significant pest in China and South Asia, the white-backed planthopper (WBPH), Sogatella furcifera (Hemiptera: Delphacidae), causes substantial losses in agricultural fields, particularly rice paddies (Savary et al., 2012; Huang and Qin, 2018). S. furcifera causes damage by sucking phloem sap from rice plants and by transmitting viruses, including the southern rice black-streaked dwarf virus, to rice hosts (Zhou et al., 2008, 2018). Over the past decade, the frequency of S. furcifera outbreaks has dramatically increased, making it a destructive pest in rice production (Hu et al., 2015; Wang et al., 2018). With its high adaptability, long-distance migratory abilities, and the development of resistance to existing pesticides, managing S. furcifera outbreaks has become increasingly challenging (Wang et al., 2019). Coupled with the growing problem of pesticide residues, there is an urgent need to develop novel and efficient methods to mitigate the damage elicited by pest insects, including S. furcifera. Microbial communities, especially bacteria, including endosymbionts, play a vital role in the development, fitness, fecundity, and resistance of host arthropods (Douglas, 2009; McFall-Ngai et al., 2013). These symbiotic bacteria fall into two categories, primary (obligate) symbionts and secondary (facultative) symbionts, residing predominantly in bacteriocytes within hosts and exerting complex influences on host arthropods (Werren et al., 2008; Zeng et al., 2018; Yang et al., 2022). The former are essential for the development and survival of host invertebrates. For instance, the symbiont Portiera provides essential amino acids (EAA) and B vitamins to whitefly hosts (Wang et al., 2022), whilst the symbiont Buchnera supplies aphid hosts with various nutritional elements and protects them from heat stress (Dunbar et al., 2007; Douglas, 2009).
Wolbachia, the most renowned symbiont in invertebrates, was first discovered in mosquitoes in 1924 (Hertig and Wolbach, 1924). Remarkably, it can infect at least 66% of all arthropod species (Werren, 1997; Miao et al., 2020). In filarial nematodes, Wolbachia is regarded as an obligate mutualist, essential for the development and fertility of the host (Taylor et al., 2005; Wasala et al., 2019). Although Wolbachia was previously considered a reproductive parasite of invertebrates, capable of manipulating host reproduction through phenomena such as cytoplasmic incompatibility, feminization, parthenogenesis, and male killing (Werren et al., 2008), recent studies have established the benefits it can confer (Zug and Hammerstein, 2015). Another study corroborated the influence of Wolbachia on microbial communities in small planthoppers (Duan et al., 2020). However, the intricate interaction between Wolbachia and the bacterial community in arthropods warrants further investigation.
Microbial communities, including symbionts, in arthropods are influenced by various biotic and abiotic factors. For example, high temperatures significantly influence symbiont titer and the diversity of the bacterial community in whiteflies (Yang et al., 2022). Similarly, the diet of invertebrate hosts significantly impacts the structure of microbial communities (Santo Domingo et al., 1998; Colman et al., 2012), and both temperature and humidity affect bacterial communities (Behar et al., 2008). Furthermore, earlier research has documented the crucial impact of host genetic background on the microbiota (Suzuki et al., 2019; Gupta and Nair, 2020). Interestingly, the presence of symbionts exerts a significant influence on microbial communities: symbionts occupy a relatively large space and compete with other bacteria for nutrients. According to a prior study, Spiroplasma infection significantly decreases the titer of Wolbachia in the same host (Goto et al., 2006). Additionally, endosymbionts such as Wolbachia and Cardinium can lower the microbiome diversity of the host Sogatella furcifera (Li et al., 2022). The complex effects of symbionts on the microbiota of S. furcifera require further investigation.
2bRAD-M sequencing is a novel method that efficiently detects low-biomass microbiomes at the species level with high fidelity (Sun et al., 2022). In recent years, the damage caused by S. furcifera has increased dramatically, particularly in Asia and in China. However, the complex interactions between the bacterial community of S. furcifera, including bacterial symbionts, and environmental factors remain poorly understood. Moreover, the potential application of symbionts in the biocontrol of S. furcifera is an area that warrants exploration. In this study, 18 S. furcifera populations sourced from six Asian countries were subjected to 2bRAD-M sequencing in order to unravel the vital factors contributing to shaping microbial communities in S. furcifera. Moreover, this comprehensive study explored the effects of symbionts and environmental factors on the bacterial communities.
Collection of Sogatella furcifera populations in China and South Asia
To determine the relationships among symbionts, bacterial communities in S. furcifera, and environmental factors, a total of 142 S. furcifera individuals from 18 populations were collected from 14 locations across six countries (China, Thailand, Myanmar, Cambodia, Vietnam, and Laos) and analyzed in this study. Each S. furcifera population comprised at least six independent samples. All S. furcifera samples were preserved in 100% alcohol and subsequently dispatched for 2bRAD-M sequencing. Information about the samples is detailed in Table 1 and Supplementary Table S1, wherein each replicate represents an independent S. furcifera adult.
DNA extraction and 2bRAD-M sequencing
Bacterial communities in S. furcifera were analyzed using 2bRAD-M sequencing. 2bRAD-M sequencing and library construction were performed by Qingdao OE Biotech Co., Ltd. (Qingdao, China), following previously established protocols (Wang et al., 2012) with minor modifications. Briefly, the DNA (100 ng) from each sample was digested using 4 U of the BcgI enzyme (NEB) at 37°C for 3 h. Next, the resulting DNA fragments were ligated with specific adaptors: a mix of 5 μl of digested DNA and 10 μl of ligation master mix, containing 0.2 μM of each adaptor and 800 U of T4 DNA ligase (NEB), underwent a ligation reaction at 4°C for 12 h.
Following ligation, the products were PCR amplified, and the resulting DNA was subjected to electrophoresis on an 8% polyacrylamide gel. An approximately 100-bp band was excised, and the DNA fragments were diffused in DEPC water at 4°C for 12 h. A further PCR step was conducted using primers bearing platform-specific barcodes to introduce sample-specific barcodes. Each 20 μl PCR reaction contained 25 ng of gel-extracted PCR product, 0.2 μM each of forward and reverse primers, 0.3 mM dNTP mix, 0.4 U Phusion high-fidelity DNA polymerase (NEB), and 1× Phusion HF buffer. After PCR amplification and electrophoresis, the PCR products were purified using the QIAquick PCR purification kit (Qiagen) and sequenced on the Illumina Nova PE150 platform.
Data analysis of 2bRAD-M sequencing results
To conduct the 2bRAD-M analysis, microbial genome data from the NCBI database, comprising 173,165 species of fungi, bacteria, and archaea, were employed. Sixteen type 2B restriction enzymes were used to fragment these genomes in silico, using Perl scripts. Thereafter, the RefSeq (GCF) number was used to link 2bRAD-M tags with microbial genome information, including taxonomic data. Unique 2bRAD tags occurring only once in every GCF were selected as species-specific 2bRAD-M markers, forming a reference database. A detection threshold of 0.0001 (0.01%) relative abundance was set as the default (Franzosa et al., 2018).
To calculate the relative abundance of each bacterium, 2bRAD tags from all samples, after quality control, were mapped against the 2bRAD marker database containing tags unique to the 26,163 microbial species using a built-in Perl script. To mitigate false positives in species identification, a G score was calculated for each identified species within a given sample. This score, derived from a harmonic mean of the read coverage of the 2bRAD markers belonging to a species and the total number of possible 2bRAD markers for that species, was employed to identify species while minimizing errors. A threshold G score of 5 was set to prevent false-positive microbial species discovery (Sun et al., 2022). Here, S denotes the number of reads assigned to all 2bRAD markers belonging to species i within a sample, and T denotes the number of all 2bRAD markers of species i that have been sequenced within that sample.
The average read coverage of all 2bRAD markers for each species was computed to represent the number of individuals of a species present within a sample at a given sequencing depth. The relative abundance of a species was then calculated as the ratio of the number of microbial individuals belonging to that species to the total number of individuals from all known species detected within the sample, i.e., relative abundance of species i = (S_i/T_i) / Σ_j (S_j/T_j), where S is the number of reads assigned to all 2bRAD markers of species i within a sample and T is the number of all theoretical 2bRAD markers of species i.
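The quantification just described reduces to two small functions. The sketch below is our reading of the verbal definitions above (a harmonic mean for the G score and coverage normalization for the relative abundance), not code released by the authors; the function names and data layout are our own.

```python
def g_score(reads_assigned: int, markers_sequenced: int) -> float:
    """Harmonic mean of S (reads assigned to a species' 2bRAD markers) and
    T (that species' sequenced markers); species with G < 5 are discarded
    as likely false positives."""
    s, t = reads_assigned, markers_sequenced
    return 0.0 if s + t == 0 else 2.0 * s * t / (s + t)

def relative_abundance(reads: dict, theoretical_markers: dict) -> dict:
    """Per-species mean marker coverage S/T, normalized over all species
    detected in the sample; abundances below the 0.0001 default detection
    threshold are treated as absent."""
    coverage = {sp: reads[sp] / theoretical_markers[sp] for sp in reads}
    total = sum(coverage.values())
    return {sp: cov / total for sp, cov in coverage.items()
            if cov / total >= 1e-4}
```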
Additionally, five environmental factors, namely annual mean temperature, annual precipitation, longitude, latitude, and altitude, were analyzed in the study. Data for these factors were downloaded from the WorldClim website. The impacts of climate and geographical factors on the diversity and abundance indexes and on symbiont infections were described using two structural equation models (SEMs) with Satorra-Bentler correction. To limit heteroscedasticity, log-transformed precipitation values were used in the SEMs. SEM models were deemed acceptable when the p value was >0.05 and CFI > 0.95, after systematically excluding redundant pathways on the basis of a lower AIC value. To standardize each parameter and eliminate variance differences, SEM coefficients were estimated after standardized transformation.
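The paper does not name its SEM software, so no fitting call is shown here; the sketch below only encodes the bookkeeping described above (log-transforming precipitation, standardizing predictors, and the acceptance and AIC rules), with hypothetical field names.

```python
import numpy as np
import pandas as pd

def preprocess(env: pd.DataFrame) -> pd.DataFrame:
    """Log-transform precipitation to limit heteroscedasticity, then
    z-standardize every column so SEM coefficients are comparable."""
    out = env.copy()
    out["precipitation"] = np.log(out["precipitation"])
    return (out - out.mean()) / out.std(ddof=0)

def acceptable(p_value: float, cfi: float) -> bool:
    """Acceptance rule for a fitted SEM, as stated in the text."""
    return p_value > 0.05 and cfi > 0.95

def best_model(fits: list) -> dict:
    """Among acceptable candidate models (each a dict holding the
    Satorra-Bentler-corrected p value, CFI, and AIC), keep the one
    with the lowest AIC."""
    ok = [f for f in fits if acceptable(f["p"], f["cfi"])]
    return min(ok, key=lambda f: f["aic"])
```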
Prior to analysis, data on Cardinium abundance (shown in Figure 1, left panel), Wolbachia abundance (shown in Figure 1, right panel), and the microbial diversities of the samples (shown in Figure 2) underwent normality testing using the Kolmogorov-Smirnov test, while Levene's test was used to assess the homogeneity of group variances. Data exhibiting a normal distribution were analyzed using one-way ANOVA with post-hoc Tukey HSD. In cases where the Cardinium abundance, Wolbachia abundance, or diversity data did not conform to a normal distribution, they were analyzed using the Kruskal-Wallis test and Dunn's test with Bonferroni correction for multiple comparisons. All statistical analyses were conducted using SPSS 21.0. PCoA plots illustrating Bray-Curtis intersample distances and classification probabilities were generated using QIIME software (Caporaso et al., 2010). Pearson correlation analysis, performed using SPSS 21.0, was used to explore the relationships between different symbionts and the diversity indexes. Furthermore, all graphical representations were generated using GraphPad Prism 9.0.0.
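The paper ran this decision tree in SPSS 21.0; as an open-source illustration, here is a minimal sketch assuming SciPy and the scikit-posthocs package instead (the function name and data layout are our own, and the Kolmogorov-Smirnov test is run against a normal distribution fitted to each group).

```python
import pandas as pd
import scikit_posthocs as sp
from scipy import stats

def compare_groups(df: pd.DataFrame, value: str, group: str):
    """Normality (K-S against a fitted normal) and Levene's test decide
    between one-way ANOVA + Tukey HSD and Kruskal-Wallis + Dunn's test
    with Bonferroni correction."""
    groups = [g[value].to_numpy() for _, g in df.groupby(group)]
    normal = all(
        stats.kstest(g, "norm", args=(g.mean(), g.std(ddof=1))).pvalue > 0.05
        for g in groups
    )
    if normal and stats.levene(*groups).pvalue > 0.05:
        return stats.f_oneway(*groups), stats.tukey_hsd(*groups)
    return stats.kruskal(*groups), sp.posthoc_dunn(
        df, val_col=value, group_col=group, p_adjust="bonferroni"
    )
```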
Composition of bacterial communities in different Sogatella furcifera populations
In the current study, a total of 18 S. furcifera populations were analyzed, comprising nine populations from China and nine populations from the South Asian region. The predominant bacterial symbionts identified in all S. furcifera populations were Wolbachia and Cardinium. Interestingly, the former was the most abundant bacterium in nearly all S. furcifera populations, being present in every S. furcifera sample. Notably, the relative abundance of Wolbachia exceeded 80% in two Chinese populations, specifically CS (85.95 ± 2.92%, mean ± SEM) and FN (89.26 ± 3.01%, mean ± SEM), as well as in the Laos populations, including VE (86.02 ± 1.57%, mean ± SEM) and SA (86.02 ± 3.58%, mean ± SEM). Nevertheless, significant variations in Wolbachia abundance were observed among populations: three S. furcifera populations from China (JY, YT, and TC) exhibited the lowest Wolbachia abundance, significantly lower than Wolbachia-abundant populations such as FN, VE, and SA (Figure 1, right panel). Cardinium emerged as the second most abundant bacterium in S. furcifera (Figure 1, left panel). In several South Asian populations, including KO (63.88 ± 14.38%, mean ± SEM), RO (59.96 ± 14.04%, mean ± SEM), and LU (52.50 ± 15.74%, mean ± SEM), as well as one Chinese population, YT (27.18 ± 10.38%, mean ± SEM), Cardinium was the dominant symbiont, with its abundance in the KO population being significantly higher than that in the JY population (11.72 ± 7.78%, mean ± SEM) (p < 0.05, Kruskal-Wallis test). In addition, Wolbachia abundance was less than 1% in the JY population, while Cardinium abundance there was the highest (56.59 ± 11.90%, mean ± SEM). Additionally, the primary symbiont Portiera, typically associated with whiteflies, was also detected in several S. furcifera populations, including TC, KO, and VE (Figure 3). Principal component analysis was conducted to explore differences among the bacterial communities of all S. furcifera populations, revealing distinct microbial communities in S. furcifera individuals from the FN and JY populations compared to the other samples (Supplementary Figure S1).
Microbial diversities of different geographical Sogatella furcifera populations
To elucidate variations in microbial communities among the S. furcifera populations, alpha diversity indices, including Shannon, Simpson, and Chao1, were calculated for each population. Because the three alpha indices did not follow a normal distribution across all populations, the Kruskal-Wallis test and Dunn's test with Bonferroni correction for multiple comparisons were adopted (Figure 2). Intriguingly, populations with low Wolbachia abundance exhibited higher alpha diversities of their bacterial communities than those with high Wolbachia abundance. Specifically, populations with low Wolbachia abundance, such as JY, TC, and YT, displayed significantly higher Shannon and Simpson indices than populations with high Wolbachia abundance, including VE, SA, CX, and FN (Figure 2). A UPGMA hierarchical cluster diagram of the different S. furcifera populations based on the 2bRAD-M sequencing data is shown in Supplementary Figure S2, and a phylogenetic tree of the populations constructed from the 2bRAD sequencing results is shown in Supplementary Figure S3.
Influence of Wolbachia and Cardinium on other bacteria and microbial diversities of Sogatella furcifera
Pearson analysis was used to assess the correlations between symbionts and other bacteria in S. furcifera. As illustrated in Figure 4A, the presence of Wolbachia negatively impacted the abundance of various bacteria, encompassing Enterobacter, Acinetobacter, and Lysinibacillus. Notably, Wolbachia significantly and negatively influenced the abundance of Portiera in S. furcifera (r = −0.518, p < 0.05, Pearson analysis). While the abundance of Wolbachia was negatively correlated with that of Cardinium, the correlation was not statistically significant (r = −0.451, p = 0.06, Pearson analysis). Meanwhile, Cardinium significantly influenced the abundance of Pseudomonas (r = −0.804, p < 0.001, Pearson analysis) and was correlated with the abundance of various other bacteria, albeit not significantly (Figure 4B).
Noteworthily, Pearson analysis revealed that the presence of Wolbachia significantly and negatively affected all three diversity indexes.

The abundance of microbial communities of Sogatella furcifera across the 18 populations collected in Asia, based on the 2bRAD-M sequencing results. The relative abundance of the top 15 most abundant bacteria at the genus level is shown for the different populations.
Effects of environmental factors on bacterial abundance, including symbionts in Sogatella furcifera
The effects of five environmental factors (annual mean temperature, annual precipitation, latitude, longitude, and altitude) on the S. furcifera microbiota were determined through Pearson analysis. While the abundances of both Wolbachia and Cardinium in S. furcifera increased with annual mean temperature, there was no significant correlation between temperature and the abundance of these two symbionts. Nonetheless, temperature had a significantly negative impact on the abundance of numerous bacteria, such as Enterobacter, Lysinibacillus, and Acinetobacter (Figure 6A). Precipitation had a significantly positive influence on Wolbachia abundance (r = 0.489, p < 0.05, Pearson analysis) (Figure 6B). Latitude was significantly and positively correlated with the abundance of many bacteria and was negatively correlated with that of Cardinium and Wolbachia, although the correlation was not significant for the latter (Figure 6C). On the other hand, longitude was significantly correlated with the abundance of Wolbachia (r = 0.750, p < 0.001, Pearson analysis) and positively correlated with the abundance of other bacteria in S. furcifera (Figure 6D). Finally, altitude did not significantly affect bacterial abundance in S. furcifera (Supplementary Figure S5).
Influence of environmental factors on microbial diversities in S. furcifera
Furthermore, the effects of environmental factors on microbiota diversity in S. furcifera were explored. According to the Pearson analysis, both latitude and longitude exerted a positive influence on all three diversity indexes (Shannon, Simpson, and Chao1) of the bacterial communities in S. furcifera, whereas altitude had a marginal negative effect on microbiota diversity (Figure 8). In contrast, SEM analysis identified that altitude negatively impacted the Shannon index (r = −0.806, z = −2.096, p < 0.05, SEM), which differed from the results of the Pearson analysis (Figure 8G). SEM analysis further uncovered that temperature had a negative influence on both the Shannon index (r = −1.879, z = −2.527, p < 0.05, SEM) and the Simpson index (r = −3.189, z = −3.493, p < 0.001, SEM) (Figure 9).
Discussion
Symbiotic bacteria play a critical role in the biology, ecology, and evolution of their hosts, and various environmental factors significantly impact bacterial communities in invertebrate hosts. Herein, the effects of symbionts on the microbiota in S. furcifera populations and the significant effects of environmental factors on the bacterial communities of S. furcifera were analyzed. Our findings revealed that the presence of the symbionts Wolbachia and Cardinium in S. furcifera negatively influenced the abundance of numerous other bacteria. Additionally, Wolbachia infection significantly reduced the diversity of microbial communities in S. furcifera. Several environmental factors, including longitude, latitude, temperature, and precipitation, were found to impact the abundance of symbionts and the microbial diversity in S. furcifera.
In the current study, the bacterial community composition in S. furcifera was rich, dominated by two symbionts, namely Wolbachia and Cardinium (Figure 3). These results are consistent with the findings of previous studies on the S. furcifera microbiota (Li et al., 2020, 2021, 2022). Notably, the presence of the primary symbiont Portiera in the S. furcifera microbiota, at appreciable abundance, was noted in various populations such as RO (5.03 ± 10.38%, mean ± SEM) and KO (2.73 ± 1.40%, mean ± SEM) (Figure 3). The horizontal transmission of symbionts between different arthropod species has been well documented (Werren et al., 2008; Zhang et al., 2016), and research has established that horizontal transmission can occur between different phloem sap-feeding insect species, which may account for the presence of Portiera within S. furcifera (Gonella et al., 2015). Previous studies confirmed a vital influence of genetic background on the microbiome in invertebrates (Suzuki et al., 2019; Gupta and Nair, 2020); however, no direct relationship between the S. furcifera phylogenetic tree and the UPGMA hierarchical cluster diagram of the S. furcifera microbiome was discovered here (Supplementary Figures S2, S3).
Interactions among different bacteria, especially symbionts, are complex in arthropods. Specifically, the presence of one symbiont generally influences the infection patterns of other symbionts (Evans and Armstrong, 2006; Zhao et al., 2018; Fromont et al., 2019; Duan et al., 2020). This phenomenon is not surprising, given that symbionts are primarily restricted to bacteriocytes and reproductive tissues within invertebrate hosts, leading to competition for limited nutrition and space (Engel and Moran, 2013; Douglas, 2015; Yang et al., 2019). For example, in Laodelphax striatellus, Wolbachia infection negatively affects the abundance of 154 other bacterial genera within hosts (Duan et al., 2020). Conflicting interactions between symbionts, such as Cardinium and Hamiltonella in whiteflies, have been observed (Zhao et al., 2018), while other bacteria in honeybees decrease the development rate of the pathogen Paenibacillus larvae (Evans and Armstrong, 2006). Comparable results were obtained in our study: Wolbachia negatively influenced the abundance of 13 primary genera of bacteria, while Cardinium negatively influenced the abundance of 11 other genera (Figure 4). This observation suggests a competitive relationship among bacteria in S. furcifera. However, Cardinium infection was positively correlated with the abundance of Pseudomonas and Proteus, indicating a cooperative interaction between Cardinium and these two bacteria (Figure 4B). Similar interactions have been described in other studies. For example, the presence of Wolbachia in the fruit fly Drosophila neotestacea promotes infection by Spiroplasma (Fromont et al., 2019). In Laodelphax striatellus, Wolbachia infection increases the abundance of other bacteria, such as Spiroplasma and Ralstonia (Duan et al., 2020). These complex interactions among bacteria, especially symbionts, necessitate further exploration.
Symbionts play a pivotal role in shaping the microbial community of invertebrates and invariably influence the host's microbiota structure. Symbiont infections frequently lead to a reduction of bacterial diversity in arthropod hosts (Duan et al., 2020; Li et al., 2020, 2022). In the present study, Wolbachia infection decreased the diversity of the bacterial community in S. furcifera (Figure 5). This finding is in line with the results of previous studies in Aedes aegypti, wherein the presence of Wolbachia reduced the diversity of resident bacteria in mosquitoes (Audsley et al., 2018). Similar reductions in microbial diversity have been observed in the small brown planthopper Laodelphax striatellus (Zhang et al., 2020) and in Drosophila melanogaster (Ye et al., 2017). These studies collectively underscore the negative influence of symbionts on the bacterial communities of arthropods.
The influence of environmental factors on arthropod microbial communities has been explored in previous research. Studies of invertebrates worldwide indicate that temperature significantly affects the occurrence of Wolbachia and Cardinium infections in host arthropods (Charlesworth et al., 2019), and similar temperature effects on symbionts have been corroborated in other studies (Corbin et al., 2016). However, in this study, the impact of temperature on the symbionts Cardinium and Wolbachia was not significant in S. furcifera (Figures 6A, 7), suggesting weak temperature effects on symbiont infections in this species. Other environmental factors also influence symbiont infections in hosts. For example, in the spider mite Tetranychus truncatus, Wolbachia infection was significantly influenced by annual mean temperature, whereas the rates of Cardinium and Spiroplasma infection were correlated with altitude (Zhu et al., 2018). Additionally,
FIGURE 1. Relative infection abundance of Cardinium (left panel) and Wolbachia (right panel), and collection sites (middle panel), of Sogatella furcifera across 18 geographical locations, determined by 2bRAD-M sequencing. As the symbiont infection data do not follow a normal distribution, differences in Wolbachia abundance among the S. furcifera populations were analyzed by the Kruskal-Wallis test and Dunn's test with Bonferroni correction for multiple comparisons in SPSS 21.0; different letters represent significant differences.
FIGURE 3. Alpha diversities of all 18 Sogatella furcifera populations collected in Asia. The Shannon index (A), Simpson index (B) and Chao1 index (C) are presented based on the 2bRAD-M sequencing results. As the diversity data do not follow a normal distribution, they were analyzed by the Kruskal-Wallis test and Dunn's test with Bonferroni correction for multiple comparisons in SPSS 21.0; different letters represent significant differences.
FIGURE 4. Relationships between the proportions of the 13 other main bacteria and the proportion of Wolbachia (A) or Cardinium (B) among all 18 Sogatella furcifera populations, by Pearson correlation analysis (SPSS 21.0) based on the 2bRAD-M sequencing results. The r values and p values of each linear regression plot are provided. "ns" means not significant; asterisks indicate a significant difference between the two compared groups, *p < 0.05; **p < 0.01; ***p < 0.001.
FIGURE 5. Relationships between the abundance of Wolbachia and the Shannon diversity index (A), Simpson index (B) and Chao1 index (C) among all 18 Sogatella furcifera populations, by Pearson correlation analysis (SPSS 21.0) based on the 2bRAD-M sequencing results. The r values and p values of each linear regression plot are provided.
FIGURE 6. Relationships between the proportions of the 14 main bacteria and the annual mean temperature (A), annual precipitation (B), latitude (C) and longitude (D) among all 18 Sogatella furcifera populations, by Pearson correlation analysis (SPSS 21.0) based on the 2bRAD-M sequencing results. The r values and p values of each linear regression plot are provided. "ns" means not significant; asterisks indicate a significant difference between the two compared groups, *p < 0.05; **p < 0.01; ***p < 0.001.
FIGURE 7. Path diagram of the structural equation model (SEM) for environmental factors (annual mean temperature, annual precipitation, longitude, altitude and latitude) and symbiont abundances (Wolbachia and Cardinium). Statistically significant negative paths are indicated by blue arrows, and positive paths by red arrows. The r values in each box indicate the amount of variation in that variable explained by the input arrows. Numbers next to arrows are unstandardized slopes.
FIGURE 8. Relationships between the microbiota diversity indexes and environmental factors in Sogatella furcifera. The relationships between latitude and the Shannon diversity index (A), Simpson index (B) and Chao1 index (C); between longitude and the Shannon diversity index (D), Simpson index (E) and Chao1 index (F); and between altitude and the Shannon diversity index (G), Simpson index (H) and Chao1 index (I) are shown. All analyses used Pearson correlation analysis (SPSS 21.0) based on the 2bRAD-M sequencing results. The r values and p values of each linear regression plot are provided.
TABLE 1. Collection information for the Sogatella furcifera populations used in this experiment.
"year": 2024,
"sha1": "13a59a6b753ac58c2fea0143aea9fa72a2508e88",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1336345/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebb93e6231f6c0648eb9fefd0383c1ff7072169c",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
AGN feedback in action: a new powerful wind in 1SXPSJ050819.8+172149?
Galaxy merging is widely accepted to be a key driving factor in galaxy formation and evolution, while feedback from AGN is thought to regulate the BH-bulge coevolution and the star formation process. In this context, we focused on 1SXPSJ050819.8+172149, a local (z = 0.0175) Seyfert 1.9 galaxy (L_bol ~ 4x10^43 erg/s). The source belongs to an IR-luminous interacting pair of galaxies, characterized by a luminosity for the whole system (due to the combination of star formation and accretion) of log(L_IR/L_sun) = 11.2. We present the first detailed description of the 0.3-10 keV spectrum of 1SXPSJ050819.8+172149, monitored by Swift with 9 pointings performed in less than 1 month. The X-ray emission of 1SXPSJ050819.8+172149 is analysed by combining all the Swift pointings, for a total of ~72 ks XRT net exposure. The averaged Swift-BAT spectrum from the 70-month survey is also analysed. The slope of the continuum is ~1.8, with an intrinsic column density N_H ~ 2.4x10^22 cm^-2 and a deabsorbed luminosity L(2-10 keV) ~ 4x10^42 erg/s. Our observations provide a tentative (2.1 sigma) detection of a blue-shifted Fe XXVI absorption line (rest-frame E ~ 7.8 keV), suggesting the discovery of a new candidate powerful wind in this source. The physical properties of the outflow cannot be firmly assessed, owing to the low statistics of the spectrum and to the observed energy of the line, too close to the upper boundary of the Swift-XRT bandpass. However, our analysis suggests that, if the detection is confirmed, the line could be associated with a high-velocity (v_out ~ 0.1c) outflow most likely launched within 80 r_S. To our knowledge this is the first detection of a previously unknown ultrafast wind with Swift. The high N_H suggested by the observed equivalent width of the line (EW ~ -230 eV, although with large uncertainties) would imply a kinetic output strong enough to be comparable to the AGN bolometric luminosity.
Introduction
The observational evidence for the presence of inactive Super Massive Black Holes (SMBHs; M BH ∼ 10 6 − 10 10 M ⊙ ) at the centre of most, if not all, the local galaxies, and the observed correlation between several properties of the galaxy's bulge and the central SMBH mass (Ferrarese & Merritt 2000; Gebhardt et al. 2000), suggest that the SMBH accretion and the assembly of galaxy bulges are intimately related (see Kormendy & Ho 2013, for a recent review). Funnelling of gas in the nuclear regions, as triggered by galaxy interactions, can activate both efficient accretion onto the SMBH and a burst of star formation. A key ingredient in regulating their evolution should be the feedback from the Active Galactic Nuclei (AGN); being conservative, while building its mass the SMBH can release an amount of energy larger than ∼ 30 times the binding energy of the host bulge (see a review in Fabian 2012). Even if only a small fraction of this energy is transferred to the gas in the galaxy, then an active nucleus can have a profound effect on the evolution of its host (Di Matteo et al. 2005). Powerful (kinetic or radiative) outflows of gas driven by luminous quasars are invoked as a key mechanism to blow away the gas in the galaxy and thereby quench star formation, coincidentally starving the SMBH of fuel (King & Pounds 2015).
AGN winds with a range of physical properties have been revealed by observations at various energies, from radio up to X-rays. Outflows of molecular or neutral atomic gas, with velocities up to ∼ 1000 − 2000 km s −1 and extending on kpc scales, have been observed at mm (e.g., Feruglio et al. 2010; Cicone et al. 2014) and radio frequencies (e.g., Morganti et al. 2005; Teng et al. 2013) in a few dozen AGN in dusty star forming sources and/or radio galaxies. Mass outflows of ionized gas with similar velocities at distances consistent with the narrow line region zone have been detected in the optical/ultraviolet (UV), both in the [O iii] emission line profiles (e.g., Crenshaw & Kraemer 2005; Cano-Díaz et al. 2012; Cresci et al. 2015), and through the observation of broad absorption line systems (e.g., Dai et al. 2008; Borguet et al. 2013). In X-rays, mildly ionized warm absorbers are observed in more than half of unobscured AGN (Crenshaw et al. 2003; Blustin et al. 2005). The observed velocities of ∼ 500 − 1000 km s −1 imply a kinetic power rather low when compared to the bolometric luminosity. However, Crenshaw & Kraemer (2012) found that, when summed over all the absorbers, the total power carried by these structures can reach, in some cases, the minimum level required for AGN feedback (L/L bol ∼ 0.5 − 5%; e.g., Di Matteo et al. 2005; Hopkins & Elvis 2010). In recent years, highly blue-shifted Fe K-shell absorption lines at rest-frame energies E ≳ 7 keV, observed in XMM-Newton or Suzaku spectra of luminous AGN, revealed the presence of high column density (N H ∼ 10 23 cm −2 ) and fast (v > 0.1c) winds (e.g., Tombesi et al. 2010; Gofford et al. 2013; Tombesi et al. 2015). Their derived kinetic power is systematically higher than the minimum fraction of bolometric luminosity required by AGN feedback models in order to regulate the growth of an SMBH and its galactic bulge (e.g., Hopkins & Elvis 2010). Only very recently, outflows over a range of scales have been detected and studied within the same source, thus allowing us to explore the connection between large-scale molecular outflows and accretion-disk activity (Feruglio et al. 2015; Tombesi et al. 2015).
Finding observational evidence of the effects of such accretion-related feedback and characterising the magnitude of mass outflows from AGN are among the major challenges of current extragalactic astronomy. Here we present 9 coadded Swift observations (performed in less than 1 month) of the interacting infrared (IR) galaxy 1SXPS J050819.8+172149. The spectrum, the first for this source covering the energy range E ∼ 7 − 10 keV, provides us with a tentative detection of a possible new ultrafast wind (v out ∼ 0.1c). To our knowledge, this is the first time that a previously unknown outflow is revealed by Swift.
The source is described in Sect. 2, while the analysis of the Swift-XRT data is presented in Sect. 3 and the results are summarised in Sect. 4. Throughout the paper we assume a flat ΛCDM cosmology with H 0 = 71 km s −1 Mpc −1 , Ω Λ = 0.7 and Ω M = 0.3.
The high 8 − 1000 µm luminosity of the system, log(L IR /L ⊙ ) = 11.2 (Armus et al. 2009), implies a classification as Luminous Infrared Galaxy (LIRG, defined as having L IR ≃ 10 11−12 L ⊙ ; Sanders & Mirabel 1996). Included for this reason in the Great Observatories All-Sky LIRG Survey (GOALS; Armus et al. 2009), the system has been targeted with several observational facilities, and multiwavelength information (both spectroscopic and photometric) has been collected. The available data span from the UV (GALEX; Howell et al. 2010) up to the mid-IR (Spitzer/IRS, IRAC, and MIPS; Díaz-Santos et al. 2010; Petric et al. 2011; Inami et al. 2013; Stierwalt et al. 2013, 2014; see also Valiante et al. 2009; Alonso-Herrero et al. 2012) and far-IR (Herschel/PACS; Díaz-Santos et al. 2013) bands. Photometry from the 2MASS Redshift Survey is reported by Huchra et al. (2012). At radio wavelengths, the relatively strong (S ν ∼ 34 mJy) NVSS emission (Condon et al. 1998), detected halfway between the nuclei and slightly elongated in their direction, is probably due to the combined contribution of both galaxies.
The system is an early-stage merger (Stierwalt et al. 2013) known to host an AGN optically classified as Seyfert 1.9 (see e.g. Motch et al. 1998; Véron-Cetty & Véron 2001; Kollatschny et al. 2008). On the basis of the Spitzer/IRS spectra, the Seyfert 1.9 nucleus has been associated with the western galaxy of the system, coincident with 1SXPS J050819.8+172149 (Petric et al. 2011; Alonso-Herrero et al. 2012; Stierwalt et al. 2013, 2014; see also the optical classification reported by Alonso-Herrero et al. 2013). The Spitzer spectra are suggestive of a relatively unobscured AGN (consistent with the optical classification as Seyfert 1.9), energetically important in the mid-IR (Stierwalt et al. 2013). From a decomposition of the mid-IR spectra into AGN and starburst components, Alonso-Herrero et al. (2012) estimated a bolometric luminosity due to the accretion of L bol ∼ 10 10 L ⊙ . Instead, no signature of active accretion is found for the eastern galaxy, which shows all the typical properties of a star forming source both in the mid-IR spectra and from the UV photometry. The BH mass in 1SXPS J050819.8+172149 is among the highest observed in local LIRGs, M BH ∼ 1.15 × 10 8 M ⊙ (as calculated from the velocity dispersion of the core of the [O iii]λ5007 line; Alonso-Herrero et al. 2013). This implies that the black hole is radiating at a low fraction of its Eddington luminosity, with Eddington ratio log λ Edd ≡ log L bol /L Edd ∼ −2.5. The star formation rate (SFR) derived for this source falls in the lower tail of the distribution found for local LIRGs (Alonso-Herrero et al. 2013), implying a ratio between the SFR and the BH accretion rate of log SFR/ṁ BH ∼ 2 (by assuming a mass-energy conversion efficiency ε = 0.1), similar to the values found for Seyfert galaxies.
XRT data analysis
Besides the basic analysis reported in the Swift-XRT point source catalogue (Evans et al. 2014), in the soft-medium X-ray energy range (E ≲ 10 keV) the only published information up to now comes from the ROSAT All Sky Survey (see Kollatschny et al. 2008, and references therein), providing a soft X-ray luminosity (de-absorbed by our Galaxy) of L 0.1-2.4 keV ∼ 6.6 × 10 41 ergs s −1 . At higher energies, 1SXPS J050819.8+172149 has been detected by the Swift Burst Alert Telescope (BAT; see Baumgartner et al. 2013), while Ackermann et al. (2012) reported only a Fermi 95% confidence level upper limit of L 0.1-10 GeV ∼ 1.6 × 10 42 ergs s −1 .
Recently, our group has been awarded a Swift (Gehrels et al. 2004) program for this source (PI P. Severgnini): 9 pointings performed with the X-ray telescope (XRT; Burrows et al. 2005) in the standard photon counting (PC) mode between 2014-10-15 and 2014-11-11, for a total of ∼ 72 ks net exposure (ObsID from 00049706003 to 00049706011; see Table 1).
We generated images, light curves, and spectra, including the background and ancillary response files, with the online XRT data product generator (Evans et al. 2007); the appropriate spectral response files were identified in the calibration database. The source appears point-like in the XRT image and centred at the position of the western nucleus, without any evident elongation toward the position of the second galaxy. We note that at the angular resolution of XRT, 18 ′′ half-power diameter, the emission of the two galaxies, located at a distance of ∼ 29.4 ′′ , can be resolved; by assuming a power-law model with Γ = 2, we estimated a 3σ upper limit to the 0.3−10 keV emission at the eastern source position of ∼ 7 × 10 −14 ergs cm −2 s −1 . Source events were extracted from a circular region with a radius of 20 pixels (which corresponds to an encircled energy fraction of 90%; Moretti et al. 2005; 1 pixel ∼ 2.36 arcsec), while background events were extracted from an annular region centred on the source with inner and outer radii of 60 and 180 pixels, respectively; all sources identified in the image were removed from the background region.
We extracted a light curve binned at the duration of each individual observation. 1SXPS J050819.8+172149 was detected in all observations, with 0.3−10 keV signal-to-noise ratios (S /N) ranging from 15 to 44. The average count rates in the total (0.3 − 10 keV), soft (0.3 − 2 keV), and hard (2 − 10 keV) XRT energy ranges are ∼ 0.09, ∼ 0.03, and ∼ 0.07 counts s −1 , respectively. Small deviations from these values, of a factor lower than 2, are observed in the light curves; however, there is no evidence of spectral variability in the ratio of count rates observed in the hard and soft bands. We do not find any significant pile-up problem.
In order to increase the statistics, we co-added the XRT datasets. Source and background spectra were extracted from the merged event lists, and the former was binned in order to have at least 20 total counts per energy channel. The net count rates in the 0.3 − 10 keV, 0.3 − 2 keV, and 2 − 10 keV energy ranges are (8.2 ± 0.1) × 10 −2 , (2.3 ± 0.1) × 10 −2 , and (5.8 ± 0.1) × 10 −2 counts s −1 , respectively. The S /N achieved in the same energy ranges are 76, 40, and 64, respectively.
Spectral fits were performed in the 0.3−10 keV energy range using the X-ray spectral fitting package XSPEC (Arnaud 1996) v12.8.2. Uncertainties are quoted at the 90% confidence level for one parameter of interest (∆χ 2 = 2.71). All the models discussed in the following assume Galactic absorption with a column density of N H,Gal = 1.84 × 10 21 cm −2 (Kalberla et al. 2005). To model both Galactic and intrinsic absorptions we used the (z)phabs model in XSPEC, adopting cross-sections and abundances of Wilms et al. (2000).
A fit with a simple absorbed power law clearly provides a poor representation of the XRT data (χ 2 /d.o.f. = 280.8/229); the photon index is Γ = 1.49 ± 0.09 and the column density is N H = (1.8 ± 0.2) × 10 22 cm −2 . Above ∼ 5 keV, i.e. in the energy range where the iron K complex is expected, residuals are present both in emission and in absorption (see Fig. 1).

In principle, the emission observed at low energies can be associated with the accreting nucleus (i.e., due to scattering off optically thin ionised gas) and/or the host galaxy (a soft thermal emission is a characteristic signature in all known starburst galaxies). Phenomenologically, the observed soft excess can be accounted for equally well by adding to the previous model either an unabsorbed power law or a thermal component. In the latter parametrisation, the luminosity attributed to the thermal component is L mek 0.5-2 keV ∼ 2.6 × 10 40 ergs s −1 , which would imply a SFR ∼ 0.5−2.9 M ⊙ yr −1 (e.g., Ranalli et al. 2003; Mas-Hesse et al. 2008; Mineo et al. 2012), consistent with the nuclear star formation properties of the host (see Sect. 2). In both cases, the luminosity observed in the range covered by ROSAT, L 0.1-2.4 keV ∼ 5.6 × 10 41 ergs s −1 , is in agreement with the value derived by Kollatschny et al. (2008).
While the most plausible hypothesis is that both components contribute to the emission observed at low energies in the XRT spectrum, the quality of the present data does not allow us to discriminate between the two contributions, and even less to disentangle them. In the following, we assume an unabsorbed power law, checking that the inclusion of a thermal component in place of the power law does not affect the main results presented here.
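For concreteness, a minimal PyXspec sketch of such a baseline fit (Galactic absorption times an intrinsically absorbed power law plus an unabsorbed scattered one) is given below. It is an illustration only: the file name and all starting parameter values are assumptions chosen to mirror the numbers quoted in this section, not the actual data products of the analysis.

```python
# A minimal PyXspec sketch of the baseline continuum fit discussed above:
# Galactic absorption * (intrinsically absorbed power law + unabsorbed,
# scattered power law). File name and starting values are illustrative.
from xspec import Spectrum, Model, Fit, Xset

Xset.abund = "wilm"                   # Wilms et al. (2000) abundances, as in the text

s = Spectrum("xrt_merged.pha")        # hypothetical merged XRT spectrum
s.ignore("**-0.3 10.0-**")            # keep the 0.3-10 keV band (values in keV)

m = Model("phabs*(zphabs*zpowerlw + zpowerlw)")
# Parameters, in order: phabs nH; zphabs nH, z; Gamma, z, norm (absorbed);
# Gamma, z, norm (scattered). nH is in units of 10^22 cm^-2.
m.setPars(0.184, 2.4, 0.0175, 1.8, 0.0175, 1e-3, 1.8, 0.0175, 2e-5)
m(1).frozen = True                    # freeze the Galactic column
# In a full analysis the scattered photon index (par 7) would be tied to
# the primary one (par 4), and uncertainties estimated with Fit.error().

Fit.query = "yes"
Fit.perform()
print("chi^2 / d.o.f. =", Fit.statistic, "/", Fit.dof)
```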
The addition of a narrow (50 eV) Gaussian emission line to the absorbed plus unabsorbed power laws results in an improvement in the fit (∆χ 2 /∆d.o.f. = 11.3/2). The line parameters are E = 6.40 ± 0.09 keV, consistent with neutral Fe Kα, I = (1.5 ± 0.7) × 10 −5 photons cm −2 s −1 , and EW ∼ 200 eV. The strength of the Fe Kα line, coupled with the hard photon index (Γ ∼ 1.62), could suggest the presence of neutral reflection (Reynolds et al. 1994; Matt et al. 1996, 2000). This possible component was then included in the model by replacing the Gaussian emission line with the pexmon reflection model.

Table 2. Summary of the Swift (XRT+BAT) parameters of the best-fit model described in Sect. 3, where a high-energy absorption feature is superimposed on a continuum composed of an absorbed power law plus a reflection component, with the addition of a scattered power law. Notes. Errors are quoted at the 90% confidence level for 1 parameter of interest (∆χ 2 = 2.71).

To improve the determination of the slope of the primary power law and the amount of reflection, it is fundamental to know the shape and intensity of the emission at energies higher than 10 keV. Therefore we fitted the XRT data simultaneously with the averaged Swift-BAT spectrum of 1SXPS J050819.8+172149 obtained from the 70-month survey archive (SWIFT J0508.1+1727). The data reduction and extraction procedure of the eight-channel spectrum is described in Baumgartner et al. (2013). To fit the pre-processed, background-subtracted BAT spectrum, we used the latest calibration response as of 2013 May. 1SXPS J050819.8+172149 was detected in the 14 − 100 keV band with a count rate of (3.9 ± 0.3) × 10 −5 counts s −1 , which corresponds to a 14 − 195 keV flux of (2.6 ± 0.5) × 10 −11 ergs cm −2 s −1 (Baumgartner et al. 2013).
During the fit, the only free parameter of the pexmon component was the reflection scaling factor, R. We tied the pexmon photon index and normalization to those of the primary power law, and we fixed the cutoff energy at 1000 keV (i.e., consistent with no measurable cut-off), the inclination angle at 60°, and the abundances of heavy elements at their Solar values. We also allowed the cross-normalisation factor between the XRT data and the average BAT spectrum to vary.
The baseline model then consists of an absorbed power law and an unabsorbed one plus a reflection component: the photon index and intrinsic absorption of the primary emission are Γ = 1.75 ± 0.09 (in agreement with the values typically observed in unobscured AGN; Piconcelli et al. 2005;Mateos et al. 2010;Corral et al. 2011) and N H = (2.4±0.2)×10 22 cm −2 , while the reflection fraction and the strength of the scattered component are R ∼ 1.1 and A scatt /A intr ∼ 2%. The cross-normalisation factor between the XRT data and the average BAT spectrum is 0.96.
The best-fitting parameters are reported in Table 2, while the unfolded XRT and BAT spectra are shown in Fig. 3. We note that the significance of the detection can depend on the accuracy of the determination of the continuum shape. In particular, we checked whether the observed shape could be explained by a mix of a stronger reflection edge plus a steeper and lower continuum. However, even if we assume the combination of reflection strength and intrinsic spectral shape that minimises the intensity of the trough, the normalisation of the Gaussian line is still inconsistent with the null value at a confidence level of 90%, in agreement with the significance of the detection derived from the simulations (see below).
The high energy range where the line is observed, very near to the end of the XRT bandpass, could raise concerns of possible artefacts due to the background. However, the S/N and the number of source counts collected between 8 and 10 keV (∼ 7 and ∼ 56, respectively; last 3 bins in Fig. 2) strongly support that the spectral rise that defines the line is real. We performed extensive simulations testing the null hypothesis that the spectrum is well fitted by a model that does not include the 7.8 keV absorption feature, as done in Markowitz et al. (2006; see also Porquet et al. 2004). Briefly, to take into account the observed background and the uncertainty in the continuum, we first generated a fake spectrum for a 72 ks exposure assuming the best fit found for the continuum. We then fitted this model to the fake spectrum, and starting from the new best-fit parameters we re-ran a simulation with the same exposure. The baseline model has been fitted to this final fake spectrum, and the derived χ 2 has been compared with the minimum value of χ 2 obtained when a narrow (σ = 10 eV) Gaussian component was included.
We stepped the centroid energy of the absorption line over the 6.5 − 9 keV range in increments of 0.125 keV, fitting separately each time to derive the lowest value of χ 2 . The whole process has been repeated 400 times, and we estimated a 4% probability of detecting a similar feature by chance.

The observed energy of the line suggested by the data is not consistent with any of the atomic transitions expected at energies ≲ 7 keV (e.g., Kallman et al. 2004; see also the NIST Atomic Spectra Database, http://physics.nist.gov/PhysRefData/ASD/lines_form.html). The most likely explanation is that the centroid of the line is blueshifted, implying that the material responsible for the observed feature is outflowing. Prime candidates for the origin of the line are the inner K-shell resonances of moderately to highly ionized Fe, as observed in other AGN. Indeed, assuming Kβ absorption by moderately ionized Fe, we would expect to observe also strong Kα absorption by the same species (see e.g. the detailed discussion in Markowitz et al. 2006). Being conservative, we can identify the line with the Fe xxvi resonant absorption at E = 6.966 keV; in this case, the observed centroid would indicate a substantial blueshifted velocity of (0.11 ± 0.03)c. An origin in Fe Kα at a lower ionization would imply an even higher blueshift.
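The null-hypothesis test described above can be illustrated with a simplified Monte Carlo sketch. The toy model below replaces the full XSPEC refitting with an idealized binned spectrum, a fixed continuum, and an analytic best-fit line amplitude, so it captures only the logic of the procedure (Poisson realizations of the continuum, a narrow Gaussian stepped over 6.5−9 keV in 0.125 keV increments, the best ∆χ 2 recorded per trial); the continuum shape and the ∆χ 2 threshold are illustrative assumptions, not the values used in the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized binned spectrum standing in for the null-hypothesis best fit;
# grid, normalization, and the observed delta-chi2 threshold are assumptions.
E = np.arange(0.3, 10.0, 0.1)            # bin centres, keV
null_counts = 100.0 * E ** -1.8          # expected counts per bin

def chi2(data, model):
    return np.sum((data - model) ** 2 / np.maximum(model, 1.0))

def delta_chi2_best_line(data):
    """Best chi2 improvement from one narrow Gaussian whose centroid is
    stepped over 6.5-9 keV in 0.125 keV increments, as in the text."""
    base = chi2(data, null_counts)
    best = base
    for Ec in np.arange(6.5, 9.0001, 0.125):
        prof = np.exp(-0.5 * ((E - Ec) / 0.05) ** 2)   # narrow line profile
        w = np.maximum(null_counts, 1.0)
        # analytic best-fit amplitude (negative = absorption, positive = emission)
        amp = np.sum((data - null_counts) * prof / w) / np.sum(prof ** 2 / w)
        best = min(best, chi2(data, null_counts + amp * prof))
    return base - best

dchi2_obs = 11.0                         # illustrative observed improvement
hits = sum(
    delta_chi2_best_line(rng.poisson(null_counts).astype(float)) >= dchi2_obs
    for _ in range(400)                  # 400 trials, as in the text
)
print(f"chance probability ~ {hits / 400:.3f}")
```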
The observed outflow velocity of v out ∼ 0.11c translates into a lower limit on the radial distance, corresponding to the escape radius at which the material is able to leave the system. When a spherical geometry is assumed, this limit is R ≥ R_esc = 2GM/v_out^2, where M is the enclosed mass producing the inward gravitational force. Under the reasonable hypothesis that this corresponds to the mass of the central BH, assuming the estimate reported in the literature (M BH ∼ 1.15 × 10 8 M ⊙ , see Sect. 2), we have R_esc ∼ 2.7 × 10 15 cm. This translates to a possible launch radius of ∼ 10 −3 pc, or ∼ 80 r_S (Schwarzschild radii, r_S = 2GM_BH/c^2).
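As a cross-check of the numbers quoted above, the outflow velocity and the escape/launch radius can be reproduced with a few lines of Python; the relativistic Doppler formula for a purely radial outflow and the black-hole mass quoted in Sect. 2 are the only inputs.

```python
# Constants in SI units
G, c, M_sun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.086e16

E_rest, E_obs = 6.966, 7.8               # keV: Fe XXVI rest energy vs. observed centroid
# Relativistic Doppler shift for a radial outflow: E_obs/E_rest = sqrt((1+b)/(1-b))
r2 = (E_obs / E_rest) ** 2
beta = (r2 - 1.0) / (r2 + 1.0)
print(f"v_out ~ {beta:.2f} c")           # ~0.11c, as quoted in the text

M_BH = 1.15e8 * M_sun                    # BH mass from Alonso-Herrero et al. (2013)
v = beta * c
R_esc = 2 * G * M_BH / v ** 2            # escape radius for this velocity
r_S = 2 * G * M_BH / c ** 2              # Schwarzschild radius
print(f"R_esc ~ {R_esc * 100:.1e} cm ~ {R_esc / pc:.0e} pc ~ {R_esc / r_S:.0f} r_S")
# -> ~2.7e15 cm, ~1e-3 pc, ~80 r_S, matching the estimates in the text
```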
The quality of the available data prevents a more detailed spectral analysis, but the ionization level of the Fe responsible for the absorption and the observed EW ∼ −230 eV (though with large uncertainties) would suggest that we are dealing with a high-ionization (ionization parameter log ξ ∼ 3 ergs cm s −1 ), high-column density (N H ∼ 10 23 cm −2 ) outflow (e.g., Gofford et al. 2013).
Summary
In this paper, we have reported on our 0.3−10 keV observation of the Seyfert 1.9 galaxy 1SXPS J050819.8+172149, a member of the local LIRG pair known as CGCG 468-002. The Swift-XRT data have been analysed jointly with the Swift-BAT spectrum, averaged over 70 months. The continuum is well described by an intrinsic power law with photon index Γ ∼ 1.8, absorbed by a column of N H ∼ 2.4 × 10 22 cm −2 . A reflected component is also observed, with a reflection fraction of ∼ 1.4, while a weak soft scattered component (∼ 2% of scattering fraction) can account for the observed soft emission. The de-absorbed luminosity is L 2-10 keV ∼ 4 × 10 42 ergs s −1 .
Our Swift monitoring (the first observation of this source extending up to energies E ∼ 7 − 10 keV), performed in less than 1 month, provides us with a tentative detection (∼ 2.1σ significance) of an absorption trough at a rest-frame energy of ∼ 7.8 keV. When the feature is described by a simple Gaussian absorption line, its properties (e.g., energy and EW) are consistent with an origin in material moving with a velocity of ∼ 0.11c. To our knowledge, this would be the first detection with Swift of a previously unknown high-velocity outflow.
The low statistics of the data and the high energy of the observed residuals, near the upper boundary of the XRT bandpass, do not allow us to test more physically consistent models (e.g., grids of photoionized absorbers generated with the XSTAR photoionization code; Kallman et al. 2004). However, if the detection is confirmed, the observed EW and the derived velocity suggest physical parameters typical of an extremely powerful outflow, as observed in only a handful of AGN (e.g., the Ultra Luminous Infrared Galaxy/quasar Mrk 231; Feruglio et al. 2015). In this case, the kinetic output could match or exceed the typical fraction of bolometric luminosity required for AGN feedback.
In fact, 1SXPS J050819.8+172149 could resemble Mrk 231: the source studied here is hosted in a star forming merging system and (possibly) shows evidence of a powerful disk wind. However, these characteristics are combined in 1SXPS J050819.8+172149 with a lower level of activity, both of accretion and of star formation, than observed in Mrk 231. This would make 1SXPS J050819.8+172149 quite unique among the extremely powerful AGN winds studied so far. Merging systems such as the one hosting this source are the objects where we expect to best observe the interplay between star formation and accretion, since both phenomena can be triggered by galaxy interactions. Indeed, they are the objects where the coexistence of disk winds and molecular outflows has been found so far (e.g., Mrk 231, Feruglio et al. 2015; IRAS F11119+3257, Tombesi et al. 2015). In addition, comparing the accretion and star forming properties reported in Sect. 2, 1SXPS J050819.8+172149 seems to be one of the few examples of a source that, after a recent episode of star formation, is in a transition phase between a star forming-dominated (HII-LIRG) and an accretion-dominated (Seyfert-LIRG) state (Alonso-Herrero et al. 2013). The quenching of the star formation can be related (at least partly) to the increase of the AGN activity, as expected from the co-evolution models. An outflow powerful enough to affect the environment beyond the SMBH's gravitational sphere of influence, such as the one possibly detected in the XRT data, could in principle play a significant role in this process.
Better statistics and higher resolution observations, extending to energies above ∼ 10 keV, are needed in order to confirm the presence of the feature, improve the significance of this detection and assess the properties of the associated wind, namely the column density and the ionisation state, and then the radial location with respect to the central source. The knowledge of these parameters would allow us to estimate the mass outflow rate and the kinetic power, to be compared with the energetics of the accretion. | 2015-06-30T09:22:02.000Z | 2015-06-30T00:00:00.000 | {
"year": 2015,
"sha1": "9e77b515f1a0c80d028bb8658a1cdf267dfb44e4",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2015/09/aa26571-15.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "9e77b515f1a0c80d028bb8658a1cdf267dfb44e4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251352961 | pes2o/s2orc | v3-fos-license | Advanced Collagen-Based Composites as Fertilizers Obtained by Recycling Lime Pelts Waste Resulted during Leather Manufacture
Recent trends in ecological agriculture practices are focused on finding optimal solutions for the reuse and recycling of pelt waste from the tannery industry. In this context, new collagen-based hydrogels with encapsulated NPK nutrients have been functionalized with synthetic and natural additives, including starch and dolomite, to be used as composite fertilizers. Possible interaction mechanisms are presented for each synthetic or natural additive, ranging from strong linkages resulting from esterification reactions to hydrogen bonds and ionic valences. Such interactions are responsible for nutrient release towards soil and plants. These fertilizers have been adequately characterized for their physical-chemical and biochemical properties, including nutrient content, and tested on three poor Greek soils and one normal Romanian soil sample. A series of agrochemical tests have been developed by evaluating the uptake and leaching of nutrients on mixtures of sand and soils. It was observed that the clay soil exhibits a higher adsorption capacity than the loam soil for most of the nutrients leached from the composite fertilizers tested, which correlates with a slower controlled release towards cultivated plants, thus assuring the efficiency of these collagen-based composite fertilizers. The most significant effect was obtained in the case of the collagen-based fertilizer functionalized with starch.
Introduction
According to the principle of "circular economy", most of the treatment techniques applied to waste mainly aim at their reuse and recycling while substantially reducing environmental pollution [1]. In this respect, recent strategies have been launched to treat organic waste in order to obtain valuable polymer composites to be recycled in industry and agriculture. Indeed, agricultural productivity can be much improved by using different organic and inorganic amendments, such as brown coals and natural zeolites [2], biochar and straw [3], bio-solids [4], organic compost produced from slaughterhouse waste [5], micro-algal biomass [6,7], farmyard manure and crop residues [8], etc.
These treatments are made in order to ameliorate the biological transformation of organic matter, obtaining a stabilized polymeric bio-composite and avoiding potential risks for plants and soils when these products are applied as organic amendments. By adding such organic amendments as compost into the soil, major influences are exerted on the physical-chemical properties of the soil, able to modify the acid-base relationships, with beneficial effects on increasing the total cationic exchange and buffering capacities of the soils, as has been intensively studied during the last decades [9][10][11].
Our interest is focused on bovine leather recycling, which is generally directed towards obtaining protein composites by biochemical treatments using microbial enzymes, yielding protein hydrolysates and protein binders with different uses. Such organic biopolymers are an important source of raw materials for agriculture, as the protein waste matrix provides sufficient elements to improve the composition of poor and degraded soils, and many plants can benefit from elements such as nitrogen, calcium, magnesium, sodium, and potassium [12,13].
One way to valorize untreated fleshing and trimming bovine hide waste is to produce three-dimensional molecular networks, named hydrogels, by cross-linking the hydrolyzed proteins with polymers based on polyacrylamide, polyvinyl alcohol, oligo oxyethylene methacrylate, acrylic acid, maleic acid, cellulose, starch, and gum. The hydrogels enriched with C, N, P, and K nutrients can be used as efficient amendments in agriculture for degraded soils [14,15].
Such an approach meets the stringent demand to counteract the decline of soil fertility and productivity, which is in line with growing interest in the general improvement of the quality of soils by adding organic amendments from different sources [16,17].
Indeed, for ecological reasons, other solutions for the reuse and recycling of pelt waste from the tannery industry have been proposed, one of them being collagen hydrolysate with incorporated micronutrients, to be used as a fertilizer for the rehabilitation of poor soils [18][19][20].
Our previous works [21][22][23] reported on the synthesis and physical-chemical characterization of novel collagen-based hydrogels by recycling the protein hide waste from the leather industry in ecological conditions, obtaining advanced collagen-based composites with N, P, and K nutrients encapsulated and functionalized with two synthetic polymers, namely poly-acrylamide and poly(sodium 4-styrenesulfonate-co-glycidyl methacrylate) (P(SSNa-co-GMA)), and two natural compounds, namely starch and dolomite, in order to be used as fertilizers for poor soil amendment. The reason why the last two natural compounds were selected for collagen functionalization was to test their efficacy in improving fertilizer quality by using starch as an organic and dolomite as a mineral amendment, as they are both available in high amounts and as waste to be recycled.
The current work aims at testing these new biocomposite fertilizers for their nutritive characteristics and nutrient-leaching properties. In this respect, a detailed study of the physical-chemical and biochemical characteristics of both fertilizers and soils was conducted in order to establish their biocompatibility and possible interactions. For each encapsulated additive, the intercalation mechanisms were analyzed and correlated with the uptake and leaching of nutrients on sand and soils. The beneficial effects in improving soil quality were proven by testing the biological amelioration of soil fertility.
Materials
Multipolymeric collagen-based agro-hydrogels were prepared using as raw material the limed, dehaired hide waste from the fleshing and trimming of bovine hides (lime fleshing), which was provided by the SC PIELOREX tannery, Jilava, Ilfov county, Romania. Collagen hydrolysate was obtained according to our previous studies [24,25]. Briefly, the hide gelatin was subjected to acid hydrolysis in the presence of potassium phosphate, and the protein hydrolysate was functionalized with 5% synthetic or natural polymers or mineral waste, namely poly-acrylamide, poly(sodium 4-styrenesulfonate-co-glycidyl methacrylate) (P(SSNa-co-GMA)), starch, or dolomite, in order to obtain efficient composite fertilizers.
It is obvious that during the acid hydrolysis for obtaining the collagen hydrogel, the calcium content provided by lime is almost completely removed, so its remnant contribution is minor. Exchangeable (accessible, mobile) phosphorus (EP%) was determined in two steps: extraction according to the Egner-Riehm-Domingo procedure, followed by spectrophotometric determination (CINTRA 404 UV-VIS spectrometer) with molybdenum blue after reduction with ascorbic acid [26].
Biochemical Analysis of Soils
Soil samples considered as poor soils were collected from three different regions of western Greece: Kernitsa Achaias (S1), Neochori Messolonghiou (S2), and University Patras (S3). For comparison purposes, a fourth sample of normal soil was collected from Aldeni, Buzau, Romania (S4). They belong to the following textural classes: S1-loam soil (S1-L), S2-clay loam soil (S2-CL), S3-silty clay loam (S3-SiCL), and S4-sandy clay loam (S4-SCL). The fertility state of the poor and normal soils was assessed as a function of their nutrient content; indeed, for samples S1-S3, extractible phosphorus was below the detection limit, while extractible potassium was below 50 mg/kg K 2 O, which corresponds to a low-to-reduced fertility state. In contrast, for soil S4, phosphorus was present at 21 mg/kg P 2 O 5 and potassium at 144 mg/kg K 2 O, values corresponding to a high fertility state (>20 mg/kg P 2 O 5 and >80 mg/kg K 2 O).
All samples were collected in sterile containers at a depth of 10 to 30 cm and stored at 4 °C in the dark for no more than 48 h prior to microbial examination. Serial dilutions of the soil samples were made by adding sterile deionized water in order to achieve suspensions of 10 −1 to 10 −9 g of soil/mL. In each of the techniques described below, the appropriate dilutions within this interval were chosen for application on the selective media, and the developed microorganisms are expressed as colony-forming units per gram (CFU/g). Each experiment was conducted in triplicate.
The number of CFU of cultivable aerobic mesophilic bacteria was determined by the standard plate counting method: 1 mL of each chosen diluted sample, prepared as described above, was added to a Petri dish plate (9 cm), followed by the addition of 15 mL of Plate Count Agar (PCA) (Condalab, Madrid, Spain), which was prepared according to the manufacturer's instructions. Plates were then incubated aerobically for 24 to 72 h at 30 ± 1 °C. In order to determine the number of CFU of cultivable proteolytic bacteria present in the soil samples, the proper dilutions were plated onto Plate Count Agar supplemented with 10 g/L of skimmed milk (PCA-SM) (Condalab, Madrid, Spain) in Petri dishes (9 cm). Plates were incubated aerobically for 24 to 72 h at 30 ± 1 °C. The number of CFU of cultivable soil yeasts and molds (fungi) was determined using the proper dilutions of the soil, spread on Petri dishes (9 cm) containing 20 mL of Rose Bengal Chloramphenicol Agar (Condalab, Madrid, Spain), which was prepared according to the manufacturer's instructions. All plates were incubated aerobically for 3 to 5 days at 20.5 ± 1 °C. To determine the number of CFU of cultivable actinomycetes in the soil samples, the proper dilutions of soil were used to inoculate Petri dishes (9 cm) containing 20 mL of Sheep Blood Agar (Condalab, Madrid, Spain), which was prepared according to the manufacturer's instructions. All plates were incubated in anaerobic conditions for 3 to 7 days at 35 ± 1 °C.
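For reference, the plate counts obtained with these methods convert to CFU per gram through the dilution factor and the plated volume; a minimal sketch, with made-up triplicate counts, is given below.

```python
def cfu_per_gram(colonies, dilution, volume_ml=1.0):
    """Colony-forming units per gram of soil.
    dilution: grams of soil per mL in the plated suspension (e.g., 1e-6)."""
    return colonies / (dilution * volume_ml)

# Hypothetical triplicate counts from the 10^-6 dilution of one soil sample
counts = [142, 151, 138]
mean_cfu = sum(cfu_per_gram(n, 1e-6) for n in counts) / len(counts)
print(f"{mean_cfu:.2e} CFU/g")   # ~1.4e8 CFU/g in this made-up example
```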
Microbial biomass of Romanian soil S4-SCL amended with fertilizer was determined by indirect methods, such as substrate-induced respiration and fumigation extraction methods [28], according to SR ISO 14240-2001, Part 1 and Part 2.
Evaluation of the functional bacteria among the soil microorganisms was made by the CLPP (Community-Level Physiological Profiling) method [29].
Evaluation of Exchangeable NPK Nutrients of Agro-Hydrogels in Soils
Method. Soil poor in nutritive elements, sampled from the western Greece region, was placed in an aluminum vessel and mixed with the agro-hydrogel as liquid, dried, or gel in various final proportions: 1%, 5%, 10%, 30%, and 50%. Portions of these mixtures were sampled at different time intervals (e.g., 1, 3, 7, 14, 21, and 30 days) and analyzed for ammonium leaching. Exchangeable ammonium was evaluated by elution of the soil mixture with a 2.0 M KCl solution, and ammonium nitrogen was determined by a modified Kjeldahl method. The experiments were performed in triplicate. The graphs show the average values; the standard deviation was within ±5%.
Determination of Leached Ammonium and Phosphate Ions
The agrochemical tests for the characterization of slow-release fertilizers and the determination of their percolation degree in soils are mainly defined by SR EN 13266/2002 (Fertilizers with slow dissolution rate. Determination of nutrient leaching) and SR CEN/TR 14405/2009 (Waste characterization. Behavioral tests on leaching. Percolation test in ascendant counter-flux).
Method. An amount of 5 g of each fertilizer tested was completely dissolved in 5 L of distilled water and transferred through a column. A constant elution flux of 225 mL/h was assured for each fertilizer. Volumes of 200 mL of leachate were collected and analyzed for their ammonium, nitrate, and phosphate ion contents. Experiments were performed at 25 °C in triplicate.
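Since the elution flux (225 mL/h) and the fraction volume (200 mL) are fixed, each measured fraction concentration maps directly onto an elapsed time and a cumulative released percentage; the short sketch below illustrates this bookkeeping with hypothetical concentrations and an assumed initial nutrient loading.

```python
# Cumulative nutrient release from successive 200 mL leachate fractions.
# The concentrations below are hypothetical; initial_mg is the nutrient mass
# loaded on the column (5 g of fertilizer x an assumed nutrient content).
fraction_ml = 200.0
flux_ml_h = 225.0
conc_mg_L = [120.0, 65.0, 30.0, 12.0, 5.0, 2.0]   # mg/L in each collected fraction
initial_mg = 5 * 75.0                             # e.g., 75 mg P per g of fertilizer

released = 0.0
for i, c in enumerate(conc_mg_L, start=1):
    released += c * fraction_ml / 1000.0          # mg released in this fraction
    t_h = i * fraction_ml / flux_ml_h             # elapsed elution time, hours
    print(f"t = {t_h:4.1f} h : cumulative release {100 * released / initial_mg:5.1f} %")
```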
Characterization of Compounded Hydrogels with Nutrients Encapsulated
Physical-chemical characterization of the composite agro-hydrogels is given in Tables 1 and 2. The fertilizer quality is mainly conferred by the content of NPK nutrients available for plant growth and their leaching into soil solutions. The absolute values of the nutrient content are reflected by the elemental analysis data given in Table 1. According to these data, one may observe that all the fertilizers exhibit quite similar chemical compositions, as functionalization with 5% synthetic or natural additive does not result in a major change in the proportions of the NPK content; this corresponds to the general chemical formula N 10 P 6 K 10 , which is significant for agricultural application.
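As an illustration of how such an atomic formula follows from elemental analysis, the sketch below converts mass fractions to an atomic N:P:K ratio; the wt% values are placeholders chosen only to give ∼N10P6K10, since the actual Table 1 values are not reproduced here.

```python
# Convert elemental mass fractions (wt%) to an approximate atomic NPK ratio.
# The wt% values are illustrative, not the measured Table 1 data.
atomic_mass = {"N": 14.007, "P": 30.974, "K": 39.098}
wt_percent  = {"N": 7.0, "P": 9.3, "K": 19.5}        # hypothetical values

moles = {el: wt_percent[el] / atomic_mass[el] for el in atomic_mass}
scale = 10.0 / moles["N"]                             # normalize N to 10
ratio = {el: round(moles[el] * scale) for el in moles}
print(ratio)   # -> {'N': 10, 'P': 6, 'K': 10}
```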
After the preliminary agrochemical tests, special attention was directed to the collagen hydrogels functionalized with the natural compounds starch (AMI) and dolomite (DO), which fulfilled both the requirements referring to fertilization efficiency and those related to economic aspects, as these additives are available as waste and are therefore cost-effective. In this respect, the agrochemical characteristics of the AMI and DO fertilizers were determined from the values of the water-soluble nutrient content, expressed as total nitrogen, phosphorus, and potassium, and the total content of soluble salts. The values of pH and electrical conductivity were determined in aqueous suspensions of various concentrations ranging between 0.5% and 10%. The results are collected in Table 2, and the methods used are in accordance with European Commission Regulation 2003/2003, adjusted due to the organic matrix of these fertilizers, and also in accordance with the procedure described in [22].
Biochemical Characterization of Soils
The data on the microbial analysis of the Greek soils by the standard plate counting method are available in Table 3. As can be seen from Table 3, the number of microorganisms in samples S1-L to S3-SiCL is on the order of 10 6 CFU/g, which indicates a low microbial activity, with these soils being poor in organic compounds.
For a normal soil sample, special microbiological tests were conducted in order to prove the biological amelioration of soil fertility when applying the best-performing collagen hydrogel, the one functionalized with starch. These experiments were performed on a sandy clay loam soil (S4-SCL, Aldeni-Buzau, Romania), having as reference sample the moist soil, compared with two samples of soil moistened with 0.1% and 0.2% aqueous suspensions of the AMI hydrogel. The main biological indicators are collected in Table 4. This normal soil sample (S4-SCL) exhibits an initial number of microorganisms on the order of 10 9 CFU/g, being a fertile soil rich in biodegradable organic compounds when compared with the data in Table 3 for the three poor soils, which are characterized by a number of microorganisms three orders of magnitude lower.
Agrochemical Tests of New Fertilizers on Soils
Tests on the leaching of ammonium and phosphate ions from the composite fertilizers were performed on columns filled with gravel and sand in one series of experiments, and in another series on columns filled with gravel and a mixture of sand with each of two Greek soils (80 g sand and 20 g soil from samples S1-L and S2-CL). The initial contents of nitrogen and phosphorus as water-soluble ions in the fertilizers tested (Ref-CH, AMI, and PSSG) are as follows: 426.5, 363.75, and 588.5 mg N/g and 75, 85, and 83 mg P/g, respectively.
The evolution in time of the leaching of ammonium and phosphate ions on columns with gravel and sand is illustrated in Figure 1a,b.
From the analysis of Figure 1a, one may observe that, in the case of the starch-based AMI fertilizer, 95% of the nitrogen as ammonium leached during the first 7 days, while the remaining 3% was released during the next 13-day period.
In the case of the PSSG fertilizer, four zones can be distinguished. One zone of rapid release was in the first 2 days, when circa 45% of the ammonium nitrogen leached, followed by a slow-release zone of 2% during the next 10 days; the third zone was once again a rapid release of circa 43% during 20 days, and the remaining nitrogen was slowly released at a rate of 3% per day. The difference between the ammonium-releasing rates of these two fertilizers is in accordance with the stronger covalent bonds established in the case of the synthetic polymer PSSG, where esterification reactions involve various functional groups of the collagen chains, while in the case of starch, hydrogen bonds are mainly responsible for the affinity of the bio-fertilizer components, as will be discussed in the next section.
The reference fertilizer Ref-CH, not functionalized, has a relatively constant rate of ammonium nitrogen leaching after 2-3 days, when 20% of the nitrogen was rapidly released; during the next 28 days, the rate of release was 2%/day, while in the last 40 days, the release slowed down to 2%/day.

In contrast to the ammonium nitrogen release, in the case of phosphorus the leaching behavior is quite similar for the three fertilizers tested on sand: very rapid release during the first five days, when over 90% of the phosphorus was released by the Ref-CH and PSSG fertilizers and over 95% in the case of AMI. In the next 60 days, the rest of the phosphorus was released, up to 98% in the case of AMI, 92% for Ref-CH, and 80% for PSSG. This similar behavior of the three fertilizers can be explained by the similar structures of these composites, in which the polymers are encapsulated inside the hydrogel matrix between two collagen chains, while the potassium phosphate groups remain outside the capsule, facilitating their easier release towards soil and plants.
The above experiments can be relevant for the sandy soil sampled from Romania (S4-SCL). Similar experiments were carried out for the leaching of ammonium and phosphate ions from these fertilizers on columns filled with a mixture of sand and two of the tested poor soils (S1-L and S2-CL) in a 4:1 proportion of sand to soil.
Aliquots sampled from solutions of each fertilizer were analyzed before (fraction adsorbed) and after passing (fraction leached) through the columns filled with mixtures of sand and soils S1-L and S2-CL, respectively; the data are graphically represented in Figures 2a-5a and 2b-5b for the fractions adsorbed and leached, respectively, while the most representative events during their evolution in time are collected in Table 5.

From Figure 2a, one may observe that the soil S1-L reached the maximum adsorption capacity of phosphorus after one hour in the case of the AMI fertilizer, after 2 h for Ref-CH, and after 6 h for PSSG, while for ammonium nitrogen, the adsorption capacity reached its maximum values after 6 h for all three fertilizers tested (see Table 5).

Concomitantly, the leachability of both phosphorus and ammonium continuously increased until their exhaustion for all fertilizers on both soils (Figures 2b-5b).
Physical-Chemical Interactions Inside Polymeric Composites

The main structural transformations produced during the extraction of collagen by acid hydrolysis from leather waste are discussed in reference [30]. The triple helix of collagen is further unfolded in the steps of obtaining the hydrogel, when numerous functional groups are activated to assure the high affinity of the new polymeric composites. The main step in this hydrolysis process is functionalization with adequate nutrients to confer soil fertility. While the nitrogen content is assured by the amine and amide groups of the collagen hydrolysate, potassium and phosphorus are acquired by the final hydrolysis with K 2 HPO 4 , when potassium phosphate anions are attached to the polymeric chain by means of carbonyl groups.

One may expect the new polymer composites to have proper fertilization qualities, as the phosphorus and potassium nutrients will be easily available to be transferred into the soil and then towards plants, since they are ionic species obtained by encapsulating dipotassium phosphate salt into the collagen matrix. However, the nitrogen is available in covalently bound protein compounds and should be more difficult to biodegrade into exchangeable ammonium species, the best source of nitrogen for plants. For this reason, an indirect method was proposed for testing the availability of ammonium nitrogen for plants.

The structure of the composite fertilizers is determined by the active groups of the gelatin hydrolysate, such as amine, carbonyl, carboxyl, and hydroxyl groups. The possible reactions and structures of the new composite fertilizers functionalized with poly-acrylamide and poly(sodium 4-styrenesulfonate-co-glycidyl methacrylate) (P(SSNa-co-GMA), PSSG) and with the two natural compounds, starch and dolomite, are presented in Table 6. One may observe that in all cases the functionalizing agent is interspersed between collagen hydrolysate chains, but the interaction mechanisms are different, resulting in linkages of various strengths. Thus, in the case of the synthetic polymers poly-acrylamide and PSSG, the dominant interactions can be assessed as esterification and amidation reactions, leading to rather strong covalent bonds between the additive and the collagenous matrix. While poly-acrylamide has many reactive groups able to link the collagenous chains in a double-stranded coil manner by means of carboxyl, hydroxyl, and di-imine groups, in the case of PSSG only marginal carboxylic groups are available for esterification, thus keeping the collagen chains at a greater distance. However, in both cases the polymeric additive seems to be encapsulated inside the collagen matrix, while the potassium phosphate groups remain outside the capsules, thus being available for ionic exchange during the fertilization process.

Completely different types of interactions are established during the encapsulation of starch and dolomite into the hydrogel matrix. The highly saturated structure of the natural polymer starch, dominated by α1-4 glycoside linkages along the linear chains of amylose and α1-6 glycosidic linkages at the branch points of amylopectin, confers a lower reactivity with the collagen groups. In this way, a higher compatibility between the two biopolymers is assured by means of numerous hydrogen bonds established by the glycoside groups of starch with the hydroxyl and amino groups of collagen, while only a few carboxyl groups can be involved in esterification reactions. Such rather low- or medium-strength bonds give this biopolymer composite enough stability when it is applied on the soil and, at the same time, assure an easy release of the NPK nutrients together with an additional biodegradable organic carbon content.

When dolomite is encapsulated within the NPK collagen hydrogel, the composite consistency is mainly assured by means of hydrogen bonds between the carbonate oxygen and the hydrogenated groups of the hydrogel, such as carboxyl and hydroxyl; some ionic interactions due to the calcium and magnesium cations should not be neglected either, as they could facilitate the ionic exchange towards soil and plants.
These interaction mechanisms provide the theoretical support for further interpretation of the specific behavior of these new composite fertilizers during uptake and leaching towards various types of soils.

[Table 6 lists, for each additive, the reactive functional groups involved in the interactions with the collagen hydrolysate: poly-acrylamide, synthetic polymer [31]: -NH2 (amino groups) and >C=O (carbonyl groups); poly(sodium 4-styrenesulfonate-co-glycidyl methacrylate) (P(SSNa-co-GMA)), synthetic polymer [21]: >C=O (carbonyl groups); starch, natural polymer [32]: -OH (hydroxyl groups) and -CH2-O-CH2- (glycoside linkages); dolomite, natural ore [33]: CO3 2- (carbonate groups).]
Behavior of Fertilizer Nutrients during Uptake and Leaching on Sand and Soils
Improving soils' fertility by adding various amendments containing organic components is mainly analyzed by means of mineralization of carbon and nitrogen [34] and their controlled release [35].
In this context, a comprehensive study was performed on these new agro-hydrogels, addressing their biodegradability under aerobic conditions in water and in a composting environment, complemented by leaching studies of ammonium, nitrate, and phosphate ions on columns filled with gravel and sand mixed with each of the poor soils tested.
The big difference noticed between the leaching rates of the ammonium and phosphorus nutrients can be explained by the special structure of these composite fertilizers: phosphorus and potassium are incorporated as ionic species into the collagen hydrolysate matrix as a result of encapsulation with dipotassium phosphate, while nitrogen is available in the covalent form of protein compounds and needs several degradation steps to be transformed from organic into inorganic species such as nitrate and ammonium.
A comparative analysis of the adsorption isotherms in Figures 2a, 3a, 4a and 5a provides the values of the maximum adsorption capacity (A max) and of the time to reach equilibrium on the two tested soils for the phosphorus and nitrogen nutrients (Table 5).
In most cases, the average time for reaching adsorption equilibrium was 6-7 h, except for the reference fertilizer (Ref-CH) and the one functionalized with starch (AMI) in the case of phosphorus; because phosphorus is encapsulated in ionic form, it is a more easily exchangeable species than nitrogen, as explained before. At the same time, the leaching period was quite similar in all cases, around 8-9 h, which is correlated with controlled nutrient leaching from the soil to the plants. These time intervals in hours may seem rather short, but it should be taken into account that these laboratory column experiments were accelerated, as a 4:1 sand-to-soil proportion was used. In real agricultural conditions, these time intervals will be much longer, in accordance with the vegetation period of the plants, as was also established by Anghel et al. [35].
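Where the measured equilibrium points are available, tabulated A max values of this kind can be estimated by fitting an isotherm model to the data. The sketch below fits a Langmuir isotherm with scipy; the choice of the Langmuir form, the synthetic data points, and the starting parameters are illustrative assumptions, not the model or the measurements used in this study.

```python
# Illustrative sketch: estimating a maximum adsorption capacity (A_max)
# by fitting a Langmuir isotherm to equilibrium data. The model choice
# and the numbers below are assumptions, not values from this study.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, a_max, k):
    """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
    return a_max * k * c / (1.0 + k * c)

# Hypothetical equilibrium concentrations (mg/L) and adsorbed amounts (mg/g)
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_ads = np.array([1.8, 3.1, 5.6, 7.4, 8.8, 9.5])

(a_max, k), _ = curve_fit(langmuir, c_eq, q_ads, p0=(10.0, 0.05))
print(f"Fitted A_max = {a_max:.2f} mg/g, Langmuir constant K = {k:.4f} L/mg")
```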
Another useful remark arising from the adsorption capacity values in Table 5 concerns the dominant concentrations of ammonium nitrogen in all the cases studied. This is in accordance with the initial nutrient content of the fertilizers presented in Section 3.1, whose atomic ratio of N10P6K10 was calculated using the elemental analysis data from Table 1; these values recommend the collagen-based composites as an important source of protein-origin nitrogen for the amendment of poor soils. These data are in agreement with the results obtained by Lima et al. [36] on the use of leather waste as a nitrogen source for plant growth.
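The conversion from elemental mass fractions to an atomic N:P:K ratio of this kind is a normalization by atomic mass. The sketch below shows the arithmetic; the input mass percentages are hypothetical values chosen only to reproduce a ratio close to N10P6K10, not the data of Table 1.

```python
# Illustrative sketch: converting elemental mass percentages into an
# atomic N:P:K ratio. The input percentages are assumptions, not the
# measured composition from Table 1.
ATOMIC_MASS = {"N": 14.007, "P": 30.974, "K": 39.098}

def atomic_ratio(mass_percent, scale_to=10.0):
    """Normalize mole amounts so the largest element equals `scale_to`."""
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in mass_percent.items()}
    factor = scale_to / max(moles.values())
    return {el: round(n * factor) for el, n in moles.items()}

# Hypothetical wt% values that reproduce roughly N10 P6 K10
sample = {"N": 7.0, "P": 9.3, "K": 19.5}
print(atomic_ratio(sample))  # -> {'N': 10, 'P': 6, 'K': 10}
```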
When comparing the two poor soils analyzed here, one may notice that the clay soil (S2-CL) exhibits a higher adsorption capacity than the loam soil (S1-L) for most of the nutrients released from the tested fertilizers; this higher adsorption capacity in the soil could be correlated with a slower controlled release towards the cultivated plants, thus assuring the efficiency of these composite fertilizers.
Biological Amelioration of Soils' Fertility
Laboratory tests emphasized the beneficial effects on soil amelioration provided by the protein structure of the collagen hydrogel functionalized with starch (AMI) on some biological indicators, as shown in Table 4.
For the soil samples treated with AMI fertilizer suspensions, the number of CFU of cultivable mesophilic organotrophic bacteria significantly increased in both concentration variants, by 61-81%, indicating stimulation of the development of this taxonomic group. Simultaneously, there was an increase in the diversity of species belonging to the genus Bacillus, with the development of species attesting to the improvement in soil humidity and aeration, such as B. subtilis and B. megaterium. Some species belonging to the genus Pseudomonas were also frequently isolated from the zymogenous flora.
The number of CFU of cultivable mesophilic fungi, although showing a decreasing tendency, does not differ significantly as a function of treatment. Both variants contain 7-9 similar species, the most dominant belonging to Aspergillus, accompanied by Cephalosporium and Penicillium. The Cladosporium herbarum strain, which plays a role in aggregating soil particles, was isolated in both fertilization variants with protein biopolymers based on collagen hydrolysate and functionalized with starch and dolomite.
The physiological activities of the microflora, expressed by the amount of CO2 released during soil respiratory processes, were also influenced by the applied treatments at the 5% probability level, showing an increased evolution.
The microbial biomass significantly increased, by 19%, for the variant with 0.1% AMI hydrogel against the reference soil, with this value being mainly supported by the bacterial component of the soil microflora, which assures the intensification of metabolic activities by means of CO2 release. This amendment variant, applied for a sugar beet subculture on this sandy clay soil, assures an important multiplying effect on microbial mass and soil respiration, together with an equilibrated microscopic bacteria/fungi balance.
One may conclude that the use of AMI collagen hydrogel with encapsulated starch for poor soil structure amelioration positively influenced the micro-colonies in soil, creating adequate conditions for maintaining strain diversity and their equilibrium balance and thus enhancing their physiologic activities as a result of improving the biophysical properties of soil.
Conclusions
A systematic study is presented on reusing hide waste resulting from the leather industry by capitalizing on its valuable protein components to obtain efficient collagen-based hydrolysates as composite fertilizers for poor soils' amelioration. A series of polymer composite fertilizers was obtained by the functionalization of collagen hydrolysates with some synthetic and natural compounds after encapsulation with P and K nutrients. Two representative organic and mineral amendments, starch and dolomite, were selected, as both of them are available in high amounts as waste to be recycled.
Possible interaction mechanisms during the encapsulation of nutrients and synthetic or natural polymers/ore are discussed in close connection with their uptake and leaching on the studied soils and with the effects on improving soil fertility.
All the composite fertilizers tested exhibited a similar composition in atomic ratio of N 10 P 6 K 10 , which is significant for agriculture applications.
The study of nutrient leaching on soils S1-L and S2-CL revealed that the soil adsorbs nitrogen and phosphorus as ammonium and phosphate ions, respectively.
The initial number of microorganisms in the three poor soils was of the order of magnitude of 10^6, which indicates low microbial activity, these soils having a low organic content; for a normal soil, an order of magnitude of 10^10 indicates high microbial activity, in accordance with its high fertility.
Finally, it was demonstrated that the use of collagen hydrogel with encapsulated starch for poor soil structure amelioration positively influenced the micro-colonies in soil, creating adequate conditions for maintaining strain diversity and their equilibrium balance and thus enhancing their physiologic activities as a result of improving the biophysical properties of soil. | 2022-08-06T15:19:06.253Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "8002d03773cc4e18d8a2cde1b7503e483258cceb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/14/15/3169/pdf?version=1659521356",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "332297290a340391f03f852930d5898722566c04",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
45233529 | pes2o/s2orc | v3-fos-license | Rigorous analysis of highly tunable cylindrical Transverse Magnetic mode re-entrant cavities
Cylindrical re-entrant cavities are unique three-dimensional structures that resonate with their electric and magnetic fields in separate parts of the cavity. To further understand these devices, we undertake rigorous analysis of the properties of the resonance using in-house developed Finite Element Method (FEM) software capable of dealing with small gap structures of extreme aspect ratio. Comparisons between the FEM method and experiments are consistent and we illustrate where predictions using established lumped element models work well and where they are limited. With the aid of the modeling we design a highly tunable cavity that can be tuned from 2 GHz to 22 GHz just by inserting a post into a fixed dimensioned cylindrical cavity. We show this is possible as the mode structure transforms from a re-entrant mode during the tuning process to a standard cylindrical Transverse Magnetic (TM) mode.
I. INTRODUCTION
Microwave re-entrant cavities have been studied for over 50 years for a variety of applications. The development of accurate structure modeling based on lumped element analysis [1][2][3][4][5][6][7] has made them very valuable for creating small-volume, high-Q resonators for filtering applications from room to cryogenic temperatures, in comparison with large Whispering Gallery and Bragg mode resonant cavities [6][7][8][9][10][11]. Re-entrant cavities have also been used for producing displacement sensors for gravitational bar detectors [12][13][14][15][16][17], oscillators using Gunn diodes [18], electron beam tubes [18], and millimetre-wave resonators [20][21]. Furthermore, the re-entrant cavity has a high frequency tuning capability, which makes it very attractive for telecommunication systems [22][23][24][25] and for characterizing dielectric materials as a function of frequency [26][27]. The electromagnetic field pattern of the re-entrant mode presents a very high capacitance between the internal post and the lid of the cavity. This high electric field confinement within the gap has been of some interest in plasma physics for developing high-power electron beam microwave tubes; mostly using rectangular re-entrant cavities, it has been possible to generate output powers of a few kW. For electron beam device applications, the design of klystron cavities (often at lower frequencies) uses standard lumped circuit models adjusted to include plasma-wave effects [28][29][30][31][32]. Cylindrical re-entrant cavities have also been used for studying absorption processes in small liquid and solid materials [33][34], and also the E-field breakdown of gases [35].
The microwave re-entrant cavity's high frequency selectivity to gap size fluctuations also makes it a unique device, capable of being developed for mechanical transducer applications [15][16][17][36] and for investigating the dynamical Casimir effect [37]. The re-entrant cavity could also prove to be a useful tool for cavity-based searches for axions [38] and axion-like particles, where the ability to tune over a large frequency range while maintaining low electrical losses and high E-field intensity would enable sensitive experiments covering a wide region of parameter space.
Nevertheless, throughout all this research there has been a lack of rigorous analysis of the mode structure as the position of the post inside the cavity is varied widely, into the regime where the approximations no longer hold. For example, lumped element models are only valid under certain approximations [39][40][41][42][43]. In this paper we undertake rigorous analysis using the Finite Element Method (FEM) and compare with the lumped element model and experimental results. First we compare results in the region where the approximation is valid, and then we construct a highly tunable cavity, which goes beyond the simple lumped model and can be tuned from 2 to 22 GHz. The lower frequency of 2 GHz corresponds to a 3 μm gap size achieved with a macroscopic tuning apparatus. It is shown that the mode structure transforms from a re-entrant mode during the tuning process to a standard cylindrical Transverse Magnetic (TM) mode by the time the post is finally removed. We also explain with the aid of field plots why the LCR model fails at very large gap sizes. General design and modeling of re-entrant cavities requires the calculation of the resonance frequency, f, with respect to the gap spacing x, and of the sensitivity df/dx. In Fig. 1, we present the general topology with a conical post [2,3]. For high-sensitivity metrological applications, the figure of merit to optimize is the fractional frequency sensitivity, (1/f)(df/dx), which allows maximum transductance of displacement to electrical energy [25]. The figure of merit has been chosen such that it is independent of the resonance frequency and of the metal cavity losses, and depends only on the frequency sensitivity to the gap size (df/dx). It is then applicable to any cavity size and any metal enclosure for a given gap size.

II. COMPUTATIONAL AND EXPERIMENTAL RESULTS

The geometric factor (GF) is a parameter which relates the resonant mode's coupling to the metallic losses of the material, and is given by the following formula (here Q_0 is the unloaded quality factor and R_s is the surface resistance):

GF = Q_0 × R_s (1)

Note that GF depends only on the mode field pattern, and is in units of Ohms.
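As a quick numerical illustration of equation (1), the sketch below converts a geometric factor into an expected unloaded Q-factor for a copper cavity, using the standard normal-skin-effect surface resistance Rs = sqrt(π f μ0/σ). The GF value and the conductivity are placeholder assumptions, not results from this paper.

```python
# Illustrative sketch: unloaded Q from a geometric factor via Q0 = GF / Rs,
# with Rs from the normal skin effect. GF and sigma below are assumptions.
import math

MU0 = 4e-7 * math.pi       # vacuum permeability (H/m)

def surface_resistance(f_hz, sigma):
    """Skin-effect surface resistance Rs = sqrt(pi * f * mu0 / sigma), Ohms."""
    return math.sqrt(math.pi * f_hz * MU0 / sigma)

def unloaded_q(geometric_factor, f_hz, sigma):
    """Q0 = GF / Rs for a cavity with geometric factor GF (Ohms)."""
    return geometric_factor / surface_resistance(f_hz, sigma)

sigma_cu = 5.8e7           # nominal room-temperature copper conductivity (S/m)
gf = 300.0                 # hypothetical geometric factor (Ohms)
print(f"Q0 ~ {unloaded_q(gf, 10e9, sigma_cu):.0f} at 10 GHz")
```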
Using finite element electromagnetic analysis for various topologies, we have calculated the sensitivity for gap sizes from of order a nanometer to a millimeter, which are shown in fig. 3. The calculations have been conducted using "in-house" software developed at the XLIM institute in France, dedicated to microwave resonator design. The fringing fields near the gap are difficult to model accurately using numerical methods. The extreme aspect ratio of the gap spacing to the cavity height is very challenging for mesh creation, as the ratio can be greater than 100. It is usually necessary to use very large numbers of mesh cells to obtain good accuracy, and the computational time can then become very large. The XLIM software enables us to perform the simulations where typical commercial simulation software fails when the ratio of the gap to the cavity height becomes too large. It is then necessary to mesh differently depending on this dimensional ratio with respect to the resonance frequency. We used a non-equidistant, weighted-point triangular mesh where the field is maximum to enhance the precision of the calculations; the mesh is then optimized with the freeware gmsh. The finer the mesh, from nanometer to micrometer gaps, the greater the required calculation power, which is made available by the CALI (Calcul en Limousin) server. To give consistent results at extra small gaps, we implemented mesh cells of one thousandth of a wavelength to discretize the gap. This requires a few tens of GB of memory, even though all symmetries of the structure are used, and obviously needs a lot of time for computing the solutions with very large matrices. An example of such a mesh is shown in fig. 2.

A. Lumped model

Figure 4 shows the equivalent model of the re-entrant cavity, following the conventional process for any resonant structure in the microwave regime. A resonant cavity can be represented with an equivalent LCR circuit, where LC sets the frequency and R gives information on the losses. In the case of a re-entrant cavity, there is an additional capacitance which models the field discontinuity, i.e. the gap formed between the post and the top lid. The formula approximating the resonance frequency of the fundamental re-entrant cavity mode [3] is given by equation (2). Here μ0 and ε0 are the permeability and the permittivity of free space, respectively, σ is the conductivity of the metal conductor, and δ represents the skin depth. The corresponding GF using the lumped circuit analysis [19] is given by equation (3). Thus, the Q-factor may be calculated from the surface resistance of the metal by combining with equation (1).
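Since the full closed form of equation (2) is not reproduced here, the following sketch illustrates the basic lumped behavior with the simplest textbook approximations: the gap treated as a parallel-plate capacitor and the post region as a short coaxial inductor, combined through f0 = 1/(2π√(LC)). Both closed forms and all dimensions are simplifying assumptions for illustration, not the paper's equation (2).

```python
# Illustrative lumped-model sketch of a re-entrant cavity resonance.
# Parallel-plate gap capacitance and coaxial post inductance are textbook
# approximations (fringing fields neglected); dimensions are assumptions.
import math

EPS0 = 8.854e-12       # vacuum permittivity (F/m)
MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)

def resonance_freq(post_radius, cavity_radius, cavity_height, gap):
    """f0 = 1 / (2*pi*sqrt(L*C)) for the idealized re-entrant LC model."""
    c_gap = EPS0 * math.pi * post_radius**2 / gap             # parallel plate
    l_post = (MU0 * cavity_height / (2.0 * math.pi)
              * math.log(cavity_radius / post_radius))        # coaxial section
    return 1.0 / (2.0 * math.pi * math.sqrt(l_post * c_gap))

# Hypothetical geometry: 1 mm post radius, 5 mm cavity radius, 2 mm height
for gap_um in (3, 30, 300):
    f0 = resonance_freq(1e-3, 5e-3, 2e-3, gap_um * 1e-6)
    print(f"gap = {gap_um:4d} um -> f0 ~ {f0 / 1e9:.2f} GHz")
```

With these assumed dimensions, the model already reproduces the qualitative trend exploited later in the paper: shrinking the gap by two orders of magnitude lowers the resonance frequency by roughly one order of magnitude.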
B. Comparison between modeling and experiments
Measurements have been undertaken and compared with the finite element analysis and the lumped element model [3][4]. Various copper foil gaskets (as shown in figure 5), a few micrometers thick (with 15% uncertainty), were used to vary the gap spacing by placing the gaskets between the lid of the cavity and the cylindrical wall. The measurements were conducted using a vector network analyzer with a magnetic probe inserted through the side of the cavity in a one-port configuration to couple to the azimuthal magnetic field component. The resonant frequency and unloaded Q-factor were derived from the measured reflection coefficient S11 [44][45]. Due consideration was given to frequency pulling and external losses induced by the loop probe, by measuring the resonant frequency and the Q-factor of the cavity as a function of the probe position. In this way we made sure measurements were taken in the under-coupled regime, where the influence of the probe is minimized and we measure the intrinsic properties of the resonator. This thorough procedure ensures that the coupling of the field to the probe is negligible. The error bars in the measured frequency data are obtained from the thickness tolerance (±15%) of the copper foil. The triangle data points in figure 6 are measured by calibrating the gap spacing from the frequency measurements of the circled data points. Measured and computed frequency results for the re-entrant cavity at 296.5 K are given in Table I. Resonant frequency and Q-factor computations were performed for gap sizes between 1 nm and 1 mm; measurements were conducted between 10 and 100 μm. The differences between the frequencies computed from the finite element analysis and those measured in experiment do not exceed 0.1%. Resonant frequencies measured from the network analyser are precise to 0.001%, and Q-factor values carry approximately 10% error. Numerical accuracy is not a limiting factor in such a configuration.
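A common way of deriving the resonant frequency and Q-factor from the measured reflection coefficient, as described above, is to fit a Lorentzian dip to |S11|². The sketch below shows such a fit on synthetic data; the simple symmetric dip model and all numbers are assumptions standing in for a real measured trace, not the authors' exact extraction procedure.

```python
# Illustrative sketch: extracting f0 and loaded Q from a reflection dip by
# fitting a Lorentzian to |S11|^2. Data and model form are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def s11_mag_sq(f, f0, q_loaded, depth):
    """Lorentzian reflection dip in |S11|^2 around resonance f0."""
    x = 2.0 * q_loaded * (f - f0) / f0
    return 1.0 - depth / (1.0 + x**2)

# Synthetic trace (GHz) around a 10 GHz resonance with QL = 5000
freqs = np.linspace(9.99, 10.01, 801)
rng = np.random.default_rng(0)
trace = s11_mag_sq(freqs, 10.0, 5000.0, 0.4) + rng.normal(0, 0.002, freqs.size)

(f0, ql, depth), _ = curve_fit(s11_mag_sq, freqs, trace, p0=(10.0, 3000.0, 0.3))
print(f"f0 = {f0:.6f} GHz, loaded Q = {ql:.0f}")
# In the strongly under-coupled limit, the unloaded Q approaches the loaded Q.
```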
The frequency and Q-factor plots show very good consistency between the lumped element modeling, the finite element analysis, and the experiments. In this way we have confirmed the accuracy of our modeling. The difference in Q-factor values between simulations and measurement at very small gaps is assumed to come directly from copper oxidation: the oxide is a very lossy dielectric material, which could lower the Q value.
III. DESIGN OF A TUNEABLE RE-ENTRANT CAVITY
A highly tunable microwave cavity was designed by implementing a fine threaded screw mechanism, which enabled a rotation-free translation of the metal pin, as shown in figure 7. During translation, a tight fit kept reasonable electrical contact between the pin and the cavity. In this way we were able to reproduce the previous measurements and to vary the gap size from as low as 3 μm to as high as 1.4 mm (empty cavity configuration), as shown in figure 8. The Hφ field component was excited with a loop probe, positioned to stay in the under-coupled regime (no field disturbance from the probe) for measuring the unloaded Q-factor at each gap size. The 3 μm gap size limitation comes from the pin and the top lid not being parallel with the metal surface, due to roughness from lathing and milling. A comparison between the equivalent lumped circuit model (LCR), the Finite Element Method (FEM), and the measurements has been conducted and is illustrated in figure 8. It shows good agreement between FEM and the measurements, but highlights the limitation and inconsistency of the LCR model for gaps greater than 20 μm. This can be explained by the fact that the LCR equivalent circuit does not take into account the mode change from a re-entrant to a pure Transverse Magnetic (TM) mode. As the gap increases (figure 9), electric field fringing becomes significant and the magnetic field starts to be drawn away from the post. The lumped element circuit model and the rigorous analysis start to diverge at this point, because the capacitance and inductance can no longer be approximated as due only to the gap and the post. However, for smaller gaps, below 100 μm, the model is as accurate as the finite element method, and much easier to implement.
We also characterize the tunable re-entrant cavity in terms of loss, comparing simulation and measurements. All Q-factors are given for a very low coupling coefficient, for which the values are highest due to the minimization of probe losses. A comparison is made before and after the surface oxidation of the copper is removed (surface cleaning). These results are plotted in figure 10.
Figure 10: (a) Q-factor measurements and (b) surface resistance, compared with the simulation data.
From the Q-factor measurement of the empty cavity mode (TM010) at 22 GHz (where the central post is removed), the cavity loss is only due to the copper electrical conductivity. Thus we can retrieve an electrical conductivity value of about 1.39 × 10^7 S/m for this cavity. We simulate the Q-factor and the surface resistance (Rs) versus frequency, corresponding to the blue curves in figure 10, which show the usual square-root frequency dependence of the losses. In figure 10, it is also evident that for smaller gaps, i.e. frequencies below 8 GHz, an extra degradation of the Q-factor exists. This is mainly due to the lossy dielectric effect of copper oxidation, which is clearly shown to improve after cleaning. From 10 to 15 GHz, the drop in Q values is assumed to be mainly related to field leakage between the adjustable central post and the guiding hole in the cavity base (see figure 7), as has already been reported in similar work, where a specially designed mechanism was implemented to maintain electrical connection during tuning [46]. To further improve these results over the tuning range, this type of mechanism could be implemented.
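The conductivity retrieval described above follows directly from equation (1): with the geometric factor of the TM010 mode known from simulation, the measured Q0 gives Rs = GF/Q0, and the normal skin effect then gives σ = π f μ0/Rs². The sketch below shows this arithmetic; the GF and Q0 values are placeholders chosen for illustration, not the measured ones.

```python
# Illustrative sketch: retrieving metal conductivity from a measured Q0 of
# the empty-cavity TM010 mode. GF and Q0 below are assumed placeholders.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def conductivity_from_q(q0, geometric_factor, f_hz):
    """sigma = pi*f*mu0 / Rs^2, with Rs = GF / Q0 (normal skin effect)."""
    rs = geometric_factor / q0       # surface resistance, Ohms
    return math.pi * f_hz * MU0 / rs**2

# Hypothetical values for a TM010 mode at 22 GHz
sigma = conductivity_from_q(q0=8000.0, geometric_factor=630.0, f_hz=22e9)
print(f"sigma ~ {sigma:.2e} S/m")
```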
CONCLUSION
In this work we have shown the validity of the LCR model and its limitations. Rigorous "in-house" finite element analysis has been implemented, allowing accurate analysis at large gap sizes where the LCR model breaks down. Analysis was undertaken to understand the mode structure transformation from the re-entrant mode to the Transverse Magnetic mode during the removal of the central post. The limitations of the LCR model were then determined and explained by the changing structure of the field. In practice we achieved gap sizes as low as 3 μm, and a solution for reducing machining roughness was discussed. For gap spacings below 20 μm, equivalent to 1.4% of the total central post height, it was demonstrated that the LCR equivalent model could be used directly for both resonant frequency and Q-factor modeling; at larger gap spacings, a rigorous technique should be used for accuracy. Based on the rigorous analysis, a highly tunable cavity was made, which allowed tuning from 2 to 22 GHz. During the tuning process it was shown that the re-entrant mode transformed into the standard TM0,1,0 cavity mode, which was beyond the capabilities of the LCR model.
"year": 2013,
"sha1": "db55bc5e3bdb56fe5ec5d3fc0111a014f68d83b3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1308.2755",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "06bc85aef664eb4129eed4529fc9aedc1cc854d9",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
266926176 | pes2o/s2orc | v3-fos-license | Imposition of Criminal Sanctions on Corporations and/or Corporate Control Personnel Who Commit Money Laundering Crimes
The purpose of this research is to describe the law enforcement arrangements applicable to corporations and/or corporate control personnel for money laundering crimes. The author uses a normative juridical approach, using primary and secondary data; data analysis uses qualitative analysis. In Indonesia, legal regulations on the prevention and eradication of money laundering crimes were initially set out in Law Number 15 of 2002 concerning the Crime of Money Laundering (UUTPPU), which was later revised by Law Number 25 of 2003 and subsequently revoked and replaced by Law Number 8 of 2010 concerning the Prevention and Eradication of the Crime of Money Laundering. The results show that perpetrators of money laundering crimes are subject to sanctions based on Articles 6, 7, 8, 9, and 10 of Law Number 8 of 2010. In addition, attempted money laundering in Indonesia is anticipated by postponing transactions involving assets suspected to originate from criminal acts, blocking assets known to originate from criminal acts, and temporarily suspending transactions related to money laundering crimes.
Introduction
Recently, the Indonesian public has been angered by findings from the BNN regarding a money laundering crime worth 15 billion Rupiah, originating from narcotics crimes and carried out by former narcotics convicts (Mahardhika, 2023). There are also other assets that were not reported to the state, as well as deposit boxes found in other people's names, allegedly to avoid government suspicion, with truly fantastic values, as in the case of Rafael Alun, and other money laundering crimes such as those committed by the former Head of the National Land Agency (BPN), who often flaunted a luxurious lifestyle in the city of Makassar (Lestari, 2023).
There are various formulations of the meaning of money laundering or the crime of money laundering; the formulation involves a process of laundering money obtained from crime through a financial institution (bank) or financial service provider so that the illicit money acquires the appearance of legitimate or halal money (Eleanora, 2011). The crime of money laundering is an organized crime, which requires special efforts to overcome, both at the national and international levels [3]. The consequences of the criminal practice of money laundering will damage the country's economic system and even have a negative impact on the country. For this reason, efforts to prevent and eradicate the crime of money laundering require a strong legal basis to guarantee legal certainty, especially during and even after the COVID-19 pandemic. Efforts by the government to restore the national economy through serious planning may be hampered by perpetrators of money laundering crimes (Nelwan, 2023). This implementation can be carried out by mutually strengthening and collaborating between the anti-money laundering regimes that have been established (Andrikasmi, 2022). In Indonesia, legal regulations regarding the prevention and eradication of money laundering crimes were initially regulated in Law Number 15 of 2002 concerning the Crime of Money Laundering (UUTPPU), which was later revised into Law Number 25 of 2003 and subsequently revoked and replaced by Law Number 8 of 2010 concerning Prevention and Eradication of the Crime of Money Laundering, which constitutes the anti-money laundering regime in Indonesia. In this law, there is an institution that acts as financial intelligence, namely the Financial Transaction Reports and Analysis Center (PPATK). The duties, functions, and authority of PPATK are contained in Article 39, namely that PPATK has the task of preventing and eradicating criminal acts of money laundering. Then in Article 40 it is explained that, to carry out the tasks referred to in Article 39, PPATK has the following functions: a. prevention and eradication of money laundering crimes; b. management of data and information obtained by PPATK; c. supervision of the reporting party's compliance; and d. analysis or examination of financial transaction reports and information that indicate criminal acts of money laundering and/or other criminal acts as intended in Article 2 paragraph (1) (Fadhli, 2018).
Nowadays, corporations play an increasingly important role in people's lives, especially in the economic sector. Past doubts about placing corporations as subjects of criminal law that could commit criminal acts and at the same time be held accountable in criminal cases have shifted. The doctrine that characterizes the 1886 Dutch Wet van Strafrecht (KUHP), namely "Universitas delinquere non potest" or "Societas delinquere non potest" (legal entities cannot commit criminal acts), has changed in connection with the acceptance of the concept of functional perpetrators (functioneel daderschap) (Tambunan, 2016). According to Röling, "offenders include corporations in functioneel daderschap (functional actors) because corporations in the modern world have an important role in economic life, with many functions, namely as employers, producers, price setters, users of foreign exchange, etc." (Batubara, 2016). Because in practice it is not easy to determine whether or not there is fault in a corporation, a new, or slightly different, view has developed, especially regarding corporate criminal liability: in the responsibility of legal entities in particular, the principle of fault does not apply. So criminal liability that refers to the doctrines of strict liability (absolute/strict liability) and vicarious liability (liability imposed on another person/substitute liability), which in principle are deviations from the principle of fault (mens rea), should be taken into consideration in the application of corporate responsibility in criminal law. However, in England there is no abandonment of the principle of mens rea (fault) in corporate criminal liability, because in England the principle of identification applies; based on this principle, corporations are held accountable in the same way as individuals (Muladi dan Dwidja Priyatno, 2010). The subjects of the crime of money laundering can be seen from the provisions contained in Law Number 8 of 2010 concerning the Prevention and Eradication of the Crime of Money Laundering: individuals and corporations. Individuals as legal subjects of money laundering crimes can be understood by looking at Article 1 Paragraph 9, Article 3, Article 4, Article 5, Article 10, and so on. Article 1 paragraph 9 of Law Number 8 of 2010 emphasizes that "every person" consists of an individual or a corporation. Corporations as subjects of the crime of money laundering are likewise covered by Article 1 paragraph 9 and the following provisions; under Article 1 paragraph 1, a corporation is an organized group of people and/or assets, whether a legal entity or a non-legal entity (Jayaningprang, 2023).
The problem in this paper is "How are criminal sanctions imposed on corporations and/or corporate control personnel who commit money laundering crimes?"
Materials and Methods
The method used in writing this paper is the descriptive-analytical method: data that clearly describe problems directly in the field are used, analysis is carried out, and conclusions are drawn to solve the problem. Data were collected through observation and literature study to obtain solutions to the problems addressed in this paper.
Imposition of Criminal Sanctions on Corporations and/or Corporate Control Personnel Who Commit the Crime of Money Laundering
Corporations have contributed greatly to the development of a country, especially in the economic sector. However, corporations also often have negative impacts through activities such as environmental pollution, tax manipulation, exploitation of workers, fraud, and money laundering crimes. Therefore, this impact has made the law a regulator and protector of society, which must pay attention to and regulate corporate activities (Nasichin & Nofita, 2021). Initially, lawmakers were of the view that only humans could be the subject of criminal acts, so that, at first, a corporation could not be the subject of a criminal act. We can see this in the history of the formulation of Article 59 of the Criminal Code, especially in the way offenses are formulated, which is always preceded by the phrase "whoever". The facts show that we will not find an opportunity to sue corporations before a criminal court. Nevertheless, legislators in formulating offenses are often forced to take into account the fact that humans carry out actions within or through organizations that exist within civil law or outside it, appear as a single unit, and are therefore recognized and treated as legal entities/corporations. Under the Criminal Code, lawmakers will refer to corporate managers or commissioners if faced with such a situation (Nasichin & Nofita, 2021).
The crime of money laundering is included in formal legal acts.The crime of money laundering is a crime that has a distinctive characteristic, this crime is not a single crime but a multiple crime.This crime is characterized by the form of money laundering, which is a crime that is a follow-up crime or continued crime, while the main crime or original crime is called a predicate offense or core crime or some countries formulate it as an unlawful activity, namely an original crime that produces money which is then carried out in the laundering process.(Hanafi Amrani, 2015) In the development of the criminal evidence system, something new was also introduced, namely the system of reversal of the burden of proof (Omkering van het bewijslast).The system of reversing the burden of proof, or what is better known to the public as reverse evidence is a system that places the burden of proof on the suspect.It means that generally when referring to the Criminal Procedure Code, the person who has the right to prove the defendant's guilt is the public prosecutor, but the defendant's reverse proof system (legal advisor) will prove otherwise that the defendant has not been legally and convincingly proven guilty of committing the crime charged.(0 Hiariej, 2012) Law enforcement aims to provide an atmosphere of calm in society, as well as a deterrent effect on other people so that they do not commit criminal acts.However, that does not mean there are no problems in law enforcement.Soerjono Soekanto views that law enforcement cannot be separated from the factors that influence it.These factors can influence the power of law to work effectively in society.
Talking about criminal acts and criminal responsibility is, in principle, an inseparable part of discussing the criminal law system. Mardjono Reksodiputro said that in the development of criminal law in Indonesia there are three systems of corporate responsibility as the subject of criminal acts, namely: corporate managers as makers, with managers responsible; corporations as makers, with managers responsible; and corporations as responsible makers (Mahrus Ali, 2015). The first accountability system is characterized by efforts to limit criminal acts committed by corporations to individuals (natuurlijk persoon). So, if a criminal act occurs within a corporate environment, the criminal act is deemed to have been committed by the management of that corporation. In this first system, the drafters of the Criminal Code still accepted the principle of "Universitas delinquere non potest" (legal entities/corporations cannot be punished). This principle applied in the last century in all Continental European countries, in line with the individualistic criminal law opinions of the classical school prevailing at that time and, later, of the modern school in criminal law. In the Explanatory Memorandum of the Criminal Code, which came into force on 1 September 1886, it can be read that a criminal act can only be committed by an individual (natuurlijk persoon); fictional thinking about the nature of legal entities (rechtspersoon) does not apply in the field of criminal law. In this first system, managers who do not fulfill obligations that are corporate obligations can be declared responsible.
The second system of responsibility is characterized by the recognition, which appears in the formulation of the law, that a criminal act can be committed by a union or business entity (corporation), but that responsibility falls on the management of the legal entity (corporation). Gradually, criminal responsibility shifts from the members of management to those who order the acts or who neglect to truly lead the corporation. In this accountability system, corporations can be the perpetrators of criminal acts, but those responsible are the members of management, as long as this is stated explicitly in the regulations (Mahrus Ali, 2015). The third accountability system marks the beginning of direct corporate responsibility. In this system, the possibility of suing corporations and holding them accountable under criminal law is opened up. The justification and reason for treating corporations as both perpetrators and responsible parties is that in various economic and fiscal offenses, the profits obtained by corporations, or the losses suffered by society, can be so large that it would be impossible to balance them if punishment were imposed only on corporate managers. It is also argued that simply punishing the management gives no guarantee that the corporation will not repeat the offense. By punishing corporations with a type and severity of penalty appropriate to the nature of the corporation, it is hoped that corporations can be compelled to comply with the relevant regulations (Mahrus Ali, 2015). Finding the basis for corporate responsibility is not easy, because corporations as subjects of criminal acts do not have the same mental state as natural humans. However, this problem can be overcome if we accept the concept of functional perpetration (functioneel daderschap) [9]. This means that a person cannot escape responsibility merely because he has delegated responsibility to another person, even if he does not know what his subordinates have done. In other words, a person who has delegated authority to subordinates or proxies to act for and on his behalf must still be responsible for the actions carried out by the recipient of the delegation if the recipient commits a criminal act, even if he does not know what his subordinates have done. Thus, delegation cannot be used as an excuse for an employer to escape criminal responsibility solely because the criminal act was committed by subordinates who had received a delegation of authority. Regarding the issue of intention and negligence in corporations, psychological issues and inner attitudes can be addressed by looking at whether the deviant actions of the management are covered by company policy or fall within the real activities of the particular company.
Criminal sanctions against corporations and/or corporate control personnel who are perpetrators of money laundering crimes are imposed under Articles 6, 7, 8, 9, and 10 of Law Number 8 of 2010 concerning the Prevention and Eradication of the Crime of Money Laundering, with a maximum threat of a prison sentence of 20 years and a fine of IDR 10 billion.
Article 6 (1) If the criminal act of Money Laundering as intended in Article 3, Article 4, or Article 5 is committed by a Corporation, the penalty shall be imposed on the Corporation and/or the Corporation's Control Personnel. (2) A penalty is imposed on a Corporation if the crime of Money Laundering was: a. carried out or ordered by Corporate Control Personnel; b. carried out to fulfill the aims and objectives of the Corporation; c. carried out in accordance with the duties and functions of the perpetrator or the giver of the order; and d. carried out to provide benefits to the Corporation. Article 7 (1) The principal penalty imposed on a Corporation is a maximum fine of IDR 100,000,000,000.00 (one hundred billion rupiah).
(2) In addition to the fine as intended in paragraph (1), additional penalties may also be imposed on Corporations in the form of: a. announcement of the judge's decision; b. freezing of part or all of the Corporation's business activities; c. revocation of the business license; d. dissolution and/or prohibition of the Corporation; e. confiscation of the Corporation's assets for the state; and/or f. takeover of the Corporation by the state. Article 8 If the convict's assets are insufficient to pay the fine as intended in Article 3, Article 4, or Article 5, the fine is replaced by a maximum imprisonment of 1 (one) year and 4 (four) months. Article 9 (1) If the Corporation is unable to pay the criminal fine as intended in Article 7 paragraph (1), the criminal fine is replaced by confiscation of assets belonging to the Corporation or the Corporate Control Personnel whose value is the same as the criminal fine imposed. (2) If the sale of confiscated assets belonging to the Corporation as intended in paragraph (1) is insufficient, imprisonment instead of a fine is imposed on the Corporate Control Personnel, taking into account the fine that has been paid. Article 10 Every person within or outside the territory of the Unitary State of the Republic of Indonesia who participates in carrying out attempts, assistance, or criminal conspiracy to commit the crime of money laundering shall be punished with the same penalty as intended in Article 3, Article 4, and Article 5. In summary, Article 7 regulates a maximum fine of IDR 100,000,000,000.00 (one hundred billion rupiah) and additional penalties in the form of: a. announcement of the judge's decision; b. suspension of part or all of the Corporation's business activities; c. revocation of the business license; d. dissolution and/or prohibition of the Corporation; e. confiscation of corporate assets for the state; and/or f. takeover of the Corporation by the state.
Fair and humane law enforcement can be interpreted as meaning that the law does not move in a vacuum or look only at one side; on the contrary, the law always moves dynamically, following the changes and developments of the times within the concept of criminal law reform, so that legal reform requires policies that suit the conditions or needs of the time. Several efforts or innovations in law enforcement can be expressed in the form of policies that deal with law enforcement for money laundering crimes.
Placing the crime of money laundering as an independent crime or as a follow-up crime is not contradictory; both understandings are correct if each is placed in the right context. The opinion that the crime of money laundering is a follow-up crime is correct if placed in the context of the factual occurrence of the crime, while the opinion that the crime of money laundering is an independent crime is correct if placed in the context of proving the money laundering offense. This conclusion can be built with the following arguments [15]. The perspective of the crime of money laundering as a follow-up crime captures its position from the point of view of the factual occurrence of the offense; from this point of view, for a money laundering crime to occur there must be proceeds of crime against which actions are taken that cause those proceeds to be hidden or disguised (Direktorat Hukum PPATK, 2015). The main criminal sanction against a corporation that commits a money laundering crime is a fine of IDR 100,000,000,000.00 (one hundred billion rupiah). Additional criminal sanctions include the announcement of the judge's decision, freezing of part or all of the corporation's business activities, revocation of the business license, dissolution and/or prohibition of the corporation, confiscation of corporate assets for the state, and/or takeover of the corporation by the state.
Article 9 of Law No. 8 of 2010 concerning the Prevention and Eradication of the Crime of Money Laundering provides a substitute penalty: for a corporation that is unable to pay the fine, the fine is replaced by confiscation of assets belonging to the corporation or the Corporate Control Personnel whose value is the same as the fine imposed. In addition, if the sale of the confiscated corporate assets is insufficient, imprisonment instead of a fine is imposed on the Corporate Control Personnel, taking into account the fines that have been paid.
Conclusion
Money laundering is a method or process of changing money originating from illegal (haram) sources into money that appears to be halal. In Indonesia, the crime of money laundering is regulated in Law No. 8 of 2010 concerning the Prevention and Eradication of Money Laundering. Corporations are subjects of money laundering crimes, as regulated in Article 6 paragraph (1) of that law. A corporation that commits a money laundering crime can be held criminally responsible if the elements of punishment have been fulfilled, namely the corporation's capacity to bear responsibility, the existence of fault, and the absence of any ground excluding the corporation from criminal liability.
Note: in Law Number 25 of 2003 concerning the Crime of Money Laundering, only a few articles were changed, and the provisions regulating corporations still followed Law Number 15 of 2002; in Law Number 8 of 2010 concerning the Prevention and Eradication of the Crime of Money Laundering, corporations are regulated in Articles 6, 7, 8, and 9. | 2024-01-11T16:18:22.111Z | 2024-01-08T00:00:00.000 | {
"year": 2024,
"sha1": "24cadae3c7e13eff6d5aabfdde3ba3b189607e12",
"oa_license": "CCBYSA",
"oa_url": "https://jiss.publikasiindonesia.id/index.php/jiss/article/download/938/1673",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d135a30cda2f5e197ed563cca8122a541e120c72",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
49427410 | pes2o/s2orc | v3-fos-license | Vitamin D deficiency and risk of cardiovascular diseases: a narrative review
Vitamin D, a fat-soluble prohormone, has wide-ranging roles in the regulation of many physiological processes through its interactions with the vitamin D receptor (VDR). It plays a major role in bone and calcium metabolism. Vitamin D deficiency is not uncommon, and it has been associated with many health-related issues, including skeletal and non-skeletal complications. The association of low vitamin D with cardiovascular diseases and risk factors has been explored in both animal and human studies. However, studies and trials on the effect of vitamin D supplementation on cardiovascular risk factors and hypertension are conflicting, with inconsistent results. Therefore, large, well-powered randomized controlled trials are warranted. If successful, supplementation with easy and low-cost vitamin D could impact our health positively. Here, we summarize the evidence for the association of vitamin D with cardiovascular diseases and risk factors, including coronary artery diseases, stroke, hypertension, and mortality, with special consideration of resistant hypertension.
Background
Vitamin D is metabolized by hepatic 25-hydroxylase then renal 1a-hydroxylase into its active form, calcitriol, which exerts its function on the vitamin D receptor (VDR) in nearly 30 different tissues [1]. Most of the nutritional requirements of vitamin D are derived from cutaneous solar ultraviolet radiation (80-100%) [2] and to a lesser extent from foods naturally containing or fortified with vitamin D [3]. The best measurement for vitamin D status is its metabolite 25-hydroxyvitamin D (25[OH]D) level [1,4].
In this narrative review, we aimed to summarize the evidence for the association of vitamin D deficiency with cardiovascular diseases and risk factors, including coronary artery diseases, stroke, and hypertension.
Epidemiology
Vitamin D deficiency is widespread; the lowest vitamin D levels are commonly found in regions such as the Middle East and South Asia, and the main risk factors are attributed to elderly women, higher latitude, the winter season, less sunlight exposure, skin pigmentation, dietary intake, and few vitamin D-fortified foods [27]. It has been estimated that the prevalence of vitamin D deficiency is approximately 30-50% of the general population [28]. Furthermore, vitamin D deficiency is still common in sunny countries [29]. In a large Middle Eastern study of 60,979 patients from 136 countries with year-long sunlight, 82.5% of the studied patients were found to have vitamin D insufficiency [30].
There is an epidemic of vitamin D deficiency worldwide, which represents a major factor in many chronic diseases and has led some authors to suggest annual vitamin D measurement coupled with adequate intake and greater awareness of its consequences [4,31]. In the United States, an increasing prevalence of vitamin D deficiency was observed in a sample of 20,289 individuals between 2000 and 2004 compared with a sample of 18,158 individuals between 1988 and 1994, with a 5-9 nmol/l decrease in vitamin D levels [32].
Vitamin D levels were found to be lowest in Blacks, followed by Hispanics and Chinese, and adequate in Whites (Multi-Ethnic Study of Atherosclerosis, MESA) [33]. Another study, by Yetley in 2008, demonstrated that non-Hispanic Blacks and Mexican Americans tend to have lower levels of vitamin D in comparison with non-Hispanic Whites [34]. Yetley also found vitamin D to be significantly lower among obese and non-college-educated individuals, as well as those with poor health status, hypertension, low high-density lipoprotein levels, and low milk consumption. Furthermore, vitamin D levels were found to be alarmingly low in winter and spring in a study of British adults [35].
Vitamin D and cardiovascular diseases
Vitamin D deficiency has been linked to several cardiovascular risk factors [36,37]. Through increased renin and angiotensin II synthesis, vitamin D deficiency can increase the production of reactive oxygen species and G protein RhoA, resulting in inhibition of the pathways necessary for intracellular glucose transporter and thus the development of insulin resistance and metabolic syndrome [25]. In addition, direct effects of vitamin D upon smooth muscle calcification and proliferation could contribute to their effects on cardiovascular health [38]. In the Inter99 study of 6784 individuals, high vitamin D level was associated with a favorable lipid profile and lower incidence of metabolic syndrome [39].
Furthermore, in an analysis of NHANES III 1988-1994, low vitamin D was associated with cardiovascular disease (CVD) [7] and select CVD risk factors, including diabetes mellitus (DM), obesity, and hypertriglyceridemia [24]. In a prospective nested case-control study between 1993 and 1999 of 18,225 US men (Health Professionals Follow-Up Study), low vitamin D was associated with a higher risk of myocardial infarction in comparison with sufficient 25(OH)D after multivariate adjustment [11]. Kim and colleagues have found a high prevalence of hypovitaminosis D in individuals with cardiovascular diseases, namely coronary heart disease and heart failure, after controlling for age, race and gender, using data from NHANES 2001-2004 [8].
An additional prospective study of the Integrated Intermountain Healthcare system database of 41,504 patients showed an association between vitamin D deficiency and an increased prevalence of DM, HTN, hyperlipidemia, and peripheral vascular disease (PVD) (P < 0.0001), as well as with incident death, heart failure, coronary artery disease/myocardial infarction, stroke, and their composite [40]. Also, low serum 25(OH)D was identified as causally associated with increased risk for CVD on the basis of Hill's criteria for causality in a biological system [41]. In a meta-analysis of 19 prospective studies in 65,994 participants, Wang et al. demonstrated a linear and inverse association between circulating vitamin D level and risk of cardiovascular diseases [42].
Vitamin D deficiency and coronary artery disease
The association of vitamin D deficiency with coronary artery disease (CAD) has been investigated in many studies [43][44][45]. In 1978, a Danish study found that low vitamin D levels were significantly associated with angina and myocardial infarction [46]. In a multicenter US cohort study evaluating patients admitted with acute coronary syndrome (ACS), about 95% of patients were found to have low vitamin D levels [47]. In a study conducted by Dziedzic et al., low vitamin D levels were observed in patients with a history of myocardial infarction [48]. In a case-control study (n = 240), Roy et al. reported that severe vitamin D deficiency was associated with an increased risk of acute myocardial infarction after adjusting for risk factors [49]. Similar findings were reported from the Health Professionals Follow-up Study, which included 18,225 participants: at 10-year follow-up, participants with normal vitamin D levels had about half the risk of myocardial infarction [11]. In a large prospective study (n = 10,170), low vitamin D levels were found to be associated with an increased risk of ischemic heart disease, myocardial infarction, and early death during 9 years of follow-up [50]. Additionally, in a meta-analysis of 18 studies, low vitamin D levels were associated with an increased risk of ischemic heart disease and early death [50].
Vitamin D and hypertension
Hypertension is the most common presentation to primary care providers [51] and represents a major chronic disease in developed countries [52]. The prevalence of hypertension in adults is approximately 29% [53], with an estimated 1.6 billion cases of hypertension expected by 2025 [54].
Pathophysiology
It is hypothesized that vitamin D deficiency increases blood pressure through the renin-angiotensin system. Earlier animal studies demonstrated that vitamin D receptor-null (VDR-null) mice have a several-fold increase in renin expression and plasma angiotensin II production, which leads to hypertension, cardiac hypertrophy and increased water intake. In addition, renin suppression was observed in wild-type mice after 1,25(OH)2D3 injection. Therefore, 1,25(OH)2D3 was considered a novel negative endocrine regulator of the renin-angiotensin axis [55]. A later study showed profound heart hypertrophy in vitamin D receptor knockout (VDR-KO) mice, which suggested direct blunting of cardiomyocyte hypertrophy by calcitriol [56]. Through a central antioxidative mechanism, 1,25(OH)2D3 has normalized overactivation of the central renin-angiotensin system in 1-alpha-hydroxylase knockout mice [57]. Furthermore, using mouse models, the elimination of VDR in vascular endothelial cells resulted in a reduction of endothelial nitric oxide synthase expression and impaired endothelial relaxation [58].
In 2011, a study conducted by Argacha et al. revealed that vitamin D-deficient male rats have increased systolic blood pressure, superoxide anion production, angiotensin II, and atrial natriuretic peptide, with observed changes in the expression of 51 cardiac genes important in the regulation of oxidative stress and myocardial hypertrophy [59]. Another study of vitamin D-deficient mice showed increased systolic blood pressure, increased diastolic blood pressure, high plasma renin-angiotensin activity, and reduced urinary sodium excretion, all of which were reversed after 6 weeks of a vitamin D-sufficient chow diet [60]. In the same study, vitamin D-deficient mice on a high-fat diet had increased atherosclerosis in their aortas with increased macrophage infiltration, fat deposition, and endoplasmic reticulum stress activation. These results indicate that vitamin D deficiency is associated with the development of hypertension and accelerated atherosclerosis [60]. In another study on double-transgenic rats, vitamin D depletion was shown to exacerbate hypertension (HTN) and impact the renin-angiotensin system, which can contribute to end-organ damage [61].
For the first time in humans, a prospective cohort study of 3316 patients (1997-2000) in Ludwigshafen, southwest Germany (the Ludwigshafen Risk and Cardiovascular Health, LURIC, Study) showed a steady increase of plasma renin concentration with declining levels of 25(OH)D and 1,25(OH)2D, as well as a similar increase in angiotensin II [62]. Another study showed increased renin-angiotensin system activity in obese hypertensive individuals with low 25(OH)D [63]. Furthermore, another study, which included 375 hypertensive and 146 normotensive individuals, showed that genetic variation at the FokI polymorphism of the vitamin D receptor gene and 25(OH)D levels were associated with plasma renin activity in hypertension, a finding that supports the vitamin D-VDR complex as a renin regulator in humans [64]. Therefore, vitamin D analogs have been suggested as renin inhibitors, similar to ACE inhibitors and ARBs, for patients with hyperreninemia, which could benefit patients with metabolic syndrome and/or hypertension [25]. Other mechanisms that can lead to hypertension in vitamin D-deficient patients are arterial stiffness [65,66], endothelial dysfunction [67], and hyperparathyroidism [68].
Studies regarding vitamin D and hypertension
There is accumulating evidence for the association between vitamin D and blood pressure. An earlier analysis of NHANES III 1988-1994 of 12,644 participants aged > 20 years showed an inverse association between vitamin D level and blood pressure [69]. Similar results were obtained from analysis of NHANES 2003-2006 of 7228 participants [70], the Insulin Resistance Atherosclerosis Family Study (IReSFS) [71], and the Kaiser Permanente Southern California health plan [72].
Forman and colleagues have also demonstrated an inverse association between vitamin D and the risk of incident hypertension in two prospective cohort studies, including 613 (followed for 4-8 years) and 38,388 (followed for 16-18 years) men from the Health Professionals' Follow-Up Study, and 1198 (followed for 4-8 years) and 77,531 (followed for 16-18 years) women from the Nurses' Health Study. Their results, combining the men and women with measured 25(OH)D levels, showed a pooled relative risk of incident hypertension of 3.18 (95% confidence interval [CI] 1.39-7.29) for those with the lowest compared with the highest 25(OH)D levels [73].
Worldwide studies have also demonstrated such an association. In a cross-sectional study of 833 Caucasian males in Uppsala (central Sweden), a threefold higher prevalence of confirmed hypertension was found in participants with 25(OH)D levels < 37.5 nmol/L [74]. Additionally, a cross-sectional analysis of 1460 participants in Shanghai showed a high prevalence of vitamin D deficiency (55.8%) in middle-aged and elderly Chinese men [75]. In adolescents (aged 13-15), a study of 1441 Peruvians showed an inverse association between vitamin D levels and blood pressure, which may predispose to a risk of HTN later in adulthood [76].
Vitamin D and aging-related cardiovascular disease and hypertension
Older adults are at increased risk for vitamin D deficiency, largely due to reduced vitamin D intake and decreased cutaneous synthesis [77,78]. Beyond skeletal health, accumulated evidence has linked vitamin D deficiency to cardiovascular diseases and hypertension in older patients. Advancing age is associated with increased cardiovascular disease due to vascular endothelial dysfunction, as indicated by decreased peripheral arterial endothelium-dependent dilatation [79]. The mechanisms underpinning this association have been attributed mainly to reductions in nitric oxide synthesis and increases in oxidative stress with aging [79]. Furthermore, advancing age is associated with reduced compliance of blood vessel walls and an increased incidence of hypertension [80]. Vitamin D deficiency has been found to modulate vascular endothelial function with aging [79] and, therefore, to increase the incidence of hypertension. In a study conducted by Kestenbaum et al., 2312 older participants (≥ 65 years) without cardiovascular disease at baseline were followed for a median period of 14 years [81]. Their results showed that low 25(OH)D was associated with incident cardiovascular disease and mortality. Furthermore, in a cross-sectional study conducted by Dorjgochoo et al., low 25(OH)D levels were associated with hypertension among older adults [75].
Vitamin D and resistant hypertension
Resistant hypertension is an increasingly common health problem and is considered a strong risk factor for cardiovascular disease [82]. It is defined as any blood pressure above target despite adherence to three antihypertensive agents, including a diuretic, at optimal doses, or the use of at least four antihypertensive agents regardless of the blood pressure level [83,84]. Over the past 2 decades, the prevalence of resistant hypertension has almost doubled, from 5.5% in 1988-1994 to 11.8% in 2005-2008 [82]. Many factors have been implicated in resistant hypertension, such as obesity and excessive adipose tissue as well as hyperaldosteronism [85]. Low vitamin D has been linked to resistant hypertension secondary to increased adiposity and metabolic disturbances, including insulin resistance [86]. Furthermore, vitamin D deficiency was found to be associated with increased aldosterone levels [87].
Several studies have demonstrated the relation between vitamin D and resistant hypertension. In a study of 150 patients, lower vitamin D level was associated with resistant hypertension [88]. Additionally, in a study of patients with resistant hypertension (N = 101) who underwent renal sympathetic denervation (RD), low vitamin D was associated with a decreased systolic blood pressure response to RD [89].
Vitamin D and cerebrovascular accident
Cerebrovascular accident (CVA) is one of the most devastating neurological conditions and can cause physical impairment and even death. Accumulating evidence suggests that vitamin D deficiency is associated with an increased risk of CVA [90]. The underlying mechanisms have been largely attributed to the association of vitamin D with cardiovascular risk factors such as hypertension and DM. In addition, epidemiological studies have suggested that vitamin D deficiency is an independent risk factor for CVA [42]. In a study conducted by Sun et al. (n = 464), low vitamin D levels were associated with an increased risk of developing CVA in comparison with high levels [91]. In the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study, vitamin D deficiency was found to be a risk factor for incident CVA independent of race [92]. Furthermore, vitamin D level was found to be a predictor of both severity at admission and favorable functional outcome in patients with ischemic CVA [90].
Vitamin D and mortality
A prospective cohort study of 3258 patients in southwest Germany (Cardiac Center Ludwigshafen), with a median follow-up of 7.7 years, showed that low vitamin D levels are independently associated with higher all-cause mortality (HR 2.08; 95% CI 1.60-2.70) and cardiovascular mortality (HR 2.22; 95% CI 1.57-3.13) [93]. Additionally, in the Uppsala Longitudinal Study of Adult Men, which included 1194 elderly men, both low and high serum 25(OH)D levels were associated with increased risk of overall and cancer mortality; however, only low levels were associated with cardiovascular mortality [94]. In Finland, a study of 1136 participants from the Kuopio Ischaemic Heart Disease Risk Factor (KIHD) Study showed that vitamin D deficiency was associated with a higher risk of death [95].
The rate of all-cause mortality among 13,331 adults > 19 years from the NHANES III Linked Mortality Files (1988-1994) was independently higher by 26% for individuals with low vitamin D levels (25[OH]D < 17.8 ng/ml) compared to the highest quartile [96]. Additionally, in a sample of 3408 individuals aged > 64 years, low baseline 25(OH)D levels were associated with increased all-cause mortality risk after adjusting for demographics, season, and cardiovascular risk factors (hazard ratio 0.95; 95% CI 0.92-0.98, per 10 nmol/L increase in 25[OH]D) [97]. A similar result was obtained from an NHANES 2001-2004 analysis, with an increase in all-cause and CVD mortality [98,99]. A large meta-analysis of 8 prospective cohort studies across the US and Europe, comprising 26,018 individuals, showed a remarkable consistency of the association between 25(OH)D level and all-cause and cause-specific mortality [100]. Additionally, a meta-analysis of 14 prospective cohort studies (n = 62,548) found a nonlinear decrease in mortality with increasing 25(OH)D levels [101].
Vitamin D supplementation
Multiple studies have evaluated the effect of vitamin D supplementation on cardiovascular disease and mortality. In a randomized clinical trial of 5108 community-resident adults aged 50-84 years, monthly high-dose vitamin D supplementation (100,000 IU) did not prevent cardiovascular disease compared with placebo [102]. Furthermore, the EVITA (Effect of Vitamin D on All-cause Mortality in Heart Failure) randomized trial did not show a reduction of mortality in patients with advanced heart failure who received daily vitamin D (4000 IU) compared with placebo [103]. Also, in a systematic review and meta-analysis of 18 trials and 13 observational studies, there were uncertain associations between vitamin D status and cardiometabolic outcomes [104]. In addition, another meta-analysis by Wang et al. showed a linear inverse association between 25(OH)D and risk of CVD [105]. While Ford et al.'s meta-analysis showed some benefit of vitamin D supplementation on cardiac failure, it did not show benefits on myocardial infarction/stroke [106].
Given the observed association between vitamin D and hypertension, several studies were conducted to see whether vitamin D supplementation would help in treating hypertension. However, these studies yielded different outcomes and recommendations. Some studies have shown beneficial outcomes with vitamin D supplementation in reducing blood pressure in patients with low baseline vitamin D levels [39,[106][107][108][109]. In a study of 112 patients conducted in Denmark, 20 weeks of 3000 IU cholecalciferol daily during winter resulted in a nonsignificant reduction of 3/1 mmHg (P = 0.26/0.18); however, significant reductions of 4/3 mmHg (P = 0.05/0.01) were obtained in patients with low baseline 25(OH)D (< 32 ng/ml) [108]. Another study, targeting females over 69 years of age, showed a benefit of 8 weeks of supplementation with vitamin D3 (800 IU) and calcium (1200 mg) on systolic blood pressure (SBP), with a decrease of 5 mmHg or more in 60 subjects (81%) (P = 0.04) [107]. A randomized controlled trial of 283 African Americans conducted between 2008 and 2010 showed that for each 1 ng/ml increase in 25(OH)D there was a 0.2 mmHg reduction in SBP (P = 0.02) after 3 months of cholecalciferol supplementation (at doses of 1000, 2000, or 4000 IU) [106]. On the other hand, some studies did not show any reduction in blood pressure with vitamin D supplementation [110][111][112]. In a randomized controlled trial of 161 predominantly white individuals, large doses of vitamin D3 (200,000 IU for 2 months, then 100,000 IU monthly) for up to 18 months showed no effect on BP [111]. The DAYLIGHT randomized controlled trial also showed no benefit of vitamin D supplementation on BP [112]. A similar result was found in Austria in the Styrian Vitamin D Hypertension Trial (2011-2014) of 200 participants after 8 weeks of vitamin D3 2800 IU [110].
Multiple meta-analyses have therefore been conducted to study the benefit of replacing vitamin D in hypertensive patients, and these too have had mixed results. Beveridge et al. conducted a meta-analysis of 46 trials and concluded that there was no effect of vitamin D supplementation on blood pressure [113]. A meta-analysis by Wu et al. of 8 randomized controlled trials studying the effect of calcium and vitamin D supplementation on blood pressure showed no meaningful effect on daytime office BP [114]. Furthermore, two systematic reviews and meta-analyses attributed the inconsistency in evidence regarding vitamin D supplementation's effect on blood pressure to heterogeneity in study design [115,116]. However, in a Mendelian randomization study, Vimaleswaran et al. found genetic evidence that increased vitamin D concentrations are causally associated with reduced blood pressure and a reduced risk of hypertension [117].
With regard to the efficacy of vitamin D supplementation in reducing CVA, the available evidence is conflicting [118]. However, in a recent small-scale randomized clinical trial, a single intramuscular (IM) injection of 600,000 IU cholecalciferol was associated with a significant improvement in stroke outcome after three months [119]. We hope that the ongoing Vitamin D and Omega-3 Trial (VITAL) will shed some light on the role of vitamin D supplementation in reducing cardiovascular events, including CVA [120,121].
Conclusions
Vitamin D deficiency is highly prevalent worldwide and is associated with adverse health problems. Current evidence suggests a higher risk of cardiovascular diseases and risk factors with lower vitamin D levels. Furthermore, low vitamin D is associated with hypertension and higher cardiovascular and all-cause mortality. Evidence on the benefit of vitamin D supplementation in ameliorating major adverse cardiovascular events and hypertension is conflicting and subject to many confounding biases. Therefore, larger randomized clinical trials are warranted to explore the benefits of vitamin D supplementation, which could help reduce the impact of these highly prevalent health problems.
Acknowledgments
We would like to thank Katherine Negele, editorial assistant, research department, Hurley Medical Center, for assistance with manuscript editing.
Availability of data and materials All data generated or analysed during this study are included in this published article.
Authors' contributions BK, AA, SA, MO: designing, systematic review, interpretation, and manuscript drafting. MH and GB: critical revision and interpretation, and contributed to manuscript writing. All authors read and approved the final manuscript.
Ethics approval and consent to participate Not applicable
Competing interests MH has received a research grant from Abbott. The remaining authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
"year": 2018,
"sha1": "0d478945a41447c22eb4a5fc5f83eedb25a7f691",
"oa_license": "CCBY",
"oa_url": "https://clinicalhypertension.biomedcentral.com/track/pdf/10.1186/s40885-018-0094-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9b4a447ea3aee38e08f72b00a54ea99d2cb7391",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hemodialysis vascular access maintenance in the Covid-19 pandemic: positioning paper from the Interventional Nephrology Committee of the Brazilian Society of Nephrology
ABSTRACT Vascular accesses for hemodialysis are considered the patient’s lifeline and their maintenance is essential for treatment continuity. Following the example of institutions in other countries affected by the Covid-19 pandemic, the Brazilian Society of Nephrology developed these guidelines for healthcare services, elaborating on the importance of carrying out procedures for the preparation and preservation of vascular accesses. Creating definitive accesses for hemodialysis, grafts and arteriovenous fistulas are non-elective procedures, as well as the transition from the use of non-tunneled catheters to tunneled catheters, which cause less morbidity. In the case of patients with suspected or confirmed coronavirus infection, one may postpone the procedures for the quarantine period, to avoid spreading the disease.
Due to the uncertainty about the duration of the Covid-19 pandemic and the importance of vascular access in maintaining hemodialysis (HD), the Brazilian Society of Nephrology (SBN) prepared this technical note with instructions on how to perform these procedures.
With the overload of health systems and the risk of contamination in the hospital setting, preference should be given to performing procedures in an outpatient facility. Whenever possible, suspected or confirmed cases of Covid-19 should have their procedures postponed for the quarantine period, to prevent the virus from spreading.
Patients with short-term catheters represent the most critical cases from the vascular access point of view; 1 they must therefore be a priority for exchange for a tunneled catheter or creation of a fistula, as these accesses need fewer exchange procedures for patency maintenance.
New HD accesses
Procedures that guarantee vascular access for HD should NOT be considered elective procedures; therefore, they should not be postponed. Delaying the onset of renal replacement therapy (RRT) due to lack of vascular access carries a risk of worsening the patient's clinical condition. This definition includes:
• Exchanges of short-term catheters for tunneled catheters, which should NOT be considered elective due to the morbidity associated with the prolonged use of short-term catheters. 1,2
• Catheter insertions in patients starting HD.
• Creating fistulas in HD patients:
» Creation of arteriovenous fistulas is NOT an elective procedure. Patients who are candidates for arteriovenous fistula creation and who would benefit from early catheter removal should be sent to surgery, preferably in an outpatient facility.
» As for the timing of arteriovenous fistula creation, patients must be assessed individually. For example, in elderly patients already using tunneled catheters without complications, fistula creation may be delayed, given the high Covid-19 mortality in this population.
» In the post-operative period, we suggest reducing the number of consultations. The assessment can be done by the nephrologist at the HD facility, usually at 7 and 30 days, with physical examination or Doppler ultrasound if available. If a consultation with a vascular surgeon is needed, we suggest using electronic consultation if possible, to reduce patient exposure.
HD access dysfunction
In patients already on HD who are at risk of access loss due to stenosis, or in cases of thrombosis, treatment avoids the need for catheter implantation and for additional procedures, 1,3-5 which entail greater patient exposure and healthcare system overload. This definition includes the following, which should NOT be considered elective procedures:
• Exchange of catheters with dysfunction:
» In cases of catheter dysfunction, the administration of thrombolytics, if available at the clinic, is the preferred method of treatment, avoiding surgical exchange procedures and patient exposure. 1,6
• Endovascular intervention in arteriovenous fistulas or grafts with clinical signs of dysfunction (for example, low flow, impossibility of punctures, clot aspiration, etc.), to avoid loss of access and consequent catheter implantation. These procedures should be performed on an outpatient basis if possible. 7
• Procedures for arteriovenous fistula or graft salvage (thrombolysis or thrombectomies).
Other emergencies that are not considered elective
• Infections associated with vascular access requiring a surgical approach are also NOT considered elective. This definition includes:
» Withdrawal or exchange of tunneled catheters due to catheter-related bacteremia.
» Deactivation of arteriovenous fistulas with infection unresponsive to antibiotics.
» Removal of infected arteriovenous grafts.
• Bleeding related to vascular accesses requiring a surgical approach is NOT considered an elective procedure.
The following are considered elective procedures and, therefore, must be postponed:
• Outpatient consultations to monitor vascular access.
• Preoperative mapping for fistula creation.
These guidelines are issued on an urgent basis and in the face of an uncertain evolution of the magnitude of the epidemic in Brazil and in dialysis units; therefore, they may be updated in the coming weeks. These recommendations must be reassessed weekly in each service.
"year": 2020,
"sha1": "4314ebd27e6cc91612c124bc2dd9cabeeb1a2b75",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/jbn/a/gdLxZrQ4rWGfhdZTgqCNXCj/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a967a449f6432bc8fb44cf0cf54c7de903f58911",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cortical Circuits from Scratch: A Metaplastic Architecture for the Emergence of Lognormal Firing Rates and Realistic Topology
Our current understanding of neuroplasticity paints a picture of a complex interconnected system of dependent processes which shape cortical structure so as to produce an efficient information processing system. Indeed, the cooperation of these processes is associated with robust, stable, adaptable networks with characteristic features of activity and synaptic topology. However, combining the actions of these mechanisms in models has proven exceptionally difficult, and to date no model has been able to do so without significant hand-tuning. Until a model exists that can successfully combine these mechanisms to form stable circuits with realistic features, our ability to study neuroplasticity in the context of (more realistic) dynamic networks, and potentially to reap whatever rewards these features and mechanisms confer on biological networks, is hindered. We introduce a model which combines five known plasticity mechanisms that act on the network, as well as a unique metaplastic mechanism which acts on the other plasticity mechanisms, to produce a neural circuit model which is both stable and capable of broadly reproducing many characteristic features of cortical networks. MANA (the metaplastic artificial neural architecture) represents the first model of its kind, in that it is able to self-organize realistic, nonrandom features of cortical networks from a null initial state (no synaptic connectivity or neuronal differentiation) with no hand-tuning of relevant variables. In the same vein as models like the SORN (self-organizing recurrent network), MANA represents further progress toward the reverse engineering of the brain at the network level.
Introduction

Motivation and Goals
What makes brains especially powerful, efficient and capable information processors? What about them so easily enables dynamic, real-time learning and cognition? In his 2007 paper What can AI get from Neuroscience?, Steve Potter compares modern artificial intelligence to a hypothetical group of energy researchers who are aware of an alien power-plant discovered in the jungle which appears to provide virtually limitless energy. In that spirit of reverse engineering, the model presented here was designed to satisfy four criteria:

1. Reproduce as many characteristic features known to exist in biological tissue as possible.

2. Self-organize the reproduced features from (1), resisting "hand tuning" as much as possible.

3. Self-organizing mechanisms from (2) must have an analog in living tissue, except in cases which would lead to violations of (1) or (2). Their relationship to said analog may be phenomenological in nature.

4. The resulting network must be stable insofar as the features from (1) are not transient and reliably conform with (1).
Such constraints minimize the odds that some aspect of the circuit as a network will be missed with respect to possible computational benefits. These criteria also maximize the ability of such a network to operate as a model for scientific purposes, to better understand how the brain functions. MANA, the Metaplastic Artificial Neural Architecture, is the first model which conforms to all these guidelines, and indeed a proof of concept for how to go about constructing a model conforming to (1)-(4) with our current understanding of neuroplasticity. MANA uses 5 known mechanisms of plasticity and one hypothetical metaplastic mechanism, referred to as such since it is responsible for dynamically governing the evolution of the homeostatic set-points of other plasticity mechanisms (i.e. it is a plasticity mechanism of plasticity mechanisms, as opposed to an agent of plastic change acting directly on network properties) and has no direct known analog in living brains. This combination allows a circuit which replicates a wide variety of features to be self-organized from a null initial state (thus conforming to (2)). Specifically, we start MANA with no synaptic connections and uniform target firing rates (TFRs) amongst its neurons, such that the resulting synaptic topology and firing rate distribution are completely the result of plasticity-driven growth and pruning, the metaplastic mechanism, and the synergy of all plasticity mechanisms involved. We focus here only on the attaining of these different features and not the computational aspects of the circuit, as prior to this work no model capable of fully doing what we have outlined existed. Merely creating a model which does is itself a major undertaking, and the entire focus of this paper. Before the computational power of such a circuit can be tested, before certain mechanisms or features can be deemed superfluous, before any further investigation with respect to how the synergy of different mechanisms from (2) produces (1), a model which conforms to all four criteria must first exist. Detailing the first of that class of models is the subject of this paper.
Context and Other Work
Crucial to the development and self-organization of any neural circuit is the differentiation of neurons and synapses into distinct functional roles. Differences in connectivity patterns and cortical cell classes improve information encoding by broadening the available strategies for information processing [18], while at the same time similar motifs in the relationships between these neurons are found across cortical areas and species [17]. The maintenance and control of such distinguishing properties in the face of perturbation is equally important, as a functional role which doesn't meaningfully persist across a consistent range of perturbations (i.e. one which lacks robustness) is effectively useless. Many empirical and computational studies have focused on the nature and mechanisms of this robustness in its many flavors, including intrinsic neuronal excitability [19][20][21][22][23][24] and regulation of synaptic efficacy, both as it directly relates to firing rate homeostasis [23,[25][26][27] and as a means of addressing the inherent instability of additive Hebbian spike-timing dependent plasticity (STDP) [28][29][30]. The difficulty of implementing multiple concurrently active plasticity mechanisms effectively in recurrent neural networks [31,32] has led to a relative dearth of such models, with a few very notable exceptions (in particular, though not exhaustively: [33][34][35][36][37][38]). In particular, the pioneering work on the SORN model demonstrated that the synergy of a mere 3 plasticity mechanisms (4 counting synaptic growth/pruning) can account for a multitude of observed features in cortical microcircuits [33][34][35], and very much paved the way for the work detailed here. Indeed, the core of MANA's mechanisms is inspired by the SORN, due in part to the demonstration of their benefits and stability in previous work [33].
In particular, work on the SORN has demonstrated that a wide array of circuit features and behaviors can be self-organized entirely via approximations to well-known plasticity rules when the distribution of TFRs is hand-tuned to a lognormal distribution [35]. Additionally, the dynamics of excitatory (Exc.) → inhibitory (Inh.), Inh. → Exc., and/or Inh. → Inh. synapses are often fully or partially ignored, depending upon the self-organizing model in question [33][34][35][36]38]. Building on these works, it can be concluded that in order to self-organize from a null initial state we require rules for the dynamics of inhibitory synapses, as well as some mechanism for self-organizing the TFRs of neurons, both of which stand in addition to the pre-established mechanisms underlying the SORN and SORN-like models. In the former case of inhibitory dynamics, there exists a literature on inhibitory STDP (iSTDP) from which we can draw for the model's inhibitory dynamics [39][40][41][42] (for a review see: [43]). However, in the latter case, while there has been work regarding the necessary conditions for lognormal firing rates [44] [45] and putative rules for achieving them [45] [38], there exists no such literature on mechanisms for the evolution of the set-points of firing rate homeostasis specifically. We introduce such a mechanism: a metaplastic rule for the evolution of the set points of homeostatic plasticity; this metaplastic rule constitutes the "M" in MANA.
In spite of the progress in modeling homeostatic mechanisms, very few modelers have focused on the second piece of the self-organization puzzle: differentiation, or how exactly the set points that homeostasis aims to achieve come about. From a purely logical standpoint one can observe that in order for homeostatic mechanisms to exist in the first place there must be a point (or set of points comprising a manifold) in the neuron's state space which the mechanism in question makes robust to perturbations. Such is intrinsic to the notion of homeostasis. Likewise, many models (computational and conceptual) assume such set points [22, 33-35, 38, 46, 47], but to date very few models have studied how such set points are arrived at, the effect of their transient instability, or otherwise included them in a self-organizing model. The formulation of such a rule as presented here is, then, a possible logical next step forward for this class of model.
Materials and Methods
All simulations used Simbrain 3.0 (http://simbrain.net, [48]) as a library for most basic neural network functions, with custom source code written for more esoteric features of the model.
Without initial recurrent connections the model requires some sort of external drive in order to self-organize. To this end, the 24 tokens used in Jeffrey Elman's 1993 paper on grammatical structure and simple recurrent networks [49] were each converted to 100 distinct Poisson spike trains with a duration of 200 ms. These tokens were then arranged according to the rules of the toy grammar from the same paper. The grammar includes a significant number of temporal dependencies spanning up to several words apart (also from [49]).
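As an illustration of this input scheme, the sketch below converts each token into 100 homogeneous Poisson spike trains of 200 ms duration. The 20 Hz token rate, the example words, and all function names are illustrative assumptions, not details taken from the original implementation.

```python
import numpy as np

def poisson_train(rate_hz, duration_ms, rng):
    """Sample spike times (ms) from a homogeneous Poisson process."""
    # Draw a spike count for the window, then place spikes uniformly in time.
    n = rng.poisson(rate_hz * duration_ms / 1000.0)
    return np.sort(rng.uniform(0.0, duration_ms, size=n))

def encode_token(n_channels=100, duration_ms=200.0, rate_hz=20.0, seed=0):
    """Convert one token into 100 distinct Poisson spike trains."""
    rng = np.random.default_rng(seed)
    return [poisson_train(rate_hz, duration_ms, rng) for _ in range(n_channels)]

# One fixed spatiotemporal pattern per token; token sequences are then
# concatenated according to the toy grammar to drive the 100 input neurons.
tokens = {word: encode_token(seed=i) for i, word in enumerate(["boy", "dog", "sees"])}
```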
While much less complicated than a real living cortical circuit, MANA is still considerably more complex than other models in a similar vein as a result of its all-inclusive goals. While this section as a whole gives a detailed account of its mechanisms, Fig. 1 provides a high-level overview that many readers may find convenient.

Figure 1: Map of MANA's plasticity mechanisms. This map can be used as a quick reference and high-level overview of what mechanisms MANA employs, what aspects of the network they act upon and are influenced by, and how direct or indirect the actions of any one mechanism are on another. Acronyms are as follows: MHP: meta-homeostatic plasticity (how TFRs change), HP: homeostatic plasticity (how neurons alter thresholds to maintain TFR), SN: synaptic normalization (how neurons maintain a constant incoming total Exc./Inh. current across all afferent synapses), STDP: spike-timing dependent plasticity (how synapses change strength in response to pre- and post-synaptic spikes), SS: synaptic scaling (how neurons shift and scale their Exc./Inh. balance to help maintain the TFR), and Growth/Pruning: self-explanatory (how synapses are removed from or added to the network). Green arrows indicate where aspects, attributes, or properties of the network are used as parameters or variables for plasticity mechanisms. Purple arrows indicate the action of a plasticity mechanism on a variable, and black arrows indicate direct influence between network attributes/variables. The direct influences (in the form of directly altering or directly being used as an input to a function) are all clearly visible, but perhaps more interesting is that this map can be used to trace all indirect influences. For instance, spikes/spike times are the parameters for STDP, which alters synaptic weights, which in turn affect spiking, which is used to estimate the firing rates, which are compared to TFRs to alter thresholds, and so on.
Neuron and Synapse Models
Simulations consisted of 924 recurrently connected single-compartment leaky integrate-and-fire neurons with firing rate adaptation, which formed the recurrent reservoir. These were driven by 100 input neurons which lacked any dynamics of their own and received no connections from the 924 reservoir neurons. Connections from the inputs to the recurrent reservoir were initialized to a sparsity of 25%, had their weights drawn from N(µ = 3, σ = 1), and had their delays drawn from a uniform distribution over [0.25, 6] ms. All input neurons in the model are excitatory, and thus any weight values less than zero had their sign flipped. Input synapses behaved in exactly the same manner as reservoir synapses and were subject to all of the same plasticity mechanisms, including growth and pruning. Reservoir neurons (hereafter referred to simply as "neurons") were modeled as leaky integrate-and-fire neurons with adaptation and were updated using the following:

C_m V̇ = g_L (V_l − V) + I_syn + I_bg + I_noise − w
τ_w ẇ = −w

where V is the membrane potential, V_l is the leak reversal (-70 mV), g_L is the leak conductance, w is the adaptation current, and dot-notation is used to denote derivatives. Whenever an action potential occurred, spike-frequency adaptation was incremented by b (15 nA and 10 nA for excitatory and inhibitory neurons, respectively) and the membrane potential was set to the reset value (-55 mV):

V ← V_reset, w ← w + b

where ← indicates assignment. Spike-frequency adaptation decayed with a time constant τ_w of 144 ms. Neurons generate an action potential (spike) if their membrane potential exceeds their threshold θ, which is initialized to -50 mV but is dynamic (governed by HP; see Subsection: Homeostatic Plasticity). All neurons had a refractory period, during which the membrane potential was held constant at the reset value (V_reset) and no action potentials could be generated; this was set to 3(2) ms for excitatory(inhibitory) neurons. The membrane capacitance (C_m) was drawn from N(26, 1.5) (N(23, 2.5)) nF for excitatory(inhibitory) neurons. I_syn, I_bg, and I_noise are the synaptic input, background, and noise currents impinging on the cells.
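A minimal forward-Euler step for the adaptive LIF dynamics above might look like the following sketch. The leak conductance g_L, the time step, and the parameter-dictionary layout are assumptions for illustration, not values given in the text.

```python
import numpy as np

def lif_step(V, w, theta, refrac, I_syn, I_bg, I_noise, p, dt=0.5):
    """One Euler step of the adaptive LIF model for a vector of neurons."""
    dV = (p["g_L"] * (p["V_l"] - V) + I_syn + I_bg + I_noise - w) / p["C_m"]
    active = refrac <= 0.0                       # refractory cells are clamped
    V = np.where(active, V + dt * dV, p["V_reset"])
    w = w - dt * w / p["tau_w"]                  # adaptation decays continuously
    spiked = active & (V > theta)                # theta is the HP-governed threshold
    V = np.where(spiked, p["V_reset"], V)        # reset on spike ...
    w = np.where(spiked, w + p["b"], w)          # ... and increment adaptation by b
    refrac = np.where(spiked, p["t_ref"], refrac - dt)
    return V, w, refrac, spiked
```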
Neurons were randomly embedded within a rectangular prism in 3-D space. Distance was not tied to any specific unit and merely existed as a value from which to derive synaptic delays. For all recurrent → recurrent synapses, delays were proportional to the distance between pre- and post-synaptic neurons in the prism, averaging 2.5 ms, but as low as 0.5 ms and as high as 6 ms.
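The distance-to-delay mapping could be realized as below. The prism dimensions and the linear rescaling onto the stated [0.5, 6] ms range are assumptions consistent with, but not taken verbatim from, the text.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform([0, 0, 0], [200, 100, 100], size=(924, 3))  # arbitrary prism

def delays_from_distance(pos, d_min=0.5, d_max=6.0):
    """Map pairwise Euclidean distance linearly onto synaptic delays (ms)."""
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return d_min + (d_max - d_min) * dist / dist.max()

delay_ms = delays_from_distance(pos)  # delay_ms[i, j] for a synapse i -> j
```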
Post-synaptic currents (PSCs) were modeled as an instantaneous jump and decay, with dynamic jumps representing short-term plasticity as modeled by Use, Depression, Facilitation (UDF) [50].
The UDF model is designed to account for the temporary differences in post-synaptic response caused by depletion of neurotransmitter (depression) and influx of calcium between spikes (facilitation) [51]. The synaptic parameters U (use), D (depression time constant), and F (facilitation time constant) were fixed, being drawn from different normal distributions depending on whether the pre- and post-synaptic neurons were excitatory or inhibitory. The mean values for U, D, and F (with D and F expressed in seconds) were: .5, 1.1, .05 (EE); .05, .125, 1.2 (EI); .25, .7, .02 (IE); and .32, .144, .06 (II), with standard deviations set to half the mean, and negative values re-drawn from the distribution until positive. This is consistent with [50], and uses the same parameters found in much of the liquid state literature (when UDF is included) [52]. Following [50], upon each spike arrival at synapse k the use and recovery variables are updated as

u_k ← U + u_k (1 − U) e^(−∆_k / F)
R_k ← 1 + (R_k − u_k R_k − 1) e^(−∆_k / D)

and the post-synaptic response evolves as

τ_psr q̇_psr = −q_psr + w_k u_k R_k δ(t − t_arr)

Here ∆_k is the most recent inter-spike interval (ISI) for neuron k, where the ISI is calculated as the difference between the last spike arrival and the arrival of the current spike (since synapses in the model have delays). The value w_k represents the strength or weight of outgoing synapse k. In the final equation, q_psr is taken to be the total post-synaptic response impinging on synapse k's post-synaptic neuron, and τ_psr is a decay time constant set to 3(6) ms for excitatory(inhibitory) pre-synaptic neurons. Lastly, δ(t − t_arr) is the Dirac delta function of the current time subtracted from the arrival time of the pre-synaptic spike at the post-synaptic terminal. This is not the same as the spike time of the pre-synaptic neuron, due to synaptic delay.
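An event-driven sketch of the UDF update follows, assuming u and R are stored per synapse and updated only on spike arrival; the function names are illustrative.

```python
import numpy as np

def udf_on_spike(u, R, isi_ms, U, D_ms, F_ms):
    """Tsodyks-Markram style update applied at each pre-synaptic spike arrival."""
    u_new = U + u * (1.0 - U) * np.exp(-isi_ms / F_ms)        # facilitation
    R_new = 1.0 + (R - u * R - 1.0) * np.exp(-isi_ms / D_ms)  # recovery from depression
    return u_new, R_new

def psr_decay(q_psr, dt_ms, tau_psr_ms):
    """Exponential decay of the total post-synaptic response between arrivals."""
    return q_psr * np.exp(-dt_ms / tau_psr_ms)

# On each arrival, the delta-function kick in the text becomes:
#   q_psr += w_k * u_k * R_k
```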
Firing-Rate Mechanisms
The cornerstone of MANA is its 2nd-order plasticity mechanism, meta-homeostatic plasticity (MHP), which changes TFRs using a local rule. However, in order to maintain or change a TFR the cell requires some sort of mechanism for determining its average depolarization rate over some time-scale. Average intracellular calcium would seem to fill this role nicely [53] [54], and although [25] points out that its exact role with respect to homeostatic plasticity has not been established, it has been used effectively for maintenance of depolarization dynamics in single-compartment Hodgkin-Huxley model neurons [22]. Here an exponential rise and decay function was used as a proxy for a running average to allow the cell to estimate firing rate:

τ_κ ν̇_κ = −ν_κ + Σ_s δ(t − t_s),   ν̂ = 10³ · ν_κ

where t_s are the neuron's spike arrival times and the time constant τ_κ is inversely tied to the cell's TFR. Notably, the rate of rise and decay is tied to the TFR of the cell. The dependence of the time constant τ_κ on TFR gave parity between high and low activity neurons. In the former case the estimated firing rate (EFR) will increase more with each spike, but more quickly decay, while in the latter case the instantaneous effect of individual spikes is diminished, but they are also less quickly forgotten. This is ideal since by definition a neuron which fires quickly must do so at least fairly regularly, while on the other hand the nature of being a low activity neuron is such that activity is spread over long periods of time. Similarly, this minimizes the impact of how firing rates are estimated on the dynamics of individual neurons. For instance, uniform application of a large τ_κ would bias high firing rate neurons toward bursting more than low firing rate neurons, due to the longer amount of time spikes are "remembered". Here ν_κ refers to the raw firing rate estimate (in kHz), while ν̂ is the final firing rate estimate in Hz ultimately used in the HP and MHP terms.
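A sketch of the TFR-dependent estimator is given below. The exact proportionality constant c between τ_κ and the inverse TFR is an assumption; the paper states only that the two are tied.

```python
import numpy as np

def efr_update(nu_k, spiked, tfr_hz, dt, c=10_000.0):
    """Leaky firing-rate estimate; faster rise and decay for high-TFR cells."""
    tau_k = c / np.maximum(tfr_hz, 0.1)   # assumed inverse tie to TFR (ms)
    nu_k = nu_k - dt * nu_k / tau_k       # exponential decay between spikes
    nu_k = nu_k + spiked / tau_k          # jump on spike (integrated delta)
    return nu_k                           # raw estimate in kHz

# Final estimate in Hz, as used by HP and MHP:  efr_hz = 1000.0 * nu_k
```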
Homeostatic Plasticity
Findings from [20] indicated that neurons (regardless of fosGFP gene expression, associated with higher firing rate cells) significantly altered their membrane thresholds for spike generation. Furthermore, other self-organizing models have also used alterations to firing threshold as their primary firing-rate homeostasis mechanism [33]. In our model, HP acted primarily upon the neuron's firing threshold θ, in the following manner:

λ_hp θ̇ = ln(ν̂ / ν̃)

where ν̃ is the neuron's TFR and λ_hp is the HP constant, which is initialized to 10⁴ and increased to 10⁵ by the end of the simulation, exponentially, with a learning rate of 5 × 10⁻⁶ per ms. This was to allow TFR more direct influence on EFR early in the simulation, since the former had lognormal biasing (see Sec. Meta Homeostatic Plasticity). The ratio term was set to reflect a proportional representation of the difference between estimated and target firing rate. That is, for a neuron with a TFR of 10 Hz the homeostasis equation alters the threshold equally for an EFR of 100 Hz or 1 Hz, moving θ up or down respectively to maintain homeostasis. This allows neurons to fluctuate about their TFRs somewhat without the threshold changing too rapidly in response, in a manner that better reflects their behavior. While eventually HP will silence the cell in reaction to an overabundance of activity, this gives the cell more freedom to become especially active for some (perhaps salient) specific input.
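Assuming the log-ratio form reconstructed above, the threshold update can be stepped as follows; the epsilon guard against a zero EFR is an added assumption.

```python
import numpy as np

def hp_step(theta, efr_hz, tfr_hz, lambda_hp, dt, eps=1e-4):
    """Homeostatic threshold update: a 10x-too-fast EFR and a 10x-too-slow EFR
    move the threshold by equal amounts in opposite directions."""
    return theta + dt * np.log((efr_hz + eps) / tfr_hz) / lambda_hp
```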
Meta Homeostatic Plasticity
The various formulations of firing rate homeostasis imply that for each neuron there exists an individual TFR, or range of TFRs, deviations from which activate homeostatic compensatory mechanisms. To date, a major component missing from extant self-organizing models has been some mechanism whereby the set points of homeostatic plasticity are self-organized. This is due in part to the lack of an empirically observed, mathematically rigorous description of the phenomenon of homeostatic set-point organization, along the same lines as (for instance) synaptic plasticity and STDP. Also problematic is the inherent possibility of extreme instability presented when the set-point of a homeostatic mechanism is allowed to be dynamic. This is further complicated by the constraint that neurons in living neural networks can differ in average firing rate by orders of magnitude, and that the overall distribution of firing rates across populations has been consistently well fit by lognormal distributions in particular [4][5][6][7].
While a well-formulated empirical account of this phenomenon is missing, differences between high and low activity neurons in terms of their relationships to other neurons and gene expression have been documented. The immediate early gene (IEG) c-fos is well correlated with increased activity in vivo, for instance [15], and sustained elevated spiking activity has been shown to drive the expression of c-fos [55][56][57]. Specifically, expression of c-fos always follows increases in spiking activity and appears to be signaled by increases in intracellular calcium following influx through voltage-dependent ion channels [55]. In some cases this expression can occur as a result of neural firing alone [56] [57], while in others it has been demonstrated that elevated activity is insufficient for expression of c-fos, which can only occur if the elevated neuronal firing is a result of increased synaptic activity [55]. While sensory deprivation does not diminish the presence of c-fos expressing neurons, it does diminish the differences in wiring between them and c-fos negative neurons [16].
Meta-homeostatic plasticity (MHP), introduced here, represents a phenomenological account of how neurons self-organize their homeostatic set-points, i.e. their TFRs, which is both stable and produces lognormal distributions of target and empirical firing rates. The rule is loosely based upon the known relationships between elevated neuronal and synaptic activity and the expression of c-fos, where it is hypothesized that c-fos acts in some way as a marker, indicator, or direct instantiation of a TFR variable or the process that governs it. However, it should be reiterated that, since this process is not fully understood, the mechanism here is phenomenological in nature, merely providing a possible means by which TFRs can evolve in a stable way that results in a lognormal distribution over the population. Models including homeostatic plasticity need set-points, and MHP provides a means of allowing those set-points to be self-organized in a manner producing realistic results.
MHP uses the following formulation, which assumes that TFRs evolve based on local firing rates and the firing rates of incoming neighbors. The relationship between a neuron's firing rate and the firing rates of its in-neighbors is such that in-neighbors with lower firing rates exert a positive force while in-neighbors with higher firing rates exert a negative force. This repulsive force decays based on the difference between pre- and post-synaptic firing rates, such that the greatest positive or negative force is exerted by in-neighbors whose firing rates are very close to that of the post-synaptic cell. Alternatively, this can be thought of from the perspective of the in-neighbors, and by this token every neuron in the network can be viewed as pushing the TFRs of its out-neighbors with similar levels of activity away, based on its own firing rate at any given moment. In this way, changes in the sign of the derivative of TFR (or large changes in general) are precipitated by the firing rate of a post-synaptic neuron crossing above or below the firing rate of one of its in-neighbors (thus changing the sign of the contribution of that in-neighbor to MHP). This can be seen in Fig. 2, which displays the EFRs of the in-neighbors of a given neuron superimposed over the EFR of the post-synaptic neuron, as well as how this affects the neuron's TFR and its derivative. Thus, in this formulation, a neuron's TFR evolves as a function of how its empirical firing rate relates to the firing rates of its in-neighbors, with the latter constraining its evolution within the context of its neighbors in the network. This prevents over-synchronization or "clumping" of many neurons around the same firing rate, which is difficult to prevent if local empirical firing rate (and no explicit information about the firing rates of other neurons in the network) fully governs the evolution of TFR.
In more formal terms: for each neuron j ∈ {1, ..., N} (where N is the number of neurons in the network) there exists a set of neurons I_j(t) consisting of the M ∈ [0, N] neurons which send synaptic projections to j at time t. This gives the set I_j(t) := { i = 1, ..., M | w_ij(t) ≠ 0 }. Along the same lines as work done in [28] on STDP, we use the Fokker-Planck formalism to study (and reason on how to influence) the probability density P(J) of a population of TFRs J, which in our case are modified by many continuous interactions with presynaptic neurons (as opposed to discrete plastic updates as with STDP). That is, we regard the TFR of an individual neuron j, ν̃_j ∈ J, at time t to be the sum of modifications caused by interactions with presynaptic neurons:

ν̃_j(t) = Σ_{i ∈ I_j} ∆ν̃(ν̂_i, ν̂_j; θ)

where θ represents some non-specific set of parameters. As in [28], we observe that there exists a family of functions for the drift (A(J)) and diffusion (B(J)) of J for which the distribution produced by the stationary solution of the Fokker-Planck equations is approximately lognormal (for example, A(J) ∝ −J ln(J/J*) together with B(J) ∝ J², for some characteristic value J*, yields a lognormal stationary density). From this, two biasing functions f_+/f_− for STDP were derived in [28]. Considering the similarities between a synaptic weight defined by the sum of multiple updates in STDP and a TFR defined by the sum of multiple updates here, the f_+/f_− terms derived in [28] are used to act on TFR instead of synaptic weight. Equations (5-6) were chosen so as to satisfy the assumptions of the derivation in [28].

Figure 2: Meta-homeostatic plasticity diagram. A) TFR evolves over time based on the relationship between the EFR of a given neuron and the EFRs of its incoming neighbors. Here the EFRs of a neuron's nearby (in firing rate space) in-neighbors are plotted over time, such that those with greater EFRs than the post-synaptic neuron are in red for all times where they are greater, and blue otherwise. Green dots indicate points where the EFR of the post-synaptic neuron (black) crosses from above or below the EFR of one of its in-neighbors. The TFR of the post-synaptic neuron is represented by the black dashed line. More significant changes in direction can be noticed near green crossing points. B) A trace of the derivative of the post-synaptic neuron's TFR over the same time period as (A), with green dots at the same points in time as in (A) indicating EFR crossings. Notice the sharp changes after crossings. C) A simplified diagram of MHP: pre-synaptic in-neighbors with higher EFRs than the post-synaptic neuron exert a downward force on post-synaptic TFR, while in-neighbors with lower EFRs push their post-synaptic neighbors' TFRs up. This repulsive force drops off with distance in EFR-space and is scaled so as to ultimately induce a lognormal distribution across the population.
We further define two mutually exclusive subsets L_j(t), G_j(t) ⊆ I_j(t), where L_j(t) (G_j(t)) is the subset of neurons projecting onto neuron j which have lower (higher) EFRs than j at time t. The specification of time is necessary in our definitions since activity levels (and ergo set membership), TFR, and even membership in I_j(t) (as a result of synaptic pruning) are all dynamic. In order to explicitly prevent "clumping" of neurons around the same preferred activity level, the EFRs of neurons in I_j(t) exert a repulsive effect with respect to ν̂_j which influences ν̃_j. Specifically, neurons in L_j (less active than j) produce a potentiating effect on ν̃_j, while neurons in G_j (those more active than j) depress the TFR, according to the following rule:

ν̃̇_j = (η / |I_j(t)|) [ f_+(ν̂_j) Σ_{i ∈ L_j(t)} e^(−|ν̂_i − ν̂_j|) − f_−(ν̂_j) Σ_{i ∈ G_j(t)} e^(−|ν̂_i − ν̂_j|) ] + ζ

where η is the learning rate, which was initialized to 0.05 but exponentially decayed to 10⁻⁶ with a time constant of 500 s. Note that η was set to 0 when a neuron reached its maximum incoming inhibitory and excitatory currents (see: Synaptic Normalization). The contributions of incoming neighbor neurons to the change in TFR are also normalized by the in-degree of the neuron, i.e. the instantaneous set cardinality of I_j(t), denoted by |·|. ζ here is a noise term drawn from N(0, .7). As [28] has derived f_+/f_− equations which demonstrably represent an approximately lognormal solution to the stationary Fokker-Planck equations, and shown them to be successful in the context of (Log-)STDP, the same terms are reused here:

f_+(ν̂) = e^(−ν̂ / (β ν_0))
f_−(ν̂) = ν̂/ν_0 for ν̂ ≤ ν_0, and f_−(ν̂) = 1 + ln(1 + α(ν̂/ν_0 − 1))/α for ν̂ > ν_0

where α determines the degree of log-like saturation, i.e. if firing rate depression for a neuron with a ν̂ above ν_0 (the "low" firing rate constant, set to 2 Hz) has a large α, the depression relative to ν̂ will be more log-like, and likewise will be closer to linear for small α. β controls the rate at which ν̂ facilitation decreases with increasing ν̂, such that small β entails a rapid fall-off in firing rate potentiation, and large β entails a slower fall-off. In all simulations, α and β were set to 2.5 and 10, respectively. An example of how local EFR, the EFR of incoming neighbors, and the TFR interact can be seen in Fig. 2, where the derivative of TFR can be seen changing in response to the post-synaptic neuron's EFR moving above or below the EFR of one of its pre-synaptic neighbors (resulting in a change in the sign of the contribution of that pre-synaptic neuron's EFR).
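Putting the pieces together, a vectorized MHP step under the rule as reconstructed above might look like the following sketch. The unit exponential repulsion kernel exp(−|∆ν̂|), the per-step Euler integration, and all function names are assumptions for illustration.

```python
import numpy as np

def f_plus(nu, nu0=2.0, beta=10.0):
    return np.exp(-nu / (beta * nu0))              # potentiation fall-off

def f_minus(nu, nu0=2.0, alpha=2.5):
    r = nu / nu0                                   # log-like saturation above nu0
    return np.where(r <= 1.0, r, 1.0 + np.log1p(alpha * (r - 1.0)) / alpha)

def mhp_step(tfr, efr, W, eta, dt, rng):
    """dTFR/dt for every neuron j from the EFRs of its in-neighbors."""
    d_tfr = np.zeros_like(tfr)
    for j in range(len(tfr)):
        pre = np.flatnonzero(W[:, j])              # I_j: in-neighbors of j
        if pre.size == 0:
            continue
        gap = efr[pre] - efr[j]
        kernel = np.exp(-np.abs(gap))              # repulsion decays with EFR distance
        drive = (f_plus(efr[j]) * kernel[gap < 0].sum()     # L_j pushes TFR up
                 - f_minus(efr[j]) * kernel[gap > 0].sum()  # G_j pushes TFR down
                 ) / pre.size                               # in-degree normalization
        d_tfr[j] = eta * drive + rng.normal(0.0, 0.7)       # plus the noise term
    return tfr + dt * d_tfr
```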
Synaptic Plasticity
Long-term synaptic plasticity was driven by spike-timing dependent plasticity (STDP) and synaptic scaling (normalization, SN). These alterations to synaptic strength were the primary driving factor behind the resulting connectivity structure, the other being the pruning protocol (see the subsection on growth and pruning). While on the surface it may appear that the pruning rules bear the bulk of the responsibility for the resulting connectivity, only weak synapses are eligible for pruning, and it is STDP and SN which determine the relative strength of a synapse and thus its eligibility for said pruning.
In keeping with the existence of biological and physical constraints on the maximum efficacy of a synapse, all weight changes (be they through STDP or SN) were put through the following dampening function, which prevented any synaptic weight w from exceeding some maximum weight W_max by reducing any potentiation or depression commensurate with w's proximity to W_max:

∆w ← ∆w · (1 − |w| / W_max)

Here ∆w refers generically to any change to a synapse's weight (discrete or continuous), and thus all future references to weight changes should be considered as having been passed through this function. In our model, W_max was set to +/- 200 nA. This sort of dampening has been observed in real synapses; for instance, [32] found that STDP-induced LTP had little effect on already strong glutamatergic synapses in dissociated hippocampal cultures, and that fluctuations in spine volume between pyramidal cells in cortex were reduced in either direction for more strongly coupled neurons.
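A sketch of this soft-bounding step, using the linear dampening form reconstructed above (itself an assumption about the exact functional form):

```python
import numpy as np

def damp(dw, w, w_max=200.0):
    """Scale any weight change down as |w| approaches W_max (soft bound)."""
    return dw * (1.0 - np.abs(w) / w_max)
```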
STDP
STDP operated on all types of synapses: EE, EI, IE, and II, where EE refers to a synapse from an excitatory neuron to another excitatory neuron, EI refers to a synapse from an excitatory neuron to an inhibitory neuron, and so on. Different STDP windows were used for each case, since STDP observed at synapses involving inhibitory neurons (as either the pre- or post-synaptic cell) can take on a multitude of different forms [58] [59]. STDP used a small learning rate and updated weights continuously rather than in discrete jumps. This diminished the effect of repeated instances of spike-time pairings, but was motivated by the fact that, since the MANA reservoir starts with (effectively) no connectivity, continuous growth seems more logical.
For EE STDP a standard Hebbian window was chosen [32]. A window of the same exponential form, but anti-Hebbian in sign, was chosen for EI STDP, as found in [42] at fast-spiking striatal interneurons. Although an investigation of the effects of all the different known EI STDP windows on synaptic structure and neural activity in the context of a self-organizing network would be compelling, it is out of the scope of this paper. In this work, EE and EI STDP rules took on the familiar additive form as follows:

ẇ = η_stdp W_+ e^(−|∆t| / τ_+)   (potentiation)
ẇ = −η_stdp W_− e^(−|∆t| / τ_−)   (depression)

For Hebbian EE synapses, LTP (LTD) occurs when ∆t < 0 (∆t > 0), and thus W_+/τ_+ (W_−/τ_−) are used. The opposite occurs for the anti-Hebbian EI synapses, which experience LTP (LTD) for ∆t > 0 (∆t < 0). In the above, w refers to synaptic strength, while η_stdp is the learning rate or time-constant of weight changes caused by STDP.
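A single pairing update under this additive window could be sketched as below; the Hebbian/anti-Hebbian distinction reduces to a sign flip, and the function and argument names are illustrative.

```python
import numpy as np

def stdp_exp(dt_ms, W_plus, W_minus, tau_plus, tau_minus, hebbian=True):
    """Additive exponential STDP; under the paper's sign convention,
    dt_ms < 0 corresponds to the causal (pre-before-post) pairing."""
    if dt_ms < 0:
        dw = W_plus * np.exp(-abs(dt_ms) / tau_plus)    # potentiation branch
    else:
        dw = -W_minus * np.exp(-abs(dt_ms) / tau_minus)  # depression branch
    return dw if hebbian else -dw   # EI synapses use the anti-Hebbian flip
```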
For all synapses emanating from inhibitory cells (IE/II), a symmetric Mexican-hat function (the negative 2nd derivative of the normal probability density function) was used for the STDP window, which has been found at inhibitory afferents to CA1 pyramidal neurons [40] and is consistent with findings from auditory cortex [41]. While in both cases this was only observed at IE connections, wanting to self-organize all our synaptic connectivity entailed using some STDP window for II synapses. Due to the dearth of literature on II STDP, the same window used for IE synapses was used as a stand-in. The scaled Mexican-hat function is as follows:

ẇ = η_stdp · a (1 − ∆t²/σ²) e^(−∆t² / (2σ²))

where a is a scaling factor set to 25 for both IE and II synapses, and σ determines the overall width of the window: 22 for IE connections and 18 for II connections. Data on II STDP windows is sparse in the literature, but because the model must grow all its connections (including II connections), some sort of rule for determining their efficacy was required. The choice to use the IE STDP rule found in [40] and [41] appeared to work well, though it is apparent that the topic could use further empirical and computational investigation.
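The symmetric window is a scaled Ricker wavelet and can be transcribed directly; for brevity the learning rate η_stdp is folded into the returned value here.

```python
import numpy as np

def stdp_mexican_hat(dt_ms, a=25.0, sigma=22.0):
    """Symmetric Mexican-hat STDP window (negative 2nd derivative of a Gaussian):
    LTP near |dt| = 0, LTD in the flanks."""
    x2 = (dt_ms / sigma) ** 2
    return a * (1.0 - x2) * np.exp(-x2 / 2.0)

# sigma = 22 for IE synapses and 18 for II synapses, per the text.
```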
Synaptic Scaling
The choice to include dynamic inhibitory synapses in MANA precludes ignoring inhibitory synapses with respect to synaptic normalization. In addition to the question of how the target sums of synaptic normalization ought to interface with a network of neurons with heterogeneous TFRs, we must also consider how synaptic normalization treats inhibitory afferents. Because of this, the ratio of incoming excitation to incoming inhibition becomes another degree of freedom which requires regulation. Synaptic normalization as a mechanism has its roots in the notion of homeostatic synaptic scaling [33], and therefore a regulation of the ratio of total incoming excitatory/inhibitory drive which behaves in a homeostatic manner follows naturally. It is known that neurons maintain a balance between the total inhibitory and excitatory conductances impinging upon them [60], with both values scaling roughly linearly between the start and finish of UP-states. Notably, neurons from [60] tended to maintain roughly the same slope over the course of UP-states when their total excitatory and inhibitory conductances were plotted against each other over time. This implies that through some mechanism(s) neurons come to a roughly stable ratio of incoming excitation to inhibition. Results from [16] and [15] would seem to back up this assertion, as they noted that higher firing rate pyramidal neurons tended to receive less inhibition overall than their less excitable counterparts. Lastly, it has been shown directly that brain-derived neurotrophic factor (BDNF), the production and possibly release of which is regulated by activity, decreases the amplitude of excitation between pyramidal neurons while increasing the amplitude of excitation from pyramidal neurons to interneurons [61] [62] [25]. Furthermore, decreases in BDNF weaken excitatory connections onto inhibitory neurons while multiplicatively strengthening synaptic connections between pyramidal neurons [63]. Similarly, activity blockades (resulting in the reduction of BDNF) can globally decrease the percentage of GABA-positive neurons in vitro [63].

In order to both model these phenomena and provide homeostatic control over inhibition, a simple rule was used whereby total incoming inhibitory and excitatory currents possess independent multiplicative factors, both of which track homeostatic changes in firing threshold (though in opposing directions). Here σ̄ refers to the scaling factor applied to either total incoming inhibition or total incoming excitation, as detailed in the next section. θ_e/i is an exponential running average of the neuron's threshold using the same time constant (λ) as homeostatic plasticity. Initially θ_e and θ_i are exactly the same; what differentiates them is the "trigger times" t_e and t_i, after which θ_e and θ_i respectively stop updating their values, freezing in place. The mechanism that initiates this freeze is detailed in the next section, but in short it is the time at which excitatory/inhibitory synaptic normalization starts. Before such a time, synapses are growing. In order to allow different neurons to have different total allowed incoming currents, synaptic normalization cannot be enforced until some condition is met; in this case the condition is whether or not total incoming excitatory/inhibitory current exceeds what those values should be based on the synaptic normalization equations. Notably, t_e and t_i can be different values, because this condition is met independently for excitatory and inhibitory inputs.
In any case, the scaling terms are, in essence, the exponentiated difference between the current threshold and (initially) an exponential running average of it, scaled by a constant ρ set to 5 in all simulations. While technically unbounded, the span of all thresholds across all neurons in all simulations never exceeded 10 mV, and variations within the same neuron were quite small (typically < ±0.1 mV) once settled.
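A sketch of one plausible reading of these scaling terms follows; the sign convention (excitation shrinking and inhibition growing when the threshold rises above its frozen average) is inferred from the stated "opposing directions" and is therefore an assumption:

```python
import numpy as np

RHO = 5.0  # scaling constant from the text

def scale_factors(theta, theta_e, theta_i, rho=RHO):
    """Excitatory/inhibitory scaling factors as exponentiated differences
    between the current firing threshold theta and its (eventually frozen)
    running averages theta_e and theta_i. The sign convention is an
    assumption: a threshold above its frozen average (a cell that has been
    too active) shrinks excitation and grows inhibition."""
    sigma_e = np.exp(-(theta - theta_e) / rho)
    sigma_i = np.exp((theta - theta_i) / rho)
    return sigma_e, sigma_i
```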
Synaptic Normalization
Neurons in the model took steps to ensure that the sum of incoming synaptic currents was kept at a constant value, unique to each neuron. In self-organizing models with a constant homeostatic mechanism (one which favors the same firing rate across all neurons) as implemented in [33][34][35] a constant synaptic normalization sum is sensible. Along those lines, one could imagine producing any desired distribution of firing rates in a network solely through manipulation of thresholds, even if total synaptic input were held constant across the constituent neurons. But, while such a scenario is possible in principle it places too much responsibility upon manipulation of the threshold, which must fight against this undue homogeneity of synaptic inputs imposed by our hypothetical modeler. Furthermore, from the standpoint of realism it is known that higher firing rate neurons tend to receive more total synaptic connections than their slower counterparts [15] [16], and studies using transfer entropy have demonstrated a high degree of inequality among neurons in terms of information flow with some 20% of neurons accounting for 70% of information flow in vitro [7].
Attempts to include this sort of heterogeneity have appeared in other models, perhaps most notably [38] where the incoming sum to be normalized was tied to excitatory synaptic in-degree. This produced interesting dynamics including the emergence of excitatory driver cells. However, such a configuration makes the total input to each cell predetermined by the modeler and does not allow cells to develop in accordance with the history of the network in which they are embedded. Fortunately there exists an explicit variable here which is itself dynamic, plastic, and otherwise self-organized which can be used for the purpose of determining total input to each neuron: target firing rate (ν).
Normalization proceeds as follows: each incoming weight w_ij (where i = {1, 2, ..., N} and j is the index of the target neuron) is rescaled multiplicatively so that the incoming weights of each type sum to the saturation value, in essence w_ij ← w_ij · I∞(t)_E/I / Σ_i w_ij. Here I∞(t)_E/I is the maximum total current (saturation value) of each type allowed to impinge on each neuron at time t. Both currents have a linear dependence upon the cell's TFR up to a certain point. The logistic sigmoid is used here to represent the saturation of total current impinging on a cell, giving an initially roughly linear dependence upon the TFR which eventually, nonlinearly, approaches some predetermined maximum. The shape of the sigmoid determines how high ν can be before increases in ν begin providing diminishing returns with respect to the total allowed current at that time. It also determines the value at which total current saturates. The three shape parameters ω_a, ω_b, and ω_c were set to 300, 0.1 and 100 respectively, such that (not accounting for inp_0) the minimum Exc./Inh. saturation for ν = 0 was 500 nA, while the maximum total Exc./Inh. current was 2,000 nA. Each neuron's saturation had an additional term added, inp_0, which was the sum of inputs from the input layer. This gave each neuron "reserved space" in terms of total input, which could be populated by the recurrent connections that were expected to grow. On average inp_0 was 750 nA. One may consider this a substantial bias; after all, some neurons would be initialized with higher saturation values for all values of ν than others. However, in practice heterogeneous initial inp_0 (both in terms of the higher allowed saturation and all-around more input from the input layer) did not bias the network in this way, and inp_0 was a poor overall predictor of final ν (see Fig. 9).
Both currents have a linear dependence on the cell's TFR ν, the slope of which is determined by ω_a (300), which along with ω_b can be used to tune overall synaptic density. Lower values of ω_a place a higher premium on "space" at the entrance to each neuron in proportion to TFR, whereas ω_b has no such dependence. Since synapses must maintain a certain efficacy or face possible pruning, less total allowed incoming current means that the strengthening of stronger synapses is more likely to push weaker synapses below this threshold, possibly "booting" them from the set of incoming synapses to that particular neuron and from the network as a whole. ω_b also has an effect on this, being the "baseline" total current. In our model ω_b was set to be the sum of synaptic connections from input neurons (which themselves are plastic) plus some constant (in our case 40 nA seemed to be a good compromise), thus ensuring that some portion of incoming synaptic current was always derived from recurrent connections. All changes in weight resulting from synaptic normalization are also put through the dampening function (Eqn. (10)) to ensure that the maximum allowed synaptic weight is respected.
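The following sketch shows one logistic parameterization consistent with the stated endpoints (about 500 nA at ν = 0 rising toward a 2,000 nA ceiling, plus the reserved inp_0 term). The exact equation did not survive extraction, so the specific form and the identification of the slope with ω_b = 0.1 are assumptions:

```python
import numpy as np

def saturation(nu, inp0=0.0, i_min=500.0, i_max=2000.0, slope=0.1):
    """Saturating total-current budget (nA) as a function of target firing
    rate nu. The logistic is shifted so that saturation(0) = inp0 + i_min
    and it approaches inp0 + i_max for large nu."""
    x0 = np.log(i_max / i_min - 1.0) / slope  # shift fixing the nu = 0 value
    return inp0 + i_max / (1.0 + np.exp(-slope * (nu - x0)))

print(saturation(nu=0.0))                # 500.0 nA, the stated minimum
print(saturation(nu=10.0, inp0=750.0))   # a neuron with average reserved space
```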
Growth and Pruning
Synapses were initialized between every neuron in the network (all-to-all connectivity) and set to 10^-4 nA (for context, noise current impinging on the membrane potentials was drawn from N(0, 0.1)), meaning that nearly immediately a large portion of synapses took on a weight value of 0, effectively no longer existing in the network. These synapses were deleted in earnest upon the first pruning cycle. Thus all synaptic connections which survive after the first cycle can be thought of as having grown from nothing: although programmatically the network is initialized to a state of full connectivity, effectively it is initialized with no synaptic connections. All pruning cycles after the first can be conceptualized as deleting connections from the initial synaptic arbor. Each cycle was carried out at a specific interval, in this case every 5 seconds of simulated time.
The pruning rule removes only the weakest synapses and does so preferentially from neurons of high degree. Even when a synapse is eligible for deletion, the probability of removal becomes smaller the lower the degree of the neurons involved, so as to reduce the likelihood of producing neurons which receive no connections from the excitatory and/or inhibitory neurons in the network or which have no outgoing synaptic connections. This prevents neurons from becoming completely disconnected from the network.
Specifically, consider a synapse s with absolute efficacy w, emanating from a source/pre-synaptic neuron with a set of outgoing synaptic connections O and projecting to a post-synaptic neuron with sets of incoming excitatory and inhibitory synapses S_e/i. The probability of s being removed from the network is a function of these quantities. Here |·| refers to set cardinality, N_e/i is the number of excitatory/inhibitory neurons in the network, and N is the total number of neurons in the network. w_max is the absolute efficacy of the strongest excitatory/inhibitory synapse in the network (depending upon whether the pre-synaptic neuron i is excitatory or inhibitory), and w_min is an arbitrary value set to 10^-3 nA, simply meant to guarantee the removal of impossibly weak connections. Note that this value is in fact greater than the value to which synapses are initialized at the beginning of the simulation; thus growth in the first 5 seconds is a requirement to remain in the network. Lastly, θ_j and θ_0 are the firing threshold of synapse s_ij's target neuron j and the initial firing threshold of all neurons, respectively. The structure of the rule attempts to reduce the likelihood that a neuron would receive no excitatory/inhibitory connections and simultaneously reduce the likelihood of neurons with no outgoing connections.
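Since the removal-probability equation itself was lost in extraction, the following sketch reproduces only the stated qualitative dependencies (weak synapses likely pruned, low-degree neurons protected, sub-w_min synapses always removed); every functional form in it is an assumption:

```python
W_MIN = 1e-3  # nA; connections weaker than this are always removed

def prune_probability(w, w_max, out_degree, in_degree_same_type,
                      n_total, n_same_type):
    """Illustrative pruning probability for a synapse of absolute efficacy w.
    Weaker synapses (relative to the strongest synapse w_max of the same
    polarity) are more likely to go, discounted by the pre-synaptic
    out-degree and the post-synaptic in-degree so that sparsely connected
    neurons are rarely disconnected further."""
    if abs(w) <= W_MIN:
        return 1.0  # impossibly weak synapses are always pruned
    weakness = 1.0 - abs(w) / w_max   # in [0, 1): weaker -> more likely pruned
    degree_discount = (out_degree / n_total) * (in_degree_same_type / n_same_type)
    return weakness * degree_discount
```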
Synaptic growth occurred using a probabilistic quota system, whereby disconnected pairs received a new connection between them with a probability based upon their distance from one another in 3D space. If no synapse was created between a given pair, a new unconnected pair would be selected. This continued until the quota was filled. Two quotas existed: one for excitatory synapses and one for inhibitory synapses. The quota could not exceed 0.1% of the total possible synaptic population (i.e., if 1 million synapses were possible then no more than 1,000 could ever be added).
Here Q_e/i are the quotas for adding excitatory and inhibitory synapses respectively, while R_e/i is the total number of synapses of each type removed during the last pruning. The probability of forming a connection followed a distance-based rule originally used in [52]: P(a, b) = C e^(−(D(a,b)/λ)²), where D(a,b) is the Euclidean distance in 3-space between unconnected neurons a and b, and λ is a regularizing parameter set such that the maximum possible distance resulted in a connection probability of at least 1% before multiplication by the constant C, which was set to 0.4 in all simulations. This gave a minimum connection probability of 0.4%. Growth phases immediately followed pruning phases and were thus carried out at the same interval. New weights were initialized in the same manner as at the beginning of the simulation, notably with a very low efficacy. This means that newly grown synaptic connections had a negligible effect on network dynamics. Instead they served as random detectors of temporally correlated activity between the pre- and post-synaptic neuron. If no such temporal correlation existed, or it was too weak, the synapse would fail to grow substantially and would eventually be pruned, having had a negligible effect on the post-synaptic neuron during its entire lifetime. Alternatively, if some temporal correlation did exist and was sufficiently strong, the synapse would grow, establishing a new pathway through the network. In effect this would replace a perhaps purely correlational relationship with a potentially direct causal one.
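A sketch of the growth step's distance rule; the Gaussian form C·exp(−(D/λ)²) is the rule from [52], and solving for λ from the stated 1% floor at the maximum distance is shown explicitly:

```python
import numpy as np

C = 0.4  # constant from the text, giving a minimum connection probability of 0.4%

def connect_probability(d, d_max, c=C, floor=0.01):
    """Distance-dependent wiring probability in the style of [52]:
    P = c * exp(-(d / lam)^2). lam is chosen so that the pre-c probability
    at the maximum possible distance d_max equals `floor`, as described in
    the text."""
    lam = d_max / np.sqrt(-np.log(floor))
    return c * np.exp(-(d / lam) ** 2)

# At d = d_max this evaluates to 0.4 * 0.01 = 0.004, the stated 0.4% minimum:
print(connect_probability(d=100.0, d_max=100.0))
```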
Results
Primary to the network's self-organization regime is the specialization of neurons, specifically their convergence upon a unique TFR and the subsequent differences in degree and neighbor preferences accompanying that value. Interestingly, the self-organization of TFRs seems also to lead to a differentiation of a multitude of properties across different neurons in the network. While it is true that TFR appears as a variable in other places (notably as a term in calculating the maximum allowed synaptic input for synaptic normalization), this alone does not obviously explain why certain cells developed certain differences. The analysis here is primarily concerned with ascertaining to what degree MANA can capture features of living neural circuits, with emphasis on the heterogeneity between neurons which self-organized as a result of the metaplastic mechanisms.
Firing rate statistics
The goal of MANA's signature mechanism was to self-organize the target firing rates in a SORN-like model so as to reproduce the roughly lognormal distribution of firing rates which has been consistently reported in the literature in both spontaneous and evoked activity [4,5,64,65] (see [6] for a review). To that end (as it is the foundation of many subsequent results), the ability of the model's formalisms (borrowed from the Log-STDP literature [28]) to produce the desired roughly lognormal distribution of TFRs is of primary concern. This was indeed the case across (and within) 40 networks of 924 neurons each (see Figs. 3A and 4A). However, researchers cannot directly measure a neuron's TFR (only the mean firing rate over some time interval), and thus the distributions of firing rates reported in the literature are empirically observed averages of the number of spikes per time interval. Therefore it was necessary to check that MANA's empirically observed firing rates were also roughly lognormal and tracked well with the TFRs, the latter being necessary to validate the effectiveness of the combination of the meta-homeostatic and homeostatic firing rate plasticities. Indeed, empirically observed firing rates of 36,960 neurons across 40 networks, observed over the last 700 s of simulated time, were roughly lognormal and could be fitted to their TFRs with R² = 0.9997, indicating that neurons' empirical average firing rates were very close to their self-organized target values (see Fig. 4B & C).

(Fig. 3 caption:) Thresholds and TFRs developed over the course of one of the simulations, colored by the polarity of the neuron (excitatory: orange; inhibitory: blue). The distributions of (A) and (C) at t = 2,000 s can be seen in (B) and (D) respectively. The intrinsic plasticity mechanism begins the simulation with a very high learning rate, which decays exponentially with time, loosely analogous to temperature in a simulated annealing or heat-bath algorithm. Simultaneously, the homeostatic plasticity mechanism altered the firing threshold, acting as an attractive force attempting to pull the TFR toward whatever value it happened to take on at the time.

(Fig. 4 caption:) A) The distribution of empirically observed firing rates, calculated by counting the number of spikes from each neuron during the last 700 s of the simulation, for all neurons across all 40 networks, accompanied by a lognormal fit. B) A similar fit for the TFRs (ν), demonstrating that both target and empirical firing rates converge upon a roughly lognormal distribution. C) To ensure that MHP and HP were working properly, not only must both target and empirical firing rates take on a roughly lognormal distribution, but for each individual neuron the TFR should mirror the average empirical firing rate. Here target and empirical firing rates are plotted against each other for all neurons across all simulations and fitted with a linear function (R² = 0.9997; m = 1.003). Indeed, the mechanism effectively acts upon empirical firing rate.
Synaptic efficacy statistics
Synaptic efficacies of all varieties (Exc.→Exc., Exc.→Inh., Inh.→Exc., and Inh.→Inh.) also took on heavy-tailed distributions (see Fig. 5). This represents the first result which was not in some way intrinsically built into MANA. Heavy-tailed distributions of synaptic efficacy have been found in SORN models among the network's excitatory synapses [35]; here we are able to produce such a distribution among MANA's inhibitory synapses as well, due to our inclusion of iSTDP. Interestingly, the distributions of synaptic efficacies for inhibitory neurons (Fig. 5A) were quite similar in shape to the excitatory synaptic efficacy distributions, despite the former having very different STDP windows from the latter (insets of Fig. 5A). To the author's knowledge, MANA with iSTDP is the first complete network model to approximately produce or otherwise self-organize the heavy-tailed distribution of inhibitory synaptic currents found in the literature [11][12][13]. In general, lognormal distributions of synaptic efficacy have been found both in living tissue [8][9][10] and in functional connectivity [7] (again, for a review see [6]). The distributions of synaptic efficacy here and in SORN models [35] seem to carry a roughly lognormal shape but with a much heavier left-hand tail. Given the strong possibility of under-sampling of very weak synaptic connections in empirical studies, the distributions here seem at the very least plausible.
MANA reproduces wiring differences between high and low firing rate neurons
Studies performed on transgenic mice expressing a green fluorescent protein coupled to the activity-dependent c-fos gene demonstrated key differences in the wiring between neurons which expressed c-fos (were more active) and those which did not (had a history of less activity) [20] [15] [16]. If MANA's self-organization scheme (which includes the emergence of analogs to the c-fos-expressing, highly active neurons in the data, in the form of neurons with high TFRs) is plausible, then it ought to be expected that differences in wiring similar to those found in [16] would be observed.
Indeed, MANA was able to replicate many observed differences in wiring from [15] and [16]. These include: 1) c-fos-expressing (more active/high firing rate) neurons have more afferent excitatory connections; 2) the mean uEPSPs of those connections were not stronger than the mean excitatory connections impinging on neurons which did not express c-fos (i.e., high-activity neurons have more, but not stronger, incoming connections); and 3) excitatory neurons expressing c-fos received decreased inhibition compared to their less active counterparts. Points (1) and (2) were clearly the case with MANA (see Fig. 6). This behavior was not programmed into the network. Synaptic normalization did indeed operate so as to give higher firing rate neurons more total allowed incoming current; however, this did not guarantee that the settled-upon total incoming current would come from large numbers of synapses with strengths similar to those of lower firing rate neurons, as opposed to smaller numbers of much stronger excitatory synapses. Additionally, while neurons were able to manipulate their total incoming Exc./Inh. ratios, no mechanism was preprogrammed in a manner which forced high-TFR neurons to have reduced inhibitory drive.
MANA produces nonrandom topological features
Patch-clamp studies of connectivity by Perin et al. [14] and Song et al. [8] have shown that excitatory neurons cluster in non-random patterns. Interestingly, this result has also been found in studies using effective/functional connectivity [67]. Using the same methods as in [8] (for comparison purposes), whereby null models were derived from the base connection probability, the over-representation of specific 3-motifs was examined (Fig. 7). Not only did certain 3-motifs appear over-represented in approximately the same way, but this over-representation became even more pronounced when only stronger synapses were considered, replicating the findings of [8], whereby the over-represented motifs were comprised of stronger synapses, forming a network backbone of nonrandom triadic connections.
(Fig. 6 caption:) Qualities of low and high firing rate neurons across all 40 networks; orange (blue) represents excitatory (inhibitory) neurons. A) Average strength of incoming excitatory synapses plotted against firing rate, demonstrating that high firing rate neurons do not receive stronger incoming excitatory connections than low firing rate neurons. B) Excitatory in-degree is a good predictor of firing rate. Combined with (A), this demonstrates that high firing rate neurons receive more, not stronger, incoming excitatory connections than their low firing rate counterparts, as found in [16]. C&D) The proportion of total incoming drive which is inhibitory for each neuron, plotted against TFR. Low firing rate neurons of both types receive more inhibition while high firing rate neurons, regardless of type, receive less, a relationship known to exist in the data [16] [15]. E) A decomposition of (A) into two bars showing the minimal difference in average incoming excitatory current between neurons with TFRs less than and greater than 10 Spks/s. F) A representation of (C&D) in bar form, again comparing all neurons with TFRs less than and greater than 10 Spks/s.

Using methods similar to those developed in [14], the over-representation of higher numbers of connections within 3-, 4-, 5-, and 6-neuron clusters was examined here, so as to compare between the living data and MANA. Given the size of the excitatory subnetworks (typically 780 neurons) and the number of network data sets (40), slight deviations from the techniques in [14] were required. In order to be statistically rigorous while maintaining computational tractability, the following scheme was used. As in [14], our null models consisted of a random rewiring of the network which preserved node degree (in and out) as well as the number of bidirectional and unidirectional connections. For each cluster size, 10 million distinct combinations of 3-6 nodes were randomly chosen from each of the 40 networks and the number of connections found within each cluster was recorded. For each of the 40 networks, 1,000 null models were generated, and 10,000 unique combinations of 3-6 neurons were sampled from them. This generated a distribution of the quantity of connections within each cluster type for each of the 40 networks, as well as a null distribution for each. Despite the discrepancy in sample counts, results remain comparable since the same null models were used; the only difference exists in the sampling, since a combinatorial explosion makes a full survey of the space of null models impossible. A two-sample KS statistic between the real and null distributions for each number of connections in each cluster size was used to calculate our p-values. The results of this analysis can be seen in Fig. 8, which broadly demonstrates that the model self-organizes more tightly coupled clusters than would be expected by chance, as has been found in patch-clamp [14] and effective connectivity [67] studies.
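A minimal sketch of this analysis pipeline: a degree-preserving double-edge-swap randomization (note that the paper's null models additionally preserve the count of bidirectional connections, which this sketch does not enforce), random cluster sampling, and a two-sample KS comparison:

```python
import numpy as np
from scipy.stats import ks_2samp

def degree_preserving_rewire(edges, n_swaps, rng):
    """Randomize a directed edge list by repeated double-edge swaps,
    preserving every node's in- and out-degree."""
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    for _ in range(n_swaps):
        i, j = rng.integers(0, len(edges), size=2)
        (a, b), (c, d) = edges[i], edges[j]
        # propose swapping targets: (a, b), (c, d) -> (a, d), (c, b)
        if a == d or c == b or (a, d) in edge_set or (c, b) in edge_set:
            continue
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def cluster_connection_counts(edge_set, nodes, size, n_samples, rng):
    """Sample random clusters of `size` nodes; count connections inside each."""
    counts = []
    for _ in range(n_samples):
        cluster = rng.choice(nodes, size=size, replace=False)
        counts.append(sum((u, v) in edge_set
                          for u in cluster for v in cluster if u != v))
    return counts

# Usage: compare real-network counts against pooled null-model counts, e.g.
#   rng = np.random.default_rng(0)
#   stat, p = ks_2samp(real_counts, null_counts)
```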
MANA self-organizes specialized groups
The laminar structure of cortex is a well-studied phenomenon in neuroscience. Different layers of mammalian cortex are populated by different cell types and have particular relationships to one another. In particular, cortical layers differ with respect to where their inputs originate, where their outputs target, and the degree to which they serve as inputs and/or outputs to the column as a whole. For instance, layer IV is known to receive significant amounts of input from thalamic "core" or C-type cells [68] and to send a great deal of output to layers II/III [9,[15][16][17]. This thalamus → Layer IV → Layers II/III pathway has been studied extensively as being central to early cortical processing of inputs, particularly in barrel cortex [9,[69][70][71][72]. Reconstructions of the connectivity between cortical layers in barrel cortex demonstrate that the layers differ greatly in where within the column they send and receive synaptic connections and in the degree to which they connect to themselves [9]. Computer simulations of this reconstruction demonstrated that stimulation of Layer IV had the greatest chance of spreading activity across the entire column. Indeed, certain hodological themes have been identified across species and areas of cortex, which have distinct groups of cells whose connectivity implies distinct input/output/recurrent processing roles [17].

The recurrent MANA reservoir is driven by 100 input neurons which are completely controlled by the experimenter, receive no inputs from the MANA reservoir, and otherwise lack any sort of autonomous dynamics. This "input layer" is not part of the network proper, though the synapses connecting the input to the reservoir are. At an initial input density of 25%, each reservoir neuron was on average connected to 25 input neurons; since these connections were random, the number of incoming connections takes the form of a binomial distribution with success probability p = 0.25 and n = 100 trials. Meta-homeostatic plasticity directly ensures that neurons have different preferred levels of activity and, in accordance with the TFR, different amounts of incoming innervation. However, MHP in no way dictates or biases where that innervation comes from. If no functional distinctions were occurring in MANA with respect to the input layer/signal, then it stands to reason that the degree to which neurons receive input from the input layer would not change significantly by the end of the simulation and/or that each neuron would receive roughly the same amount of innervation from the input layer. This was not the case. Neurons in the MANA reservoir took on a wide variety of levels of innervation from the input layer; in particular, a large proportion of neurons lost all input-layer innervation, becoming fully recurrent, which distinguishes them from neurons which retained significant input-layer drive (see Fig. 9 D-H). Inputs from the input layer to the neurons which retained their input-layer drive were also correlated, as seen in Fig. 9G. The implication of this configuration is that MANA reservoirs self-organize such that specific neurons handle external drive while others do not: external signals must first pass through the neurons with significant (correlated) input-layer drive before coming into contact with the other neurons in the network.
Notably, initial innervation from the input layer is a poor predictor of TFR, indicating that the latter is determined by properties of the input patterns and the self-organization of the MANA layer much more so than by initial conditions (see Fig. 9 B). Thus the very small amount of initialization in MANA (i.e. the weights from the input-layer to the reservoir) did NOT ultimately bias reservoir TFRs.
In addition to the amount by which synaptic inputs to each neuron from the external input layer changed, the proportion of each neuron's input which originated in the input layer was considered. This gives a more subtle quantitative measure of the neuron's role in the network with respect to the input layer. To this end, each neuron is assigned an "inputtedness" score, which is 0 if a neuron only receives inputs from other neurons in the recurrent reservoir and 1 if a neuron receives all its synaptic inputs from the external input layer. This reveals, among other things, that MANA self-organizes both feed-forward and feedback inhibition and that these roles are taken on by different inhibitory neurons, since some inhibitory neurons have a high inputtedness score while others have a score of 0 (Fig. 9). Fig. 9 shows that a very large fraction of neurons end the simulation with zero input from the input layer and are thus fully recurrent. Those that retain input-layer drive receive strongly correlated inputs, which can be seen in the vertical striping of the weight matrix connecting the input layer to the reservoir (Fig. 9G). It is worth noting that input correlations (which can be seen in the reservoir as well, Fig. 10B) have been shown to be a prerequisite for lognormal firing rate distributions in neural networks [45].
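The inputtedness score is straightforward to compute from a neuron's afferent weights; a sketch (the partitioning of afferents into input-layer and recurrent sets is assumed to be available):

```python
import numpy as np

def inputtedness(w_input_afferents, w_recurrent_afferents):
    """Proportion of a neuron's total incoming synaptic drive that
    originates in the external input layer: 0 for a fully recurrent
    neuron, 1 for a neuron driven only by the input layer."""
    inp = np.sum(np.abs(w_input_afferents))
    rec = np.sum(np.abs(w_recurrent_afferents))
    total = inp + rec
    return 0.0 if total == 0 else inp / total
```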
Knowing that some neurons receive more external input while others receive more internal input, as well as that some neurons have a preference for the degree to which their incoming/outgoing neighbors receive connections from the input layer, implies a division of labor. Thus, if such a division of labor is present, we should expect some neurons which are themselves all or mostly recurrent to receive connections from neurons which receive external drive. To test this we measured, for the incoming and outgoing neighbors of each neuron, the proportion of their total incoming drive originating in the recurrent layer. Here I_R and I_I refer to the sets of in-neighbors from the recurrent MANA reservoir and the external input layer respectively. O_R is the set of out-neighbors in the recurrent MANA reservoir (note the absence of an O_I, which would be out-neighbors in the external input layer, since such connections were not permitted, i.e., O_I ≡ ∅). N and M are the set cardinalities of I_R and O_R, i.e., the number of in-neighbors of neuron j and out-neighbors of neuron i in the recurrent reservoir, respectively. In this notation h is presynaptic to i, which is presynaptic to j. Neighbors in I_I were not counted since they themselves received no inputs; however, weights from neurons in I_I were counted (otherwise a comparison between drive from I_I and I_R would be impossible). Finally, this gives us r_j and r_i for each neuron: the average proportion of recurrent drive across the in- and out-neighbors respectively.

(Fig. 9 caption:) Diversification of input selectivity. A) Overlaid histograms of the strengths of synapses connecting the input layer to the recurrent reservoir before and after self-organization, demonstrating an overall decrease in strengths and quantities. B) Total drive from the input layer (as these synapses were initialized to nonzero values) against final TFR demonstrates no discernible correlation between initial drive and final TFR, ruling out the different starting drives to each neuron as a significant bias toward final activity-related outcomes. C) An example network before and after self-organization with each neuron colored according to the total input-layer drive it receives. Notice that most input→reservoir synapses are significantly pruned, that absolute input drive decreases for all neurons, and that recurrent connections have grown. Only the top 2.5% and 10% of synapses for input→reservoir and reservoir→reservoir are drawn, for visibility. D) A plot of "inputtedness," defined as the proportion of synaptic drive originating in the input layer, against TFR. Neurons exhibit a diversity of levels of inputtedness, while others settle such that all or nearly all their connections from the input layer are pruned. Some neurons thus receive from the input and project to the rest of the reservoir, while others receive only from the reservoir. For inhibitory neurons this indicates the presence of both feedforward and feedback inhibition. E) The explained variances when PCA is performed on the columns of the input→reservoir weight matrices. Fewer principal components are required to explain more of how reservoir neurons select from the input.
F&G) An example input→reservoir weight matrix before and after self-organization, demonstrating the evolution from a randomized 25%-dense matrix into a matrix with blatant organization, specifically correlated inputs to reservoir neurons (noted as a prerequisite for lognormal firing rates [45]). H) An example network where each neuron is colored by the proportion of total drive from the input layer. Notice significant diversification, as well as a significant number of neurons which have lost all input-layer drive.

The picture painted by the results in Figs. 9 & 10, which consider only the Exc.→Exc. subnetwork, implies a division of labor reminiscent of the hodology discussed in [17], whereby certain populations of neurons receive inputs from specific sources and are specialized to that end. In MANA not only do distinct populations of neurons exist which do and do not possess input drive, but the results in Fig. 10 indicate that the neurons which do not receive direct input-layer drive can be further subdivided into those which receive drive from the neurons which do receive input-layer drive and those which do not. In other words, a portion of the population that does not receive direct drive from the input layer is highly innervated by the population of neurons which does. These neurons then feed other neurons which receive no input-layer drive. Notably, the neurons which receive drive from input-layer-driven neurons and project to neurons without input drive possess the highest out-degrees. Indeed, nearly all the high out-degree neurons have this property, while neurons in the other two groups have lower out-degrees. This indicates that these populations differ not only in where they receive inputs and send outputs, but also in some of their intrinsic attributes. That is, the populations appear to be composed of different kinds of neurons with different qualities, insofar as such a thing is expressible with leaky integrate-and-fire point neurons. In sum, each population has distinct, specific sources of input and targets of output, arranged such that each population takes as input the output of the last. This input selectivity and arrangement, combined with differences in attributes like out-degree, is highly indicative of specialization, and it emerged entirely through self-organizing mechanisms acting on much lower-level aspects of the network.
MANA self-organizes hubs
It is well established that neurons are highly heterogeneous in terms of attributes like firing rate and synaptic degree, having more or fewer incoming/outgoing connections to other neurons and possibly stronger connections to said neurons [4-7, 20, 67, 74]. Furthermore, synaptic structure is thought to possess some scale-free or small-world attributes, which have been found in studies of functional connectivity [7,67,74]. Such structures are considered ideal for neural circuits since they represent a compromise between wiring cost and efficiency [75,76], and indeed hub neurons in line with this topology have been found in studies of functional connectivity [7,67,74,76]. Over the course of its self-organization, the network not only produces high-degree hub neurons, but also settles into a state wherein these hubs are more highly connected to one another than would be expected by chance, forming a so-called "rich club" [66]. This particular quality of hubs has been observed both in studies of connectivity of the mammalian microconnectome [77] [78] [7] and directly in the synaptic connectivity of C. elegans [79]. The notion of a rich club extends to any parameter of the nodes in the network, since it merely measures whether or not nodes rich in the quantity of choice connect to one another beyond chance. The rich-club coefficient for directed networks is defined as φ(k) = E_{>k} / (N_{>k}(N_{>k} − 1)), where k is the richness parameter (synaptic degree unless otherwise specified), E_{>k} is the number of directed edges between nodes whose richness parameter is greater than k, and N_{>k} is the number of nodes which possess a richness parameter greater than k [66] [80]. However, because this value tends to increase monotonically even for random networks, richness is typically measured with respect to a null model to produce a "normalized rich-club coefficient," φ_norm(k) = φ(k) / φ_null(k), which shows how much more (or less) the given network's hub nodes connect to each other than one would expect from chance. Here φ_null is the mean rich-club coefficient of 100 networks for which synaptic connections have been rewired, preserving the degree distribution but otherwise randomizing the structure, consistent with the literature [66].
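A sketch of both coefficients; `adj` is assumed to be a boolean adjacency matrix with no self-connections, and the null networks are assumed to be degree-preserving rewirings (e.g., produced by the edge-swap routine sketched earlier):

```python
import numpy as np

def rich_club_coefficient(adj, richness, k):
    """Directed rich-club coefficient: the density of edges among nodes
    whose richness parameter exceeds k,
    phi(k) = E_{>k} / (N_{>k} (N_{>k} - 1))."""
    rich = richness > k
    n = np.count_nonzero(rich)
    if n < 2:
        return np.nan
    e = np.count_nonzero(adj[np.ix_(rich, rich)])
    return e / (n * (n - 1))

def normalized_rich_club(adj, richness, k, null_adjs):
    """phi_norm(k): the empirical coefficient divided by the mean
    coefficient over degree-preserving randomized null networks."""
    phi = rich_club_coefficient(adj, richness, k)
    phi_null = np.mean([rich_club_coefficient(a, richness, k)
                        for a in null_adjs])
    return phi / phi_null
```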
Rich clubs emerged very robustly in the Exc.→Exc. subnetwork, appearing when using in-degree, out-degree, and total degree as the richness parameter. These appeared both in the intact subnetworks and when only the top 10% of synapses (within each subnetwork) were used. Rich clubs across the whole network appeared to exist, but less reliably so, with a significant portion (though not a majority) of the 40 networks having no out-degree or total-degree rich clubs when inhibitory neurons are included. Interestingly, a strong in-degree rich club does appear to be present. Results for the full networks where only the top 10% of synapses (within each network) were considered appear similar to the intact full networks.
Interestingly, the model also produces inhibitory hub neurons, which have been reported in [74]: inhibitory neurons tended to have the highest degrees (in and out) in all of the networks. These inhibitory hubs are indeed more connected to other hubs/rich nodes with respect to in-degree (see Fig. 11(A)) than would be expected by chance and are thus members of one of the rich clubs. The implications of a strong in-degree rich club but a weak or nonexistent out-degree rich club when inhibitory neurons are present, combined with the knowledge that the inhibitory neurons are in fact the richest with respect to those parameters, lend themselves to the rather interesting conclusion that there must exist two groups of inhibitory neurons: ones with high in-degree and ones with high out-degree. Those with high in-degrees are in a rich club, implying that widespread network activity will activate these inhibitory neurons, silencing the network but also themselves, thus allowing for recovery. Those with high out-degrees are not in a rich club, implying that activation of these interneurons can be used to silence large portions of the network in ways which do not directly inhibit one another. High interconnectivity of inhibitory neurons has been observed in an optogenetic study of acute slices of mouse visual cortex [81]. In particular, [81] found that parvalbumin-expressing interneurons (the largest type, representing 36% of the total population) inhibited one another as strongly as they inhibited pyramidal cells, whereas the second largest group, expressing somatostatin, heavily inhibited parvalbumin-expressing interneurons and pyramidal cells but did not inhibit themselves [81]. In general, a high degree of inhibition of inhibition appears in MANA (see Fig. 5), but in a highly organized fashion.
Discussion
Self-organization is key to how brains develop and change in response to new information. It has long been understood that no single plasticity mechanism acting alone can explain how the characteristic features of cortex arise. Models like the SORN have made substantial progress in detailing how some of these mechanisms might interact and why that interaction is beneficial or otherwise evidenced by current data. However, until now we have lacked a comprehensive model capable of generating a wide array of cortical circuit features from a relative null state, one which can serve as the basis for experiments requiring the context of the successful interaction of many mechanisms (as is the case in living tissue). We have demonstrated the first recurrent spiking neural model capable of self-organizing its own TFRs. Beyond that, we have demonstrated a large-scale SNN model which self-organizes its inhibitory connectivity both in terms of strength and structure, and we have combined both these features with other mechanisms so as to "fully" self-organize a realistic neural circuit from the ground up. The resulting network is stable in spite of the fact that key aspects of the network (most notably TFRs and connectivity) remain in flux for a substantial portion of its self-organization, showing that stabilizing mechanisms like homeostatic plasticity and synaptic normalization are sufficient to provide stability even in a network with "moving targets" with respect to the target activity levels of its individual neurons.
Models including both aspects of neuronal self-organization (differentiation/development and homeostasis) are quite desirable, but rare. RNNs have a history of promising a wealth of advantages over feed-forward and more classical AI techniques (and often delivering: [82] [83]), but are frequently sidelined given their notorious difficulty and impenetrability. Though the reservoir computing paradigm has eased some of these difficulties, self-organization goes further: it allows mechanisms intrinsic to the network to organize around inputs from the problem space and, ideally, to find an optimal structural and behavioral configuration. Indeed, work from [33] and [84][85][86] has shown this to be the case. Although the parameters of the self-organization would likely need some tuning to the application, the point of a self-organizing network, both in the hands of engineers and from the standpoint of a natural system, is to reduce what could be a lengthy or expensive parameter search for an optimal network for the problem domain.
Validity
The validity of MANA, with respect to the aggregate of its parameters and particular design decisions, must be understood in the context of the initial goals of the project: design a cortical model which possesses as many features of living circuits as we are aware of, doing so entirely through self-organization, where the mechanisms of self-organization have biological analogs as much as possible. Justification with respect to biology for each of the mechanisms can be found in its respective subsection within the Materials and Methods section; however, since reproducing cortical features through self-organizing mechanisms was a higher-level priority than always possessing biological analogs, some mechanisms and parameters merely reflect a particular design toward achieving those higher-priority goals. Indeed, since certain biological mechanisms related to neuroplasticity are not fully understood, and other attempts using only mechanisms with direct analogs required hand tuning [35], the insertion of mechanisms with no direct known biological analog was necessary from the outset. That is, the central conceit of MANA arises from the initial goals laid out in the Motivation and Goals section, specifically guideline (3), which stated that all mechanisms must have biological analogs except where doing so hinders either the reproduction of characteristic circuit features or the constraint that such reproduction must occur via self-organizing mechanisms. There existed, until now, no such mechanism for the self-organization of TFRs, and thus conforming to (1) and (2) required the invention of a hypothetical, phenomenological mechanism to do just that. Furthermore, it must be understood that MANA does not necessarily represent the only or best way of achieving the results presented here, nor is it the goal here to decisively prove such a thing. The contribution of this work resides in its being the first such comprehensive model to attain the aforementioned goals, a proof of concept that such a thing is possible, and an example of how similar models might be approached and designed moving forward. We propose (with substantial evidence) that the equations laid out here make a good model of generic cortical circuits, given the wide array of replicated phenomena and the fact that such phenomena were replicated from a null state.
Empirical studies aimed at finding certain key hallmarks of the particular formalisms here would be required to make more comprehensive statements as to MANA's validity as a scientific model. For instance the specific rule here for meta-homeostatic plasticity comprises a set of unambiguous formalisms translating the relationship between a neuron and its pre-synaptic neighbors into a realistic distribution of firing rates. The validity of those formalisms would require an empirical study to determine the relationships between the firing rates of pre-and post-synaptic neurons in living tissue, specifically looking for differences in mean firing rates that would match with those predicted by the equations. Such an experiment has not been carried out as of yet, and constitutes an avenue of future research.
In this way the elements of MANA are guides to what sorts of mechanisms and relationships we might expect to find in living tissue. They are a conceptual scaffolding upon which network self-organization can be discussed and experiments formulated. The specific equations represent a hypothesis about what the myriad of more complex, lower-level biological mechanisms might be implementing or working in service of, one which is supported by evidence in the form of its stability and broad range of replicated phenomena, but not grounded in experiment. To be clear, a different phenomenon found to be at work in some part of a network can only fully invalidate its counterpart in MANA if it can be shown that the phenomenon is not in service of, or performing the exact same function as, its theoretical counterpart, or offers greater explanatory power. Fundamentally MANA is a theoretical construct, and that nature defines its scope and capabilities. A good example of this lies in the fact that homeostatic modifications to synapses and firing thresholds generally occur on the order of hours to days of real-world time, whereas in MANA these changes occur on the order of seconds to minutes. This is a problem endemic to virtually all self-organizing models in this class [33,35,36], and as pointed out in [87], the model "equivalents" of these real-world mechanisms should for that reason be called "rapid compensatory processes" (RCPs). However, that very same review argues that RCPs are fundamentally necessary to stabilize Hebbian plasticity and must exist in some form alongside their slow counterparts in the real world. The implication is that some set of as-yet-undiscovered lower-level mechanisms, related to or interacting with the slower homeostatic processes, must be implementing RCPs in one form or another. MANA can be thought of as a collection of just such RCPs, though it remains to be seen whether MANA might be stable with much smaller learning rates for its processes. It is notable that in Fig. 3, after an initial rapid set of changes in thresholds and TFRs from the null state, fluctuations in threshold tend to be very small, perhaps indicating that once MHP organizes the distribution of firing rates and, in conjunction with other processes, orchestrates the firing rates of the cells impinging on each neuron, rapid homeostatic plasticity may not be necessary for the overall maintenance of activity within some acceptable range. This represents another avenue of future research for MANA.
Future Work
MANA opens up numerous avenues of future research. In fact this particular aspect of MANA can be considered to be its single strongest contribution. The original goal was to reverse engineer a neural circuit, which required that A) a wide variety of circuit features could be accounted for, and B) that those features had to be obtained via mechanism. The motivation for such an approach was to maximize the likelihood that whatever benefits such features and mechanisms conferred in biological circuits would be present in the artificial version. Having achieved (A) and (B), the next avenue of research clearly centers around ascertaining the computational properties of MANA, its applicability to different tasks, and the contributions of the different mechanisms and features to these. Indeed unpublished, preliminary results indicate that MANA can perform pattern separation on complex spatio-temporal inputs, and in particular action recognition tasks from video.
In addition to this more ambitious goal, a large amount of work remains with respect to exploring different aspects of MANA's quite large parameter space. While broad spectrum parameter searches are intractable here, smaller scale manipulations could provide valuable insight into the effects of the manipulations in the context of all of MANA's mechanisms. It has been demonstrated that, for instance, STDP becomes indistinguishable from a firing rate based mechanism when the pre-and post-synaptic neurons fire in a realistic manner [88]. A model possessing such a broad scope of mechanisms, which is known to produce realistic cortical features is ideal for providing a realistic setting for studying the effects of certain manipulations.
Models of this type allow us to study how different inputs to the network may affect development; indeed, MANA can also be used as a tool for scientific modeling. MANA as presented here developed around a very artificial input stream meant only to provide some vague degree of statistical structure. However, this is far from the much richer set of inputs (and outputs) presented to a real developing brain. By no means is our model an "end-all, be-all," and indeed, when given richer inputs, it may reveal things still missing from our understanding. Von Melchner and colleagues rewired the retinal outputs of neonatal ferrets so as to innervate their auditory rather than visual cortex, and found that not only did the ferrets' auditory cortices develop retinal maps normally found in visual cortex, but they responded to visual stimuli in ways consistent with visual, not auditory, perception [89]. This points to a certain genericness of cortex, which can adapt to interpret arbitrary stimuli. Thus if our network model is "correct" then we should see it develop differently with different stimuli, and in ways which mirror the associated cortices in mammalian brains. Such avenues of research offer a "win-win" in that we either show our model to be correct or (hopefully) learn something about why it is incorrect. In the latter case self-organizing models give us a platform to test hypotheses. A model which self-organizes realistic cortical structure and behavior in a general sense is a template from which the mechanisms behind the specific structure and function of various brain regions can be studied.
Conclusion
Based upon these results we can conclude that the formalisms laid out here, which constitute MANA, are indeed sufficient for the reproduction of a wide variety of known, highly complex and nonrandom features of cortical circuits, and thus the stated goals have been achieved. We have presented a self-organizing model which builds upon prior work in the field by introducing a metaplastic mechanism that guides the self-organization of multiple plastic mechanisms in the network. We have also included inhibitory plasticity pervasively, including inhibition of inhibition, in a manner as yet unseen in models of this type. The metaplastic architecture developed here, including several regulatory mechanisms, is both stable and capable of reproducing a wide variety of known features of living neural circuits. While many topological features reproduced here can be and have been explained by models like the SORN [35], those models have been unable to provide a phenomenological account of the development of TFRs which can account for the features of synaptic topology and other properties known to be associated with highly active neurons [16] [15]. This is not limited to the particulars of properties associated with neurons commensurate with their firing rates, and extends more generally into a phenomenological account of differentiation. Neurons which emerged from the MANA model not only possessed distinct qualities having to do with firing rate, but also had unique relationships to other cells in the network, particularly between recurrent and input signals, implying distinct functional roles. Notably, all of these features emerged entirely from mechanisms. The only hand-tuning which could be said to have occurred with respect to the overall outcome concerned parameters of the mechanisms which produced those outcomes. In this way MANA represents the first complete phenomenological model of generic cortical circuitry, in that it provides a set of functions which collectively take as input model neurons and synapses and produce as output an artificial cortical circuit possessing a wide variety of features known to exist in living circuits.
"year": 2017,
"sha1": "a13fc151f95d1e1abb6ce10fffeba88f5f9736c6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c092c4b61039dd20a31594dd44ffe64a04382a5f",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Mathematics"
]
} |
LEOPARD: Identifying Vulnerable Code for Vulnerability Assessment through Program Metrics
Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort where they are needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or the lack of features to characterize vulnerabilities. The latter needs prior knowledge of known vulnerabilities and can only identify similar, but not new, types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It works in two steps, combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects demonstrate that LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable, outperforming machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, eight of which are new vulnerabilities.
I. INTRODUCTION
Vulnerabilities are one of the key threats to software security [42]. Security experts usually leverage guided fuzzing (e.g., [14,50,66,67]), symbolic execution (e.g., [12,17,27,60]) or manual auditing to hunt vulnerabilities. As only a few vulnerabilities are scattered across a large code base, vulnerability hunting is a very challenging task that requires intensive knowledge and is comparable to finding "a needle in a haystack" [81]. Therefore, a large amount of time and effort is wasted in analyzing the non-vulnerable code. In that sense, identifying potentially vulnerable code in a code base can guide vulnerability hunting and assessment in a promising direction.
While vulnerability identification has attracted great attention, some problems still remain. On one hand, metric-based techniques are mostly designed for one single application (or a few applications of the same type). Thus, they might not work on a variety of diverse applications, as machine learning may over-fit to noise features. Moreover, while an empirical connection between vulnerabilities and bugs exists, the connection is considerably weak due to the differences between vulnerabilities and bugs [15]. As a result, research on bug prediction does not directly translate to vulnerability identification. Unfortunately, the existing metric-based techniques use similar metrics to those in bug prediction, and thus fail to capture the characteristics of vulnerabilities.
On the other hand, metric-based and pattern-based techniques mostly require a great deal of prior knowledge about vulnerabilities. In particular, a large number of known vulnerabilities are needed for effective supervised machine learning in some metric-based techniques. The number of vulnerabilities is much smaller than the number of bugs, and the imbalance between non-vulnerable and vulnerable code is severe, which hinders the applicability of supervised machine learning to vulnerable code identification. Similarly, a prerequisite of pattern-based techniques is the existence of known vulnerabilities as the guideline to formulate patterns. They can only identify similar but not new vulnerabilities. Further, patterns are often application-specific, and thus those techniques are usually applicable to in-project but not cross-project vulnerable code identification.
In this paper, we propose a vulnerability identification framework, named LEOPARD 1 , to identify potentially vulnerable functions in C/C++ applications. LEOPARD is designed to be generic to work for different types of applications, lightweight to support the analysis of large-scale applications, and extensible with domain-specific data to improve accuracy. We design LEOPARD as a pre-step for vulnerability assessment, not to directly pinpoint vulnerabilities. We propose three different applications of LEOPARD to guide security experts during manual auditing or automatic fuzzing by narrowing down the space of potentially vulnerable functions.
LEOPARD does not require any prior knowledge about known vulnerabilities. It works in two steps by combining two sets of systematically derived program metrics, i.e., complexity metrics and vulnerability metrics. Complexity metrics capture the complexity of a function in two complementary dimensions: the cyclomatic complexity of the function, and the loop structures in the function. Vulnerability metrics reflect the vulnerable characteristics of functions in three dimensions: the dependency of the function, pointer usage in the function, and the dependency among control structures within the function.
LEOPARD first uses complexity metrics to group the functions in a target application into a set of bins. Then, LEOPARD leverages vulnerability metrics to rank the functions in each bin and identify the top functions in each bin as potentially vulnerable. We propose such a binning-and-ranking approach as there often exists a proportional relation between complexity and vulnerability metrics, which is evidenced in our experimental study. As a result, each bin has a different level of complexity, and our framework can identify vulnerabilities at all levels of complexity without missing low-complexity ones.
We implemented the proposed framework to obtain complexity and vulnerability metrics for C/C++ programs. We evaluated the effectiveness and scalability of our framework on 11 real-world projects of different types. LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as potentially vulnerable, outperforming both typical machine learning-based and static analysis-based techniques. Applying LEOPARD to PHP, MJS, XED, FFmpeg and Radare2, with further manual auditing or automatic fuzzing, we discovered 22 new bugs, among which eight are new vulnerabilities.
In summary, our work makes the following contributions.
• We propose a generic, lightweight and extensible framework to identify potentially vulnerable functions, which incorporates two sets of program metrics.
• We propose three different applications of LEOPARD to guide security experts during manual auditing or automatic fuzzing to hunt for vulnerabilities.
• We implemented our framework and conducted large-scale experiments on 11 real-world projects to demonstrate the effectiveness and scalability of our framework.
• We demonstrated three application scenarios of our framework and found 22 new bugs.
1 Leopard is known for its opportunistic hunting behavior, broad diet, and strength, which reflect the identification capabilities we are pursuing.

In this section, we present an overview of LEOPARD and elaborate on each step of the proposed approach.
A. Overview

Fig. 1 presents the workflow of LEOPARD, which is designed to be generic, lightweight and extensible. The input is the source code of a C/C++ application. LEOPARD works in two steps, function binning and function ranking, and returns a list of potentially vulnerable functions for vulnerability assessment.
In the first step (§ II-B), we use complexity metrics to group all functions in the target application into a set of bins. The complexity metrics capture the complexity of a function in two dimensions: the function itself (i.e., cyclomatic complexity) and the loop structures in the function (e.g., the number of nested loops). Each bin has a different level of complexity, which is designed to identify vulnerabilities at all levels of complexity (i.e., to avoid missing vulnerable functions with low complexity).
In the second step (§ II-C), we use vulnerability metrics to rank the functions in each bin in order to identify the top functions in each bin as potentially vulnerable. The vulnerability metrics capture the vulnerable characteristics of a function in three dimensions: the dependency of the function (e.g., the number of parameters), the pointer usage in the function (e.g., the number of pointer arithmetic operations) and the dependency of control structures in the function (e.g., the number of nested control structures). By incorporating such metrics, we have a high potential of characterizing and identifying vulnerable functions.
LEOPARD is designed to support and facilitate confirmative vulnerability assessments, e.g., to guide security experts during automatic fuzzing [14,50,66,67] or manual auditing by providing a list of potentially vulnerable functions and the corresponding metric information. With such knowledge, security experts can prioritize the assessment order, choose the appropriate analysis technique, and analyze the root cause. Further, based on application-specific domain knowledge (e.g., vulnerability history and heavily fuzzed function lists), security experts can further rank or filter the potentially vulnerable functions to focus on the more interesting ones.
Using program metrics in a simple binning-and-ranking way makes LEOPARD satisfy our design principles of being generic and lightweight. It is applicable to large-scale applications of any type and does not require prior knowledge about known vulnerabilities. The two sets of metrics are comprehensive, but are also extensible with new metrics as we gather more usage feedback from security experts (see the discussion in § V). Thus, LEOPARD also satisfies our design principle of being extensible, such that it can be further enhanced.
B. Function Binning
Different vulnerabilities often have different levels of complexity. To identify vulnerabilities at all levels of complexity, in the first step we categorize all functions in the target application into a set of bins based on complexity metrics. As a result, each bin represents a different level of complexity. Afterwards, the second step (§ II-C) plays the prediction role via ranking. Such a binning-and-ranking approach is designed to avoid missing low-complexity vulnerable functions. Complexity Metrics. By "complexity", we refer to the approximate number of paths in a function, and derive the complexity metrics of a function from its structural complexity. A function often has loop and control structures, which are the main sources of structural complexity. Cyclomatic complexity [39] is a widely used complexity metric, but it does not reflect loop structures. Based on this understanding, we characterize the complexity of a function with respect to these two complementary dimensions, as shown in Table I.
Function metric (C1) captures the standard cyclomatic complexity [39] of a function, i.e., the number of linearly independent paths through a function. A higher value of C1 means that the function is likely more difficult to analyze or test.
Loop structure metrics (C2-C4) reflect the complexity resulting from loops, which can drastically increase the number of paths in a function. These metrics include the number of loops, the number of nested loops, and the maximum nesting level of loops. Loops are challenging in program analysis [68] and hinder vulnerability analysis. Basically, the higher these metrics, the more (and possibly longer) paths need to be considered and the more difficult the function is to analyze. Binning Strategy. Given the values of these complexity metrics for the functions in the target application, we compute a complexity score for each function by adding up all the complexity metric values, and then group functions with the same score into the same bin. We do not use a range-based binning strategy (i.e., grouping functions whose scores fall into the same range into the same bin), as it is hard to determine a suitable granularity for the ranges. Such a simple strategy not only makes our framework lightweight, but also works well, as evidenced by our experimental study in § IV-C.
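To make the binning step concrete, the following is a minimal Python sketch of the strategy just described. The input format (a per-function dict of the four complexity metrics C1-C4) and all names are illustrative assumptions, not LEOPARD's actual data structures.

```python
from collections import defaultdict

def complexity_score(metrics):
    # Sum the four complexity metrics (C1-C4) of one function.
    return sum(metrics[m] for m in ("C1", "C2", "C3", "C4"))

def bin_functions(functions):
    """Group functions by exact complexity score (no score ranges).

    `functions` maps a function name to its complexity-metric dict,
    e.g. {"parse_header": {"C1": 12, "C2": 2, "C3": 1, "C4": 1}, ...}
    """
    bins = defaultdict(list)
    for name, metrics in functions.items():
        bins[complexity_score(metrics)].append(name)
    return bins  # {score: [names of functions with that exact score]}
```

Grouping by the exact score, rather than by score ranges, sidesteps the question of range granularity raised above.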
C. Function Ranking
Different from the structural complexity metrics, in the second step we derive a new set of vulnerability metrics according to the characteristics of the general causes of vulnerabilities, and then rank the functions and identify the top ones in each bin as potentially vulnerable based on these metrics. Existing metric-based techniques [44,45] rarely employ any vulnerability-oriented metrics, and make no differentiation between complexity metrics and vulnerability metrics. Vulnerability Metrics. Many vulnerabilities arise from improper operations on sensitive data [61] and/or missing checks on some sensitive variables [74] (e.g., pointers). Resulting vulnerabilities include but are not limited to memory errors, access control errors (e.g., missing checks on user permission) and information leakage. Actually, the root causes of many denial-of-service and code-execution vulnerabilities can also be traced back to these causes. The above-mentioned types account for more than 70% of all vulnerabilities [11]. Hence, it is possible to define a set of vulnerability metrics that are compatible with the major vulnerability types. Here we do not favor any specific type of vulnerability, e.g., by including metrics such as the number of division operations, which is closely related to divide-by-zero; the exploration of type-specific metrics is worthy of future investigation. Whether their complexity scores are high or low, the vulnerable functions we focus on mainly involve complicated and compact computations, which are independent of the number of paths in the function. Based on these observations, we introduce the vulnerability metrics of a function w.r.t. three dimensions, as summarized in Table II.
Dependency metrics (V1-V2) characterize the dependency relationship of a function with other functions, i.e., the number of parameter variables of the function and the number of variables prepared by the function as parameters of function calls. The more a function depends on other functions, the more difficult it is to track the interactions.
Pointer metrics (V3-V5) capture the manipulation of pointers, i.e., the number of pointer arithmetic operations, the number of variables used in pointer arithmetic, and the maximum number of pointer arithmetic operations a single variable is involved in. Member access operations (e.g., ptr->m), dereference operations (e.g., *ptr), incrementing pointers (e.g., ptr++), and decrementing pointers (e.g., ptr--) are all pointer arithmetic operations. The number of pointer arithmetic operations can be obtained from the Abstract Syntax Tree (AST) of the function via simple counting. These operations are closely related to sensitive memory manipulations, which can increase the risk of memory management errors.
Alongside, we count how many unique variables are used in pointer arithmetic operations. The more variables get involved, the more challenging it is for programmers to make correct decisions. For these variables, we also examine how many pointer arithmetic operations they are involved in and record the maximum value, since frequent operations on the same pointer are particularly error-prone. In a word, the higher these metrics, the higher the chance of complicated memory management problems, and thus the higher the chance of dereferencing null or out-of-bounds pointers.
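In LEOPARD itself these counts come from the function's AST (extracted with Joern; see § IV). Purely for illustration, here is a crude regex-based approximation over raw C source; it is a sketch under strong simplifying assumptions (e.g., the `*` pattern also matches multiplication) and is not how the framework computes V3-V5.

```python
import re
from collections import Counter

# Toy patterns for the pointer-arithmetic forms named above: member
# access (p->m), dereference (*p), increment (p++) and decrement (p--).
PATTERNS = [
    r"(\w+)\s*->\s*\w+",
    r"\*\s*(\w+)",
    r"(\w+)\s*\+\+",
    r"(\w+)\s*--",
]

def pointer_metrics(c_source):
    """Approximate V3 (total ops), V4 (unique variables involved) and
    V5 (max ops a single variable is involved in)."""
    per_var = Counter()
    for pattern in PATTERNS:
        for var in re.findall(pattern, c_source):
            per_var[var] += 1
    v3 = sum(per_var.values())
    v4 = len(per_var)
    v5 = max(per_var.values(), default=0)
    return v3, v4, v5
```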
Control structure metrics (V6-V11) capture vulnerability due to highly coupled and dependent control structures (such as if and while), i.e., the number of nested control structure pairs, the maximum nesting level of control structures, the maximum number of control structures that are control- or data-dependent, the number of if structures without an explicit else statement, and the number of variables involved in data-dependent control structures. We explain these metrics with an example (Fig. 2) calculating the Fibonacci series. There are two pairs of nested control structures: the if at Line 7 with the if at Line 8, and the if at Line 7 with the for at Line 12. Obviously, the maximum nesting level is two, with the outer structure being the if at Line 7. The maximum number of control-dependent control structures is three, including the if at Line 7 and Line 8, and the for at Line 12. The maximum number of data-dependent control structures is four, since the conditions in all four control structures check the variable n. All three if statements are without else. There are two variables, n and i, involved in the predicates of control structures. Actually, the more variables used in the predicates, the more likely errors occur in sanity checks. The higher these metrics, the harder it is for programmers to follow the code, and the more difficult it is to reach the deeper parts of the function during vulnerability hunting. Stand-alone if structures are suspicious for missing checks on the implicit else branches.
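Fig. 2 itself is not reproduced in this text. Below is a hedged Python rendering of the example it describes (the paper's figure shows C code); the comments mark the control structures at the line numbers cited above, so the counts in the preceding paragraph can be verified against it.

```python
def fibonacci(n):
    """Print the first n Fibonacci numbers (illustrative reconstruction)."""
    a, b = 0, 1
    if n < 0:               # stand-alone if checking n, no else
        return
    if n >= 1:              # "if at Line 7": outer structure, no else
        if n == 1:          # "if at Line 8": nested inside Line 7, no else
            print(a)
            return
        for i in range(n):  # "for at Line 12": nested inside Line 7, bound by n
            print(a)
            a, b = b, a + b
```

As described: two nested pairs (Line 7 with Line 8, Line 7 with the for loop), a maximum nesting level of two, three control-dependent structures, four structures whose conditions involve n, three if statements without else, and two predicate variables (n and i).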
There usually exists a proportional relation between complexity and vulnerability metrics, because the more complex the (independent path and loop) structures of a function, the higher the chance that variables, pointers and coupled control structures are involved. The complexity metrics approximate the number of paths in the function, which is neutral to the vulnerable characteristics. Importantly, the set of control structure metrics used as vulnerability indicators describes a different aspect of properties than the complexity metrics. First, whether control structures are nested or dependent, or whether if statements are followed by else, is independent of cyclomatic complexity. Second, intensively coupled control structures are good evidence of vulnerability. Instead of directly ranking functions with complexity and/or vulnerability metrics, we propose a binning-and-ranking approach to avoid missing less complicated but vulnerable functions, as will be evidenced in § IV-B. Ranking Strategy. Based on the values of these metrics, we compute a vulnerability score for each function by adding up all the metric values, rank the functions in each bin according to the scores, and cumulatively identify the top functions with the highest scores in each bin as potentially vulnerable. During the selection, we identify the top k functions from each bin, where k is initially 1 and increases by 1 in each selection iteration. Notice that we may take more than k functions, as we treat functions with the same score equally. This selection stops when an appropriate portion (i.e., p) of functions has been selected. Here p can be set by users. Similar to the binning strategy, we adopt a simple ranking strategy to make our framework both lightweight and effective.
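A minimal sketch of this ranking loop, continuing the illustrative assumptions of the binning sketch above (per-function sums of the vulnerability metrics V1-V11); the tie handling and the stopping portion p follow the text.

```python
def select_candidates(bins, vuln_scores, p, total):
    """Cumulatively take the top-k of each bin (k = 1, 2, ...) until
    about a portion p (0 < p <= 1) of all `total` functions is selected.

    `bins`: {complexity score: [function names]};
    `vuln_scores`: {function name: vulnerability score (sum of V1-V11)}.
    """
    selected, k = set(), 0
    while len(selected) < p * total:
        k += 1
        for funcs in bins.values():
            ranked = sorted(funcs, key=lambda f: vuln_scores[f], reverse=True)
            if len(ranked) <= k:
                selected.update(ranked)
                continue
            cutoff = vuln_scores[ranked[k - 1]]
            # Functions tied with the k-th score are taken together.
            selected.update(f for f in ranked if vuln_scores[f] >= cutoff)
    return selected
```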
III. APPLICATIONS OF LEOPARD
LEOPARD is not designed to directly pinpoint vulnerabilities but to assist confirmative vulnerability assessment. LEOPARD outputs a list of potentially vulnerable functions with complexity and vulnerability metric scores, which can provide useful insight for further vulnerability hunting. In this section, we demonstrate three different ways to apply the results generated by LEOPARD for finding vulnerabilities. With LEOPARD, we found 22 new bugs in five widely used real-world programs. The detailed experimental results are presented in § IV-F. Manual Auditing. In general, with the help of LEOPARD, manual auditing (e.g., code review) can be greatly improved w.r.t. effectiveness and efficiency. Instead of auditing all the functions [22], security experts can focus on only those potentially vulnerable functions identified by LEOPARD.
Furthermore, the vulnerability metrics produced by LEOPARD may help security experts quickly identify the root cause of vulnerabilities with their domain knowledge, especially for complicated large functions. For example, if a vulnerable function has a large number of instances of if-without-else, security experts could pay attention to the logic of the missing else branches to see if there are potential missing checks; and if a vulnerable function has a large number of pointers, security experts could focus on the memory allocation and deallocation operations to see if there are potential dangling pointers. Although these metrics cannot directly pinpoint the root cause, they can provide explicit hints on the possible root cause. Target Identification for Directed Fuzzing. Fuzzing has been shown to be an effective testing technique for finding vulnerabilities. Specifically, greybox fuzzers (e.g., AFL [4] and its variants [13,14]) have gained popularity and been proven practical for finding vulnerabilities in real-world applications.
Current greybox fuzzers aim to cover as many program states as possible within a given time budget. However, higher coverage does not necessarily imply finding more vulnerabilities, because such fuzzers blindly explore all possible program states without focusing effort on the more vulnerable functions. Recently, directed greybox fuzzers (e.g., AFLGo [13] and Hawkeye [20]) have been proposed to guide the fuzzing execution towards a predefined vulnerable function (a.k.a. target site), either to reproduce a vulnerability or to check whether a patched function is still vulnerable [13].
Since LEOPARD produces a list of potentially vulnerable functions, a straightforward application with directed greybox fuzzers is to set these functions as target sites. In this way, we can quickly confirm whether a potentially vulnerable function is really vulnerable or a false positive by directing the fuzzer to concentrate on the function. Note that although the fuzzer can reach a vulnerable function, the vulnerability hidden in the function may not always be triggered. Still, directed fuzzing has been shown to be an effective technique for reproducing vulnerabilities [13]. To demonstrate the idea, we utilize a directed fuzzing tool, Hawkeye [20], which is built upon an extensible fuzzing framework, FOT [19], and reported to outperform AFLGo [13]. However, due to the large number of potentially vulnerable functions generated by LEOPARD, it is ineffective to set all of them as target sites, as this may confuse the fuzzer about where to guide execution. To this end, we choose to separate the target application into smaller modules based on its architecture design or simply its namespaces, and then let Hawkeye fuzz with the targets grouped by module separately. Seed Prioritization for Fuzzing. Greybox fuzzers often keep interesting test inputs (i.e., seeds) for further fuzzing. These seeds need to be continuously evaluated to decide which of them should be prioritized. By default, most fuzzers (e.g., AFL) prefer seeds with a smaller file size, a shorter execution time or more edge (basic-block transition) coverage, which are not vulnerability-aware decisions.
Since LEOPARD assigns each function a vulnerability score and a complexity score, we can use these scores to help evaluate which seed should be prioritized, such that the fuzzer can find more vulnerabilities in the given time budget. For this purpose, we extended FOT by enabling it to accept external function-level scores for seed prioritization. The detailed seed evaluation process is as follows. First, we calculate a priority score for each function based on the binning-and-ranking strategy. For a function F within the top k, its priority score is calculated as score(F) = 100 × (1 − (N_1 + N_2 + ... + N_k)/N), where N_i is the number of functions with rank i and N is the total number of all functions. For example, if the top-1 functions contribute a portion of 20% to the total number of all functions, then these functions are assigned a score of 80 (100 − 20). Then, the function-score mapping is provided to FOT. After executing a test input (i.e., seed), the fuzzer obtains an execution trace consisting of functions, and accumulates the priority scores of the functions on the execution trace to form the priority score of that trace. As a result, each seed is associated with a trace priority score representing how likely it is to exercise vulnerable code. When the fuzzer chooses the next seed to fuzz, it selects the one with the highest trace priority score.
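A small sketch of this computation, consistent with the worked example above; the data layout (`by_rank` mapping each selection rank to the functions first taken at that rank) is our illustrative assumption.

```python
def priority_scores(by_rank, total):
    """score(F) = 100 * (1 - (N_1 + ... + N_k) / N) for a rank-k function,
    e.g. top-1 functions covering 20% of all N functions score 80."""
    scores, covered = {}, 0
    for rank in sorted(by_rank):
        covered += len(by_rank[rank])
        for fn in by_rank[rank]:
            scores[fn] = 100.0 * (1.0 - covered / total)
    return scores

def trace_priority(trace, scores):
    """Accumulate function scores over one execution trace; the seed
    whose trace scores highest is fuzzed next."""
    return sum(scores.get(fn, 0.0) for fn in trace)
```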
IV. EVALUATION
LEOPARD is implemented in 11K lines of Python code. Specifically, we used Joern [71] to extract the values of complexity and vulnerability metrics, given the source code of an application. More details of the implementation and evaluation are available at our website [6].
A. Evaluation Setup

Target Applications. We used 11 real-world open-source projects that represent a diverse set of applications. BIND is the most widely used Domain Name System (DNS) software.
Binutils is a collection of binary tools. FFmpeg is the leading multimedia framework. FreeType is a library to render fonts. Libav is a library for handling multimedia data, originally forked from FFmpeg. LibTIFF is a library for reading and writing Tagged Image File Format (TIFF) files. libxslt is the XSLT C library for the GNOME project. Linux is a monolithic Unix-like operating system kernel. OpenSSL is a robust, full-featured toolkit for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. SQLite is a relational database management system. Wireshark is a network traffic analyzer for Unix and Unix-like operating systems. The details of each target application are reported in Table III. The first column gives the project version, the second column reports the source lines of code, and the third column lists the total number of functions in each project. The last three columns report the number of vulnerable functions, the number of CVEs (Common Vulnerabilities and Exposures), and the number of CVEs excluded from our research, collected as ground truth (see below). Here, we chose recent versions of the projects that had a large number of CVEs. The number of functions ranges from 666 for libxslt to 488,960 for Linux, which is diverse enough to show the generality of our framework. In total, 26,886K lines of code and 600,825 functions are studied, which makes our study large-scale and its results reliable. Ground Truth. To obtain the ground truth for evaluating the effectiveness of LEOPARD, we first manually identified the list of vulnerabilities disclosed before July 2018 in the 11 projects from two vulnerability database websites, CVE Details [11] and the National Vulnerability Database [7]; i.e., we collected all the vulnerabilities reported for the given version of each project from its release date to July 2018. CVEs in external libraries used by a project are not attributed to the project.
The full list of CVEs in most projects is recorded by the above two websites. However, the patches of the CVEs are not well maintained and are difficult to collect. We obtained available patches of these CVEs in the 11 projects from an industrial collaborator, who offers vulnerability scanning services for C/C++ programs. Functions that are patched to fix a vulnerability are identified as vulnerable. The results are reported in the fourth and fifth columns of Table III. As an example, we display the CVE list, available patches and corresponding patched functions of Libav at our website [6]. Some CVEs failed to be included in our research, as shown in the last column of Table III.

B. Relation between Complexity and Vulnerability Metrics (Q1)

To answer this question, we first computed the complexity score and vulnerability score, as in § II-B and § II-C, for each function in all the projects (as shown in Table III). Then we plotted in Fig. 3 the relationship between complexity score (x-axis) and vulnerability score (y-axis) on a logarithmic scale, where vulnerable and non-vulnerable functions are highlighted in red and blue, respectively. The result of BIND is omitted for space limitations but is available on our website [6].
We can see from Fig. 3 that all projects share similar patterns: vulnerable functions are scattered across non-vulnerable functions w.r.t. complexity score and vulnerability score, and there exists an approximately proportional relation between complexity score and vulnerability score for vulnerable functions. Therefore, if we directly ranked the functions based on complexity metrics and/or vulnerability metrics, we would always favor functions with high complexity and vulnerability scores, and miss low-complexity but vulnerable ones (e.g., the vulnerable functions located in the first 3 bins in Fig. 3a, 3g and 3j). Instead, by first binning the functions according to complexity score and then ranking the functions in each bin according to vulnerability score, our framework can effectively identify potentially vulnerable functions at all levels of complexity (see details in § IV-C). For all 11 projects, the number of bins ranges from 56 to 206 with an average of 114. Each bin has 301 functions on average, and 22% of bins contain vulnerable functions. Details of the function distribution among bins can be found at our website [6]. As can be seen from Fig. 3, bins with smaller complexity scores have more functions, and bins with larger complexity scores have more vulnerable functions. The sparsity of bins with larger complexity scores benefits the selection of the most vulnerable functions, while our ranking in bins with smaller complexity scores gives more chance to identify less complex but vulnerable functions. Moreover, Fig. 3 also visually indicates the severe imbalance between non-vulnerable and vulnerable functions (see the third and fourth columns of Table III), which suggests that traditional machine learning will over-fit and be less effective (more details are discussed in § IV-C).
Our binning-and-ranking approach is reasonable for predicting vulnerable functions at all levels of complexity.
C. Effectiveness of Binning-and-Ranking (Q2)
We ran LEOPARD on all the projects and analyzed its effectiveness when selecting different portions of functions, i.e., the parameter p in the ranking step (see § II-C). Here we used the percentage of functions (i.e., Iden. Func.) that are identified by LEOPARD as potentially vulnerable, and the percentage of vulnerable functions (i.e., Cov. Vul. Func.) that are covered by those identified functions, as the two indicators of the effectiveness of our framework. These two indicators are used throughout the evaluation section. The results are shown in Fig. 4, where the x-axis denotes Iden. Func. and the y-axis denotes Cov. Vul. Func. The legends are only shown in Fig. 4a and omitted in the others for clarity; the result of BIND is omitted but available on the website [6]. In general, as Iden. Func. increases, Cov. Vul. Func. also increases. For a small value (e.g., 20%) of Iden. Func., our binning-and-ranking approach can achieve a high value of Cov. Vul. Func. (e.g., 74%). Furthermore, we also report in Table IV how many vulnerable functions are covered when we identify a certain percentage of functions as vulnerable. When identifying 5%, 10%, 15%, 20%, 25% and 30% of functions as vulnerable, we cover 29%, 49%, 64%, 74%, 78% and 85% of vulnerable functions, respectively. This means that by identifying a small part of functions as vulnerable, we cover a large portion of vulnerable functions, which can narrow down the assessment space for security experts. Comparison to Baseline Approaches. A recent study [80] on 42 existing cross-project defect prediction models and two state-of-the-art unsupervised defect prediction models [46,78] has indicated that simply ranking functions based on source lines of code (SLOC) in increasing (i.e., ManualUp) or decreasing (i.e., ManualDown) order can achieve comparable or even superior prediction performance compared to most defect prediction models. We put the results of ManualUp (which is much worse than LEOPARD) at our website [6], and only show results of ManualDown in this section.
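As an aside before the comparisons, the two indicators defined above reduce to simple set arithmetic; the following minimal sketch (variable names are ours) makes them precise.

```python
def indicators(selected, all_functions, vulnerable):
    """Return (Iden. Func., Cov. Vul. Func.) as percentages.

    `selected`: functions identified as potentially vulnerable;
    `vulnerable`: ground-truth vulnerable functions (patched in CVE fixes).
    """
    iden = 100.0 * len(selected) / len(all_functions)
    cov = 100.0 * len(selected & vulnerable) / len(vulnerable)
    return iden, cov
```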
In Fig. 4, the comparison of Cov. Vul. Func. between LEOPARD and ManualDown is shown for each project; LEOPARD outperforms ManualDown on most projects. We also conducted experiments to compare our framework with four machine learning-based techniques, namely random forest (RF), gradient boosting (GB), naive Bayes (NB) and support vector classification (SVC). The four techniques used all 4 complexity metrics and 11 vulnerability metrics as features, and conducted cross-project prediction by first training a model with the data from ten of the 11 projects and then using the model to predict the probability of being vulnerable for the functions in the remaining project. By rotating the project to predict, we obtained prediction results for all 11 projects. A larger predicted probability indicates that a function is more likely vulnerable. We rank the functions according to the probabilities, and identify a list of high-probability functions as vulnerable. A fair comparison to LEOPARD can be drawn when the same number of functions is identified. The results are shown in Fig. 4 and Table V. As shown in Fig. 4, an obvious shortcoming of RF and GB is their unstable performance across projects. This indicates that machine learning approaches highly depend on a large knowledge base of various vulnerable functions, which is hard to obtain. Specifically, RF only shows similar or slightly better performance than LEOPARD in Fig. 4a and 4b, while GB only shows similar performance in Fig. 4a, 4b and 4i. LEOPARD outperforms RF and GB in Fig. 4c, 4d, 4e, 4f, 4g, 4h and 4j. Both RF and GB perform even worse than the ManualDown baseline in Fig. 4c, 4h and 4j. As numerically shown in Table V, when identifying 20% of functions, RF and GB cover 15.2% and 13.1% less of the ground truth than LEOPARD, respectively. Again, LEOPARD does not rely on any prior knowledge about a large set of vulnerabilities, but machine learning-based techniques do. NB and SVC presented much lower recalls than the other two algorithms; hence, we omitted their results and put them at our website [6]. Note that 11 projects may not be an adequate dataset for training and testing, especially given the severe imbalance between vulnerable and non-vulnerable functions, which may threaten the validity of the conclusions drawn. However, such a prerequisite for prior knowledge of vulnerable functions motivates our design of LEOPARD.
Comparison to Static Scanners. We also applied two popular static software scanners to investigate their vulnerability prediction capability on our dataset: an open-source tool, Cppcheck [10], and a commercial tool. To avoid legal disputes, we hide the name of the commercial one and refer to it as S***. Cppcheck and S*** are among the most popular static code analysis tools used to detect bugs and vulnerabilities in software. Both tools report suspicious vulnerable statements. Whenever an alarm is located within the vulnerable functions in our ground truth, we claim a true positive for that tool. The number of total alarms reported by these two tools and the recall can be found in Table VI. Cppcheck was able to analyze all 11 projects and identified a few vulnerable functions in Binutils, FreeType and Wireshark. S*** failed to analyze Linux; and for the other 10 projects, only a few vulnerable functions were detected, in LibTIFF. Static scanners often rely on very concrete vulnerability patterns, and subtle pattern mismatches cause false positives and negatives. Thus, their recalls are nearly 0, which indicates that they are not promising for general vulnerability identification. False Negative Analysis. By examining the vulnerable functions that LEOPARD fails to cover when 40% of functions are identified, we summarize three main reasons for false negatives: 1) they are involved in logical vulnerabilities which are hard to reveal via function metrics; 2) they are implicated via fixes indirectly related to the CVE, e.g., if a fix changes a function's signature, callers of this function should not be counted as vulnerable; or 3) security-critical information is in their surrounding context and unseen from the function itself, e.g., the calculation of complicated pointer offsets is sometimes done in a separate function, where no pointer metrics can be inferred, resulting in a lower vulnerability score. For the first case, such vulnerabilities are generally hard to identify via static analysis, and should not be a concern of our approach. Case two is also irrelevant to the validity of our approach. A mitigation for the third case is to include taint information in our vulnerability metrics, as discussed in § V. False Positive Analysis. Balancing generality, accuracy and scalability is always a very challenging task for static analysis.
Since LEOPARD is designed to reveal general vulnerabilities, it is impossible to avoid false positives. However, LEOPARD aims to assist vulnerability assessment rather than serve as a stand-alone static analysis tool; the false positive rate is therefore not a critical criterion for evaluating its capability. Furthermore, some vulnerabilities were previously patched in history, secretly patched [70] or are currently unexposed, and it is impossible to confirm whether they are indeed false positives. This is also reflected in the experiments in § IV-F, where new vulnerabilities have been found among the reported potentially vulnerable functions.
Our binning-and-ranking approach is effective: identifying 20% of functions as vulnerable covers 74.0% of vulnerable functions on average. Such a small portion of functions can be very useful for security experts, as will be shown in our application of LEOPARD in § IV-F. Besides, LEOPARD outperforms machine learning-based techniques and static analysis-based approaches.
D. Sensitivity of the Metrics (Q3)
To evaluate the sensitivity of the complexity and vulnerability metrics, we removed one dimension of the complexity or vulnerability metrics at a time from LEOPARD, and then ran LEOPARD on all the projects. We show the sensitivity results of complexity metrics and vulnerability metrics in Fig. 5. The x-axis and y-axis represent Iden. Func. and the delta of recall (i.e., Cov. Vul. Func.) compared to LEOPARD with all metrics. After removing one dimension of metrics, the recall deltas of each project when identifying a certain percentage of functions are labeled by blue cross marks, where positive deltas mean improvement in performance and negative ones mean degradation. The red dots are the average recall deltas among all 11 projects.
We can see from Fig. 5 that, basically, there is much more degradation than improvement when removing any dimension of metrics. Moreover, the average recall deltas across projects are negative for Iden. Func. at 15%, 20%, 25% and 30% in all five experiments, i.e., fewer vulnerable functions are covered when the same percentage of functions is identified as vulnerable. Some improvement of the average recall delta at 5% and 10% actually results from relatively large improvements in only a few projects. Specifically, the most significant degradation occurs when the cyclomatic complexity metric (i.e., C1) is removed, and the most significant average degradation occurs when the loop structure metrics are removed, which indicates that they make a substantial contribution to our framework. It also proves the necessity of our binning strategy. With the above observations, we can conclude that all dimensions of our complexity and vulnerability metrics contribute to the effectiveness of LEOPARD, but complexity metrics contribute the most; and it is difficult or even impossible to derive an optimal model for the metric combination that works well for all ranges of Iden. Func. across all projects. Hence, we design a generic but not optimal model that treats each metric equally.
Complexity metrics contribute significantly to LEOPARD, and it is difficult to derive an optimal metric model that works for all projects, which motivates our generic model without sacrificing much effectiveness.
E. Scalability of Our Framework (Q4)
To evaluate the scalability of our framework, we collected the time for extracting complexity and vulnerability metrics and the time for identifying potentially vulnerable functions with LEOPARD. The detailed results are reported at our website [6]. The time used to build the code property graph and query the graph to obtain metric values depends on the number of functions in each project. For small-scale projects, it takes 2 and 45 minutes, respectively, to build and query the graph; for large-scale projects (i.e., Wireshark and Linux), it takes hours. It takes less than 50 seconds to identify 100% of functions even for Linux. These results demonstrate that our framework scales well to large projects like Linux. For machine learning-based techniques, GB on average takes 9 minutes to train the model and make the prediction for each project, and RF takes 5 minutes; considering that they also depend on the metric calculation, LEOPARD is more efficient. S*** basically takes several minutes to finish the static analysis but requires the project to be fully compiled and built, and fails to handle Linux. The lightweight static scanner Cppcheck shows comparable performance to LEOPARD.
Our framework scales well and can be applied to large-scale applications like Linux.
F. Application of LEOPARD (Q5)

Manual Auditing. Code review is a popular approach for vulnerability hunting. In this section, we demonstrate the role LEOPARD plays in helping security experts hunt vulnerabilities with a case study of FFmpeg 3.1.3. In order not to overwhelm the security expert, we showed the top 1% of candidates from LEOPARD, a list of 128 functions with detailed complexity and vulnerability metric scores, as well as the specific variables involved in the metrics, e.g., the variables involved in control predicates. The security expert is experienced with code review and is familiar with the basic implementation and code structure of FFmpeg. He first grouped the functions into different modules and chose libavformat as the target, the module responsible for streaming protocols and conversion, which has been prone to vulnerabilities in history. Among all 128 functions, 13 are in libavformat. He spent one day finding a divide-by-zero bug in one of the functions, with CVE-2018-14394 assigned. Intuitively, he considers the maximum number of data-dependent control structures (with the variables involved) the most interesting metric, as it guides him to trace the data flow of these sensitive variables backward and/or forward. A detailed discussion of the aforementioned case can be found at our website [6]. Directed Fuzzing. As discussed in § III, LEOPARD can supply targets for directed fuzzing. Experimentally, we ran LEOPARD on PHP 5.6.30 (a popular general-purpose scripting language that is especially suited to web development) and identified around 500 functions as potentially vulnerable. Notice that PHP is used by more than 80% of all websites, and 5.6.30 is the current stable version; thus PHP is well tested by its users, developers and security researchers, and it is difficult to find vulnerabilities in it. We selected the top 500 functions reported by LEOPARD as the target sites for Hawkeye for bug hunting. We divided PHP into several modules based on its architecture and focused on the functions in the modules (e.g., mbstring and Zend) that are related to the file system and network data, as they are often reachable through entry points. We excluded the functions in well-fuzzed modules (e.g., SQLite, phar and gd). This manual filtering process is different from manual auditing, as the security expert does not pinpoint the vulnerability directly. After 6 hours of fuzzing, we discovered six vulnerabilities in PHP 5.6.30, with details shown in Table VII. Seed Prioritization. In § III, we also discussed applying the results of LEOPARD to the seed evaluation process during fuzzing. We used LEOPARD to generate function-level scores for three real-world open-source projects and utilized the scores to guide FOT [19]. The three projects are mjs [1] (a JavaScript engine for embedded systems), xed [2] (the disassembler used in Intel Pin) and radare2 [3] (a popular open-source reverse engineering framework). For this experiment, we ran FOT with and without the guidance from LEOPARD for 24 hours and collected the detected crashes. Table VIII shows the detailed performance differences of FOT with and without LEOPARD. From the results, LEOPARD helps FOT detect 127% more crashes in 24 hours on average. Finally, seven new bugs were found in mjs, seven new bugs in xed, and a new vulnerability (CVE-2018-14017) was exposed in radare2.
These results show that LEOPARD can substantially enhance vulnerability finding within a limited time budget, which is the original purpose of designing LEOPARD.
V. METRICS EXTENSION

The set of complexity and vulnerability metrics can be refined and extended to highlight interesting functions by capturing different perspectives. To this end, we have identified the following information as vital to further improve our findings. Taint Information. Leveraging taint information will help an analyst identify the functions that process external (i.e., tainted) input. In general, functions that process or propagate taint information can be considered quite interesting for further assessment. Hence, incorporating taint information into the vulnerability metrics will further enhance LEOPARD's ranking step by assigning more weight (or importance) to the functions that process or propagate tainted data. Vulnerability History. In general, when a vulnerability is reported, the functions related to the vulnerability go through an intensive security assessment during the patching process. Hence, such information can be used to refine the ranking by either: (1) giving more importance to recently patched functions due to their verified reachability and the considerable risk of an incomplete patch or newly introduced issues, or (2) giving low priority to functions that were patched long before the release of the current version, assuming that those functions have gone through a thorough security assessment and it is not worth the effort to re-assess them.
Domain Knowledge. Domain knowledge can play a vital role in prioritizing the interesting functions for further assessment. Information such as the modules that are currently fuzzed by others or the knowledge about the modules that are shared by two or more projects can be used to refine LEOPARD's ranking.
VI. RELATED WORK

Here we discuss the work most closely related to ours, which aims to assist security experts during vulnerability assessment. Pattern-Based Approaches. Pattern-based approaches use patterns of known vulnerabilities to identify potentially vulnerable code. Initially, code scanners (e.g., Flawfinder [5], PScan [8], RATS [9] and ITS4 [64]) were proposed to match vulnerability patterns. These scanners are efficient and practical, but fail to identify complex vulnerabilities, as the patterns are often coarse-grained and straightforward. Differently, our approach does not require any patterns or prior knowledge of vulnerabilities.
Since then, security researchers have started to leverage more advanced static analysis techniques for pattern-based vulnerability identification (e.g., [18,29,34,37,59,63,71,72,74]). These approaches require the existence of known vulnerabilities or security knowledge as the guideline to formulate patterns. As a result, they can only identify similar but not new vulnerable code. Differently, we do not require any pattern inputs or prior knowledge of vulnerabilities, and can find new types of vulnerabilities.
Besides, several attempts have been made to automatically infer vulnerability patterns (e.g., [41,62,73]). While promising, these approaches only support specific types of vulnerabilities, e.g., missing-check vulnerabilities for [62] and taint-style vulnerabilities for [41,73]. Our approach, in contrast, can find new types of vulnerabilities. Metric-Based Approaches. Inspired by bug prediction [16,28,30,38,49], a number of advances have been made in applying machine learning to predict vulnerable code, mostly at the granularity of a source file. In particular, researchers started by leveraging complexity metrics [21,44,45,55,56] to predict vulnerable files, and then attempted to combine complexity metrics with further metrics such as code churn metrics and token frequency metrics [26,31,43,47,48,52,54,57,58,65,79,81]. Later, advances were made in using unsupervised machine learning to predict bugs [25,32,36,46,75,76,77,78,80] with a similar set of complexity metrics. These approaches use similar metrics to those in bug prediction, but do not capture the difference between vulnerable code and buggy code, which hinders their effectiveness. Moreover, the imbalance between vulnerable and non-vulnerable code is severe, which hinders the applicability of machine learning to vulnerable code identification. Instead, our approach specifically derives a set of vulnerability metrics to help identify vulnerable functions. Vulnerability-Specific Static Analysis. Researchers have attempted to detect specific types of vulnerabilities via static analysis, e.g., buffer overflows [24,82], format string vulnerabilities [24,53], SQL injections [23,33,69], cross-site scripting [23,33,35] and client-side validation vulnerabilities [51]. While effective at detecting specific types of vulnerabilities, these techniques often fail to be applicable to a wider range of vulnerability types, and they often require heavyweight program analysis. Differently, our approach is designed to be generic and lightweight.
VII. CONCLUSIONS
We have proposed and implemented a generic, lightweight and extensible framework, named LEOPARD, to identify potentially vulnerable code at the function level through two sets of systematically derived program metrics. Experimental results on 11 real-world projects have demonstrated the effectiveness, scalability and applications of LEOPARD. | 2019-01-31T17:09:15.000Z | 2019-01-31T00:00:00.000 | {
"year": 2019,
"sha1": "b9c86aebba3b0542dafe51db7398ae5a9bfa1c5b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1901.11479",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2aa72a65f65502e17498f070c5821c8caa5c0cc2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
91263250 | pes2o/s2orc | v3-fos-license | Allelopathic potential of selected woody species growing on fly-ash deposits
The objective of this study was to determine the allelopathic potential of Robinia pseudoacacia L., Ailanthus altissima (Mill.) Swingle and Amorpha fruticosa L. that grow on the fly-ash deposits at the "Nikola Tesla – A" thermoelectric power plant in Obrenovac. The chemical characteristics of fly ash, such as pH, electrical conductivity (EC), content of carbon (C) and nitrogen (N), contents of available phosphorus (P2O5) and potassium (K2O), the contents of total and available Fe, Cu, Mn, Ni and Zn, as well as of phenolic acids (3,5-dihydroxybenzoic acid (3,5-DHBA) and ferulic acid) and flavonoids (rutin and quercetin), were analyzed in control fly ash (bare zones without vegetation cover) and plant rhizospheric fly ash. In order to determine the allelopathic activity of phenolic compounds in fly ash, modified soil sandwich allelopathic biotests were performed, and Trifolium pratense L. (red clover) was used as the indicator species. A. fruticosa showed the highest allelopathic activity, followed by A. altissima, whereas R. pseudoacacia showed the lowest allelopathic potential. A negative correlation was noted between radicle and hypocotyl growth inhibition of red clover and the pH of fly ash. Positive correlations were found between radicle growth inhibition and the content of C, P2O5, total concentrations of Cu, available concentrations of Mn and Ni, and the contents of ferulic acid, 3,5-DHBA and rutin. Our results indicate that A. fruticosa and A. altissima increased the content of phenolics in fly ash, which can act as allelochemicals leading to radicle growth inhibition of red clover in the pioneer plant community on fly-ash deposits. These woody species that colonized fly-ash deposits can initiate pedogenetic processes, altering ecosystem processes at degraded sites.
INTRODUCTION
Allelopathy comprises both inhibitory and stimulatory interactions between plants through the action of secondary plant metabolites, the allelochemicals [1,2]. Allelochemicals are often investigated as compounds that potentially allow the invasion of plant species into new habitats, due to the lack of adaptive potential of native species to new allelochemicals originating from introduced species [3]. One of the most studied groups of allelochemicals are phenolic compounds, which in plant tissue can be found in soluble form or bound to cell wall polysaccharides [4,5].
Phenolic compounds are delivered from the plant organism into the soil by leaching from the surface of the plant and fallen leaves, decomposition of litter, and active excretion from roots [6]. Most phenolic compounds are soluble in water, rinsed from the surface of the plant body and transferred to the deeper parts of the soil by rainwater [7]. In soil, phenolic compounds represent the second most widespread group of compounds (after cellulose) and occur in three different forms: free, reversibly bound and bound [8]. However, the concentration of phenolic compounds in soil is much lower than in plant organisms [9]. These compounds have a short lifetime and are susceptible to degradation by microorganisms, so their detection and identification in soil is much more difficult than in plant tissues [10].
Through allelopathic activity, phenolic compounds have significant effects on the structure and composition of plant communities [11,12]. Phenolic compounds may have direct harmful effects when released by donor species, or they can be degraded or transformed by soil microorganisms. Furthermore, these compounds can affect the physical, chemical and biological characteristics of soil, or can induce the release of biologically active substances in third plant species [13]. Phenolic compounds in the soil can lead to root and hypocotyl growth inhibition, limitation of water and mineral uptake, and inhibition of photosynthesis and enzymatic activity in acceptor plants [5,14,15]. Blum et al. [16] indicated that the focus of allelopathic research should shift to the soil system, since the phytotoxic activity of allelochemicals is a function of the characteristics of both the allelochemicals and the soil [17]. Additionally, adsorption and desorption of allelochemicals in soil is a dynamic process influenced by various physicochemical factors, such as soil moisture, pH and organic matter content [18,19]. Furthermore, phenolic compounds represent an important link in the dynamics of mineral and organic compounds in the soil due to their effect on the chemical properties of the soil, the availability of certain metals and the microorganism community [5,8,15,20,21].
The process of plant colonization of fly-ash deposits (revegetation) is very slow due to the unfavorable conditions of the substrate [22]. Plants growing on fly-ash deposits are exposed to a combination of several stress factors, such as high temperatures, increased irradiation, drought and elevated concentrations of some metal(loid)s [23][24][25]. Woody plants, which have large cover and great biomass, play a prominent role in the process of fly-ash revegetation. Organic matter originating from these plants is of great importance in the process of soil humus formation; humus represents a mixture of various organic compounds, among which phenolic compounds are very important.
In the majority of studies, the effect of phenolic compounds on vegetation is emphasized, while their effects on substrate characteristics are not fully addressed [26]. In this study, we hypothesized that the woody plant species Robinia pseudoacacia L., Ailanthus altissima (Mill.) Swingle and Amorpha fruticosa L. show great allelopathic potential due to the production of high amounts of phenolics that are released from these plants into fly ash. It is also assumed that these allelochemicals contribute to the increased availability of certain metals, which can additionally lead to inhibition of seedling (radicle and hypocotyl) growth of the indicator plant species Trifolium pratense L. (red clover). Therefore, the objectives of this study were (i) to compare the chemical characteristics of control and rhizospheric fly ash; (ii) to analyze the total and available metal contents (Fe, Cu, Mn, Ni, Zn) in control and rhizospheric fly ash; (iii) to evaluate the contents of phenolic compounds in control and rhizospheric fly ash; and (iv) to determine the allelopathic potential of the selected woody species through screening of seedling (radicle and hypocotyl) growth inhibition of red clover as an indicator plant species.
Study area
The study was performed on the fly-ash deposit at the "Nikola Tesla – A" thermoelectric power plant (TENT-A) in Obrenovac (N 44º30', E 19º58'), which is located on the right bank of the Sava River, 40 km upstream from Belgrade (Serbia) (Supplementary Fig. S1). The climate is semi-continental, with local mean annual rainfall and temperature of 647 mm and 11ºC, respectively. As the largest producer of electricity in Serbia, southeastern Europe and the Balkans, the six blocks of TENT-A, with a total power of 1820 MW, annually produce about 8 billion kWh of electricity. Annually, TENT-A burns 12-14 Mt of lignite, which is supplied by the Kolubara-Tamnava surface pits. The chemical analysis of electrofilter ash at TENT-A showed that it consists of SiO2 (54.21%), Al2O3 (24.98%), Fe2O3 (6.13%), CaO (5.89%), MgO (3.15%), K2O (1.12%), Na2O (0.29%), TiO2 (0.69%), P2O5 (0.07%) and SO3 (0.96%) (data obtained from the Vinča Institute for Nuclear Sciences, Belgrade, Serbia).
To date, more than 66×10⁶ t of fly ash has been disposed of at the landfill of TENT-A, occupying 400 ha of agricultural land of the fluvisol type. Fly ash is hydraulically transported in a suspension with water at ratios of 1:10 or 1:20. The disposal of fly ash is carried out in three lagoons: one is active (L2), and the other two are in a temporary mode for the technical consolidation of fly ash and drainage (L1 and L3) (Fig. S1C). To reduce the negative impact of fly ash on the environment, TENT-A has carried out biological recultivation, i.e., the sowing of legumes (Medicago sativa L., Lotus corniculatus L., Vicia villosa Roth., Trifolium pratense L.) and grasses (Secale cereale L., Lolium multiflorum Lam., Festuca rubra L., Dactylis glomerata L.), as well as the planting of trees (Robinia pseudoacacia L., Ailanthus altissima (Mill.) Swingle) and shrubs (Tamarix sp.). The established herbaceous and woody vegetation cover on the fly-ash deposits promotes natural succession and revegetation over time. In the flat part of L1, some spontaneous herbaceous plant species, shrubs and trees were noted: Calamagrostis epigejos L., Oenothera biennis L., Sorghum halepense (L.) Pers., Erigeron canadensis L. and Amorpha fruticosa L.
Sampling of fly ash
The field research on the fly-ash deposits of L 1 was carried out during August 2016. Fly ash collected at a depth of 0-30 cm in bare zones without vegetation cover represented the control fly ash (C FA), whereas fly ash taken from the root zones of R. pseudoacacia, A. altissima and A. fruticosa was designated plant rhizospheric fly ash (RP FA, AA FA and AF FA, respectively). Fly-ash samples were collected within 5-10 m of the embankment of L 1. The collected fly ash was packed into plastic bags and brought to the laboratory for analysis. After the removal of visible plant remains, samples were dried at room temperature (25°C) and sifted through a sieve (0.5 mm mesh). For the chemical, elemental and biochemical analyses, five composite samples of fly ash were used (n=5).
Chemical analysis of fly ash
Fly-ash pH was measured in water (pH H2O) and in 0.1 M KCl solution (pH KCl) with a pH meter (PHT-026 multifunction meter). Electrical conductivity (EC [dS/m]) in the extract (fly ash:water = 1:5) was measured with a conductometer (PHT-026 multifunction meter). Organic carbon (C) was measured by the method of Tyurin [27], and the total nitrogen content (N) was determined by Kjeldahl digestion [28]. The C/N ratio was calculated. Available forms of phosphorus (P2O5) and potassium (K2O) were analyzed using the standard AL-method [29].
Elemental analysis of fly ash
Total concentrations of chemical elements (Cu, Fe, Mn, Ni, Zn) in fly ash were determined according to the modified method 3051A (EPA SW-846 test methods) [30] as follows: 2-3 g of fly-ash sample were oven-dried for 1 h at 105°C. The sample was dissolved in a 25 mL mixture of HNO3 and HClO4 for 12 h at 40°C. Concentrations of available chemical elements in the fly ash were determined according to Zemberyova et al. [31]. Extraction was performed with 0.05 mol/dm3 EDTA at pH 7. The dried fly-ash sample (2-3 g) was added to 25 mL of 0.05 M EDTA and mixed on a magnetic stirrer for 1 h at room temperature (20±4°C). A flame atomic absorption spectrometer (FAAS, Perkin Elmer 3300) was used to determine the concentrations of the chemical elements. Standard solutions were used for the preparation of calibration curves. The concentration range of the test elements in the standard solutions was 0.5-2.0 mg/dm3 for Cu, Zn and Ni, and 1.0-5.0 mg/dm3 for Mn and Fe. The measured element contents in fly ash are expressed in µg/g dry matter.
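As a brief illustration of the calibration step, the absorbance readings of the standard solutions can be fitted with a straight line and the fit inverted to convert a sample absorbance into a concentration. The sketch below is our own minimal example; the absorbance values are hypothetical and the function name is ours, not part of the original method or instrument software.

```python
# Minimal FAAS calibration sketch (our illustration; absorbance values
# below are hypothetical, not measured data from this study).
import numpy as np

# Cu standards (mg/dm3) and their hypothetical absorbance readings.
std_conc = np.array([0.5, 1.0, 1.5, 2.0])
std_abs = np.array([0.021, 0.043, 0.062, 0.085])

# Linear calibration: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def concentration(absorbance: float) -> float:
    """Invert the calibration line to get concentration in mg/dm3."""
    return (absorbance - intercept) / slope

print(round(concentration(0.050), 3))  # sample reading -> 1.185 mg/dm3
```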
Determination of phenolic acids and flavonoids by HPLC
Phenolic acids and flavonoids were extracted by dissolving 10 g of dry fly ash in 30 mL of pure methanol (99.8%) in an ultrasonic bath for 15 min; the mixture was then left to dissolve for another 24 h. Subsequently, samples were centrifuged for 20 min at 10,000×g, and the supernatants were filtered through 0.2-μm cellulose filters (Agilent Technologies, Santa Clara, CA, USA) and stored at 4°C until use.
The methanolic extracts were analyzed on an HPLC system (Shimadzu, Kyoto, Japan) consisting of a DGU-20A3 degasser, LC-20AT analytical pumps, a 7125 injector, an SPD-M20A diode array detector and a CBM-20A system controller. Separation was achieved on a Luna C18 column (250×4.6 mm I.D., 5 µm; Phenomenex, Torrance, CA, USA) at 30°C with a flow rate of 1.0 mL/min. The injection volume was 20 µL. The chromatographic data were processed using LC Solution software (Shimadzu). Gradient elution was used (5% B for 0-5 min; gradient from 5% to 60% B during 5-30 min; 60% B held for 5 min; then ramped from 60% to 90% B over 2-3 min and equilibrated for a further 5 min; mobile phases: A, water acidified with formic acid, pH=3; B, acetonitrile). The identity of compounds was determined by comparing the retention times and absorption maxima of peaks with those of pure standards (Sigma) at 290 and 245 nm. Two phenolic acids were used as phenolic acid standards: ferulic acid and 3,5-dihydroxybenzoic acid (3,5-DHBA); for the identification of flavonoids, two standards were used: rutin and quercetin. The concentrations of phenolic acids and flavonoids are expressed in μg/g of extract.
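For readers who want to adapt the elution profile, the gradient program above can be summarized as a schedule of (time, %B) breakpoints. The sketch below is only an illustrative encoding of the program as described; the names (GRADIENT, percent_b), the exact ramp and re-equilibration endpoints, and the linear-interpolation assumption between breakpoints are ours, not part of the original method.

```python
# Illustrative encoding of the HPLC gradient program described above.
# Linear ramps between breakpoints are assumed (a common convention,
# but an assumption here, not stated in the original method).
GRADIENT = [  # (time in min, % mobile phase B)
    (0, 5),    # 5% B held for 0-5 min
    (5, 5),
    (30, 60),  # linear ramp 5-60% B over 5-30 min
    (35, 60),  # 60% B held for 5 min
    (38, 90),  # ramp to 90% B over ~3 min
    (43, 90),  # assumed re-equilibration window
]

def percent_b(t: float) -> float:
    """Linearly interpolate %B at time t (min) from the breakpoints."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(percent_b(17.5))  # midway through the main ramp -> 32.5
```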
Growth inhibition test
The allelopathic activity of the selected plant species grown on fly ash was assessed by a modified "soil sandwich method" [32]. In the experiment, 5 mL of agar (0.5%) cooled to 42°C was added to each well of a multi-dish plate (6 wells per plate) containing 3 g of dried fly ash. After solidification, 3.2 mL of agar (0.5%) was added on top of the fly ash-agar layer. After 1 h, 5 seeds of T. pratense L. were placed on the gelled agar culture medium in each well (30 seeds per multi-dish plate). Control plates contained only agar medium (without fly ash). The multi-dishes were incubated at 25°C in the dark. After 7 days, the lengths of the radicle and hypocotyl were measured and the percentage of growth inhibition was calculated relative to the control. The bioassays were done in 5 replicates (30 seeds per replicate, n=150).
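The paper does not spell out the inhibition formula; the conventional definition, which we assume was used here, compares mean seedling lengths in the treatment to the agar-only control. A minimal sketch (our illustration, with hypothetical lengths):

```python
def growth_inhibition(control_mean: float, treatment_mean: float) -> float:
    """Percent growth inhibition relative to the agar-only control.

    Conventional definition (assumed here; the paper does not give the
    formula explicitly). Negative values would indicate stimulation.
    """
    return 100.0 * (control_mean - treatment_mean) / control_mean

# e.g. control radicle length 20 mm, radicle on AF FA 12 mm -> 40% inhibition
print(growth_inhibition(20.0, 12.0))  # 40.0
```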
Statistical analyses
Statistical analyses included determination of the mean (M) and standard deviation (SD) for each of the analyzed parameters. Differences between groups in terms of the chemical properties of fly ash, total and available concentrations of chemical elements, contents of phenolic acids and flavonoids in fly ash, as well as the inhibition of radicle and hypocotyl growth of the indicator species, were determined by multivariate analysis of variance (MANOVA) and Scheffé's post-hoc test. Pearson's correlation coefficients (r) between the allelopathic activity of the selected plant species growing on fly ash and the chemical characteristics, total and available concentrations of elements and contents of phenolics in fly ash were determined. Statistical analysis was performed using the software package Statistica 10.0.
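The same kinds of comparisons can be reproduced with open-source tools. The sketch below is our illustration, not the authors' Statistica pipeline: it runs a one-way ANOVA across the four fly-ash groups and a Pearson correlation on small hypothetical measurement arrays (MANOVA and Scheffé's post-hoc test would extend this).

```python
# Minimal re-implementation sketch of the statistics described above
# (our illustration; the data arrays below are hypothetical).
import numpy as np
from scipy import stats

# Hypothetical pH readings for five composite samples per group.
c_fa = np.array([7.1, 7.0, 7.2, 7.1, 7.0])
rp_fa = np.array([8.1, 8.0, 8.2, 8.1, 8.0])
aa_fa = np.array([6.4, 6.3, 6.3, 6.4, 6.3])
af_fa = np.array([6.3, 6.3, 6.4, 6.3, 6.3])

# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(c_fa, rp_fa, aa_fa, af_fa)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation, e.g. ferulic acid content vs. radicle inhibition.
ferulic = np.array([0.1, 0.7, 0.7, 4.1])       # hypothetical, ug/g
inhibition = np.array([10.0, 5.0, 25.0, 40.0])  # hypothetical, %
r, p = stats.pearsonr(ferulic, inhibition)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```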
Chemical properties of fly ash
The chemical properties of the control (C FA) and rhizospheric fly ash (RP FA, AA FA and AF FA) are shown in Table 1. The results showed that pH (H2O) and pH (KCl) in the RP FA had higher values than in the C FA (p<0.001), AA FA (p<0.001) and AF FA (p<0.001). EC values were higher in the AF FA compared to the C FA (p<0.001), RP FA (p<0.001) and AA FA (p<0.001). In comparison to the C FA, the results showed statistically significantly higher values of C, N and C/N in the RP FA (p<0.01), AA FA (p<0.01) and AF FA (p<0.01), except for the values of N in the RP FA and the C/N values in the AA FA. The highest values of C and N were detected in the AA FA. Also, statistically significantly higher values (p<0.001) of both P2O5 and K2O were detected in all rhizospheric fly ash in comparison to the control fly ash. The highest values of P2O5 were detected in the AA FA, while the highest values of K2O were detected in the RP FA.
Total and available elements in fly ash
Total and available concentrations of Cu, Fe, Mn, Ni and Zn in the C FA, RP FA, AA FA and AF FA are shown in Table 1. The results showed statistically significantly lower concentrations of Cu, Fe and Ni in the C FA compared to all plant rhizospheric fly ash (p<0.001), except for Fe and Ni in the AA FA. However, statistically significantly higher concentrations of both Mn and Zn were found in the C FA (p<0.001) compared to all plant rhizospheric fly ash, except for Zn in the AF FA. In the control fly ash and all rhizospheric fly ash, total metal concentrations decreased in the following order: Fe>Mn>Ni>Cu>Zn.
The available Cu, Fe and Ni concentrations in the C FA were lower in comparison to all plant rhizospheric fly ash (p<0.001), except for Fe in the AA FA and Ni in the RP FA. However, the available concentrations of Mn and Zn in the C FA were higher than in all plant rhizospheric fly ash (p<0.001). The contents of available elements in the C FA, RP FA and AF FA showed the same decreasing order: Fe>Mn>Cu>Ni>Zn, whereas the contents of available elements in the AA FA decreased in a slightly different order: Fe>Mn>Ni>Cu>Zn. Furthermore, the percentage of available Fe, Mn and Ni in fly ash relative to the total element content was highest in the AA FA (0.42%, 4.07% and 4.39%, respectively). The percentage of available Zn relative to the total metal content was highest in the C FA (11.31%), whereas the highest percentage of available Cu was detected in the AF FA (19.6%).
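The "percentage of available concentration" quoted above is simply the EDTA-extractable fraction expressed relative to the total content. A one-line sketch (our illustration; the input values are hypothetical, not the study's measurements) makes the computation explicit:

```python
def available_fraction(available_ug_g: float, total_ug_g: float) -> float:
    """EDTA-available element content as a percentage of the total content."""
    return 100.0 * available_ug_g / total_ug_g

# e.g. a hypothetical 7.9 ug/g available Zn out of a 70 ug/g total -> ~11.3%
print(round(available_fraction(7.9, 70.0), 2))  # 11.29
```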
Phenolic content in fly ash
The contents of phenolic acids and flavonoids in the C FA, RP FA, AA FA and AF FA are presented in Table 2. The content of 3,5-DHBA in the C FA was higher, whereas the content of ferulic acid was lower, compared to the RP FA and AA FA (p<0.001). The results of this study showed the highest contents of these phenolic acids in the AA FA.
The rutin content in C FA was lower in relation to the RP FA and AF FA (p<0.001), while the quercetin content in C FA was lower in comparison to the AA FA (p<0.001).
The highest content of rutin was noted in the AF FA, whereas the highest content of quercetin was found in the AA FA. The phenolic acid and flavonoid contents in the C FA showed the following decreasing order: quercetin>3,5-DHBA>rutin>ferulic acid. In the RP FA and AA FA, the order of phenolic contents was as follows: quercetin>ferulic acid>rutin>3,5-DHBA. However, the contents of phenolic acids and flavonoids in the AF FA showed a different decreasing order: quercetin>3,5-DHBA>ferulic acid>rutin. In this study, the relationships between the chemical properties and the phenolic contents of fly ash are presented in Table 3. Positive correlations were noted between EC, the P2O5 content, the total and available concentrations of Cu, the total concentrations of Fe and Ni, and 3,5-DHBA, ferulic acid and rutin (p<0.05, p<0.01, p<0.001); between the C content and ferulic acid (p<0.001); and between the N content, the available concentrations of Fe and Ni, and quercetin (p<0.01). However, negative correlations were found between the C and K2O contents, the total concentrations of Fe, the available concentrations of Ni, and quercetin in fly ash (p<0.05, p<0.01, p<0.001).
Allelopathic effects of plants growing on fly-ash deposits
Higher inhibition of red clover radicle growth was noted in the AF FA in comparison to the C FA and RP FA (p<0.01) (Fig. 1). Higher inhibition of red clover radicle growth was also found in the AA FA versus the RP FA (p<0.001). The inhibition of radicle elongation decreased in the following order: AF FA>AA FA>C FA>RP FA. Hypocotyl growth inhibition in red clover was significantly lower in the rhizospheric fly ash of all plant species compared to the C FA (p<0.01, p<0.001), as well as in the RP FA compared to the AA FA (p<0.01) (Fig. 2). The inhibition of red clover hypocotyl growth decreased in the following order: C FA>AA FA>AF FA>RP FA.
DISCUSSION
The phytotoxic activity of allelochemicals in soils is a function of the interaction between the allelochemicals released from plants and soil characteristics [17]. Allelochemicals primarily affect soil pH, aggregation and aeration, the contents of N and C, plant-water relations, nutrient ion uptake, the degree of decomposition, and the composition and activity of the soil microbiocenosis [5]. Phenolic compounds have an effect on the accumulation and availability of soil nutrients, which in turn have a great influence on plant growth [17]. However, phenolics can also play an important role in inhibiting nitrification, which affects a plant's nutrient status [1]. Recent studies are more focused on the environmental impact of allelochemicals on ecosystems than on plant-plant interactions [11], indicating a great influence of allelochemicals on the turnover of inorganic and organic compounds in soil [33].
In this study, the pH values of fly ash ranged from 6.32 to 8.1, which is similar to results obtained by other authors (8.85 [34]; 7.95 [25]). According to Whitehead et al. [35], soil pH can affect the concentration of phenolic compounds in the soil solution. Our research showed that as the pH of fly ash increases, the inhibition of radicle and hypocotyl growth of red clover becomes less pronounced. A high pH value (8.1) and a low nitrogen content (0.13%) were noted in the rhizospheric fly ash of R. pseudoacacia (RP FA), although this is a nitrogen-fixing species. The cause of this phenomenon may be the detrimental effect of high pH on the microorganisms involved in nitrogen fixation [36]. Generally, microorganisms can reduce the allelochemical activity of phenolics in the rhizosphere and reduce the inhibition of plant growth through the rapid mineralization of phenolic compounds [15]. The phytotoxic activity of active substances released from donor plants can also be limited through sorption to organic matter, which is greatest under neutral to slightly alkaline conditions [21]. However, the high inhibition of red clover radicle and hypocotyl growth could be attributed to the lower pH values of the AA FA and AF FA (6.34 and 6.32, respectively), which can modify the nutrient status and element availability in fly ash [21].
In the present study, the C content in plant rhizospheric fly ash (RP FA, AA FA and AF FA) was greater than in the control fly ash (C FA) (2.41, 3.12 and 3.07 times, respectively), which might be due to its origin from humus, which contains humic acids, peptides and carbohydrates, in contrast to the control fly ash, where the C is derived from coal [37]. Our findings show that the phenolics in the AA FA and AF FA had a more negative impact on red clover radicle growth with increasing C content in fly ash than the phenolics in the RP FA and C FA. The C pool in soils depends on its input through leaf and root litter and root exudates and on the decomposition of organic matter by microorganisms [38]. The high inhibitory effect of A. altissima and A. fruticosa on red clover radicle growth could be related to the high content of phenolics in fly ash, which, being C-rich compounds, are released from the plants to the fly ash and contribute to its C stock. The positive correlation between ferulic acid and the carbon content in fly ash (r = +0.585) points to their high contents in the AF FA (4.11 µg/g, 4.43%) and AA FA (0.7 µg/g, 4.5%). Hence, it can be assumed that some phenolics from these plants were less susceptible to microbial decomposition and had a high allelopathic effect.
The N content in the AA FA and AF FA was higher than in the C FA (3.27 and 1.90 times, respectively) and in the RP FA (2.77 and 1.61 times, respectively). The high N content in rhizospheric fly ash reveals the ability of these plant species to increase the amount of nitrate in the substrate, where the net nitrification rate in N-deficient substrates, such as fly ash, depends on the quality of the litter [39]. Furthermore, in N-limited soils, a high content of phenolics inhibits N mineralization, leads to the release of dissolved organic N from leaf litter and reduces overall ecosystem N loss [40]. In our study, the highest contents of quercetin (60 μg/g) and N (0.36%) in the AA FA and the positive correlation between the quercetin and N contents in fly ash (r = +0.945) suggest that the high polyphenol production of this plant species may confer an advantage through increased plant uptake of organic N [40].
The content of available P was significantly higher in plant rhizospheric fly ash (RP FA, AA FA and AF FA) than in the C FA (1.21, 1.51 and 2.69 times, respectively). Herr et al. [41] noted that the rhizospheric soil of Solidago gigantea had a lower pH value and a higher content of available P than surrounding soils without this species, which is similar to our observations for A. altissima and A. fruticosa. Our results showed significant inhibition of red clover radicle growth in response to the high available P content in fly ash. According to Kafkafi et al. [42], phenolic compounds can compete with P for sorption sites on mineral surfaces and may form complexes with Al and Fe. The high contents of phenolic allelochemicals (3,5-DHBA, ferulic acid and rutin) that are released from plants can increase P availability (r = +0.925, r = +0.990 and r = +0.955, respectively) by desorbing previously bound P, which might be related to our results obtained for A. fruticosa. The low content of P in the RP FA can be related to the inhibition of N fixation [43] as well as to the low content of phenolic acids, which can reduce the solubility and availability of P [5].
The concentrations of Cu, Mn, Ni and Zn in fly ash can be 30 times higher than in coal, making fly ash a major threat to the surrounding environment and human health [44]. In our study, the total concentrations of Ni in all rhizospheric fly-ash samples were in the toxic range (12-34 μg/g [45]), the concentrations of Cu were within the normal range (13-24 μg/g [45]), whereas the concentrations of Mn and Zn were deficient (270-525 μg/g and 45-100 μg/g, respectively [45]). The availability of Cu, Fe, Mn, Ni and Zn in the control and plant rhizospheric fly-ash samples was relatively low. In addition, the concentrations of total and available Mn and Zn in the rhizospheric fly ash of all woody species were lower than in the control fly ash. The higher Cu availability in the AF FA relative to the C FA can be due to its low pH value (6.3), as Cu is more available under slightly acidic conditions [46]. However, red clover radicle growth inhibition is positively correlated with the total concentration of Cu and the available contents of Mn and Ni in fly ash. The content of heavy metals in soil can be crucial for the persistence and activity of allelochemicals [47] and may affect the production and release of secondary metabolites from plant roots, which in turn increase the availability of nutrients or form chelates with toxic metals in the soil [48].
Phenolic compounds can increase the availability of Cu, Fe and Mn by forming organic complexes that enhance the uptake of these elements by plants [5,21]. The high concentrations of 3,5-DHBA, ferulic acid and rutin in fly ash were related to the high total contents of Cu (r = +0.706, r = +0.863 and r = +0.798, respectively), Fe (r = +0.717, r = +0.705 and r = +0.777, respectively) and Ni (r = +0.855, r = +0.818 and r = +0.888, respectively), which coincides mostly with the results obtained for A. fruticosa. The available content of Cu increased and the available content of Fe decreased with higher contents of 3,5-DHBA (r = +0.791 and r = -0.768, respectively), ferulic acid (r = +0.872 and r = -0.616, respectively) and rutin (r = +0.878 and r = -0.733, respectively) in fly ash, mostly in A. fruticosa, whereas the highest content of quercetin in fly ash coincided with the highest available content of Ni (r = +0.755) in the AA FA. The inhibition of red clover hypocotyl growth decreased with increasing total concentrations of Cu, Fe, Mn and Ni and with decreasing available concentrations of Mn and Zn in fly ash. Since there were no significant correlations between red clover hypocotyl growth inhibition and the phenolic compounds in fly ash, it can be assumed that these phenolics do not exert an allelopathic effect on red clover hypocotyl growth.
The results of this study showed that the contents of 3,5-DHBA, ferulic acid and rutin were greatest in the AF FA, where they exerted the strongest inhibitory potential on red clover radicle growth. The cause of this phenomenon may be the increased allelopathic activity of several different phenolic compounds in combination [49]. Thus, the additive and synergistic effects of several different phenolic compounds become significant and decisive in the allelopathic inhibition of various ecophysiological processes in the acceptor plant [5,8]. Csizar [50] showed that A. fruticosa had a stronger allelopathic potential than A. altissima and R. pseudoacacia, which is in line with our results. Generally, different allelopathic compounds have been found in R. pseudoacacia and A. altissima [51,52], but data on the allelopathic activity of these woody species grown on fly-ash deposits are still lacking.
In this study, the presence of phenolic compounds in all plant rhizospheric fly-ash samples indicates that R. pseudoacacia, A. altissima and A. fruticosa can initiate pedogenetic processes in a sterile substrate such as fly ash. Phenolic compounds, as the most prevalent allelochemicals in plants, enter the soil by leaching from the surface of the plant body, decomposition of litter and active secretion from roots [6,53]. Free phenolics are very important in the interactions between donor and acceptor plants because they are the first to become available to the acceptor plant when they reach the litter and the soil [23]. Therefore, the detection of allelochemicals in the rhizosphere of the donor species is very important for allelopathy [54]. Further studies will determine the seasonal dynamics of phenolic compounds in the rhizospheric fly ash and leaf litter of the investigated plant species, the contents of these allelochemicals in roots and aboveground plant parts, as well as the effects of these compounds on the germination and seedling growth of other herbaceous plant species that grow on fly-ash deposits.
CONCLUSIONS
The allelopathic activity of phenolic compounds in rhizospheric fly ash is strongly influenced by pH, the contents of C and available P, as well as the total content of Cu and the available contents of Mn and Ni in fly ash. In addition, the unfavorable conditions that prevail on fly-ash deposits promote the high production of phenolic acids and flavonoids in woody plant species such as A. fruticosa, A. altissima and R. pseudoacacia. These phenolic compounds act as allelochemicals in fly ash and can lead to the inhibition of growth of T. pratense, an understory herbaceous species present in the fly-ash plant community. The woody species that have colonized the fly-ash deposits may act as promoters of pedogenetic processes on fly ash, altering ecosystem processes in anthropogenically degraded sites.
Table 4. Pearson's correlation coefficients (r) between red clover radicle and hypocotyl growth inhibition and chemical parameters, total and available concentrations of elements and phenolic contents in fly ash.
"year": 2019,
"sha1": "69c4407e0d3688115452951ffde257c38cd0f2b0",
"oa_license": "CCBY",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-46641800050G",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "69c4407e0d3688115452951ffde257c38cd0f2b0",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Antisense Therapy in Neurology
Antisense therapy is an approach to fighting diseases using short DNA-like molecules called antisense oligonucleotides. Recently, antisense therapy has emerged as an exciting and promising strategy for the treatment of various neurodegenerative and neuromuscular disorders. Previous and ongoing pre-clinical and clinical trials have provided encouraging early results. Spinal muscular atrophy (SMA), Huntington's disease (HD), amyotrophic lateral sclerosis (ALS), Duchenne muscular dystrophy (DMD), Fukuyama congenital muscular dystrophy (FCMD), dysferlinopathy (including limb-girdle muscular dystrophy 2B (LGMD2B), Miyoshi myopathy (MM), and distal myopathy with anterior tibial onset (DMAT)), and myotonic dystrophy (DM) are all reported to be promising targets for antisense therapy. This paper focuses on the current progress of antisense therapies in neurology.
Introduction
Antisense oligonucleotides (AOs) are short, synthetic nucleic acid sequences that selectively hybridize to target sequences in messenger RNA (mRNA). AOs can cause inhibition or redirection of splicing and inhibition of protein synthesis through various mechanisms, including disruption of the cell's splicing machinery, interference with the ribosomal complex, and/or activation of RNase H1-mediated degradation of the oligo-RNA heteroduplex [1]. Antisense therapy is an approach to fighting diseases using DNA-like molecules (AOs). After initial observations of antisense-mediated RNA regulation in nature, investigations using model systems to test the feasibility of using synthetic AOs to reduce levels of specific mRNA transcripts quickly followed. Early experiments showed that AOs were effective in reducing target transcripts and protein synthesis [2]. However, despite promising early results, the use of AOs in disease therapy has been stymied by technical challenges and progress has been slow. Despite more than 20 years of research and clinical investigations, the United States Food and Drug Administration (FDA) has only ever approved two marketable AO drugs: Vitravene (Isis Pharmaceuticals, Carlsbad, CA, USA), for the treatment of cytomegalovirus retinitis in immunocompromised Acquired Immune Deficiency Syndrome (AIDS) patients with human immunodeficiency virus (HIV) infection [3], and, recently, Kynamro® (Isis Pharmaceuticals, Carlsbad, CA, USA), for the treatment of familial hypercholesterolemia. Although approved in 1998, Vitravene was removed from the market in 2004. Notwithstanding its slow progress, antisense remains a widely popular area of research in molecular biology, and with recent advancements in oligo chemistries and promising results from recent clinical trials it may well be that the day of AOs in the clinical arena in neurology is close at hand.
Challenges
Although promising, antisense therapy has made slow headway in the clinical realm.
To better appreciate the current status of AO drug therapies, it is important to consider the hurdles that AOs have had to overcome. The first of these hurdles is drug delivery. First-generation AOs do not easily cross the lipid bilayer of the cell, making intracellular potency via systemic delivery problematic, since these AOs cannot readily reach their intracellular targets at concentrations high enough to be effective [4-7]. In the case of certain neurodegenerative diseases, such as Huntington's disease and Alzheimer's disease, the limited permeability of the blood-brain barrier further compounds the difficulty of effective drug administration to target cells of the central nervous system (CNS) [8]. Another problem associated with first-generation AOs is off-target toxic effects [9]. DNA and RNA can be immunostimulatory, binding to and activating toll-like receptors or other receptors involved in innate immunity in a sequence- and chemistry-dependent manner [10]. Other biological barriers include uptake and sequestration of AOs in the reticuloendothelial system and intracellular sequestration in oligo-protein complexes and phagolysosomes [11]. Furthermore, to achieve biochemical efficacy, a large proportion of RNA targets must be hybridized and silenced; this number varies widely, but can exceed 90 percent [12]. To overcome these challenges, AOs have been designed such that the ribose backbone (normally present in RNA and DNA) is replaced with other chemistries. These constructs are so distinct from classical nucleic acid structures that they are not readily targeted by nucleases or DNA/RNA-binding proteins. These modifications result in increased stability and help prevent most off-target toxic effects. The various chemistries and modifications of AOs will be discussed in depth in the next section. Regarding delivery to CNS tissues, studies have shown the feasibility of AO-mediated RNA silencing after AO administration into cerebrospinal fluid (CSF) via the cerebral ventricles or intrathecal injection [13,14]. Drug administration into CSF via the cerebral ventricles is a common medical practice in humans [15]. Studies involving administration of AOs into the cerebral ventricles have shown significant oligonucleotide concentrations not only in the brain and brainstem but also at many levels of the spinal cord in rats and nonhuman primates, providing evidence of delivery efficacy and sidestepping the hurdle of the blood-brain barrier [16].
Comparative AO Chemistries
To avoid nuclease degradation, facilitate stronger base-pairing with target mRNA sequences, increase stability, and enable easier delivery into the cell, a variety of AO chemistries have been developed (Figure 1). One of the most widely used oligo chemistries is the 2'-O-methyl phosphorothioate-modified (2'OMePS) antisense oligo. These oligos contain a 2'-modification of the ribose ring as well as phosphorothioate linkages throughout their length (Figure 1C). The 2'OMePS AOs exhibit improved stability and increased cellular uptake via conventional delivery reagents. These AOs have also been shown to be very efficient in vivo [17,18]. The safety of this particular AO chemistry has been well characterized through a number of preclinical and clinical trials for several diseases [19-21]. Of note, the Prosensa/GlaxoSmithKline Duchenne muscular dystrophy (DMD) drug development program (Prosensa Therapeutics, Leiden, the Netherlands, and GlaxoSmithKline, London, UK), currently one of the leading bodies in antisense therapy research, employs this particular antisense chemistry [20,21].
Another oligo chemistry that is gaining in popularity is the phosphorodiamidate morpholino oligomer (PMO, morpholino). The PMO chemistry differs from traditional DNA/RNA chemistry in that the nucleic acid bases are bound to morpholine moieties as opposed to deoxyribose/ribose rings, and the phosphodiester backbone is replaced by a phosphorodiamidate linkage [22] (Figure 1D). Like other oligos, the chemical modifications to PMOs render them sufficiently different from conventional nucleic acid chemistries so that they are not recognized by nucleases, making them very stable. Advantages of PMOs include increased binding efficiency to RNA targets and resistance to metabolic degradation. Moreover, PMOs do not activate toll-like receptors, the nuclear factor (NF)-κB-mediated inflammatory response, or the interferon system [23]. Currently, a phase 2 clinical trial involving PMOs for Duchenne muscular dystrophy is being conducted by Sarepta Therapeutics (Cambridge, MA, USA), and a significant improvement in 6-min walking distance (6-min walk test) has already been reported (MDA conference presentation, Washington, April 2013). There are several groups of next-generation antisense compounds that have shown very promising results in animal models. For example, 2'-methoxyethoxy (2'-MOE)-modified oligonucleotides containing lipophilic 2'-O-alkyl substitutions demonstrate high RNA binding affinity and metabolic stability, and can be used as gapmers to catalyze RNase H1-mediated degradation of target nucleic acids [24-26] (Figure 1E). 2'-MOE oligos have been used in vivo to target toxic mRNA triplet repeats in myotonic dystrophy [27]. Vivo-morpholinos (vPMOs) are PMOs conjugated to an octa-guanidine dendrimer (a cell-penetrating moiety) (Figure 1H) and have shown very efficient splicing modulation in studies targeting the FCMD gene, multi-skipping of DMD exons 6 and 8 in dystrophic dogs, and exons 45-55 in mdx52 mice [28-30]. PMOs with peptide conjugates (PPMOs, or PMOs with muscle-targeting peptides; Figure 1F) act similarly to vPMOs and efficiently rescued cardiac as well as skeletal muscles in mdx mice [31-37]. Peptide nucleic acids (PNAs) are another class of antisense oligo in which the phosphodiester-linked deoxyribose/ribose backbone is replaced by peptide-linked repeating N-(2-aminoethyl)-glycine units, to which the nucleobases are attached [38] (Figure 1I). PNAs exhibit greater binding strength than many other AOs and are extremely stable, though their solubility in water is much lower [39,40]. Locked nucleic acid (LNA) AOs contain a 2'-C,4'-C-oxymethylene linkage which "locks" the deoxyribose/ribose sugar structure in an N-type conformation [41] (Figure 1G). LNAs are stable against exonucleolytic degradation, exhibit high thermostability and hybridize strongly with target nucleic acids [42,43]. Several LNA analogs have been developed [42,44]. The characteristics of LNA constructs have made them the oligo of choice for several molecular applications, including microarrays [45], genotyping assays [46-48], and the stabilization of DNA triplex formation in gene silencing [49]. In 1992, Sood et al. first reported an antisense oligo chemistry containing a boronated phosphate backbone (boranophosphate) [50]. Known as boranophosphate oligodeoxynucleosides (BH3−-ODN), these AOs differ from classical DNA/RNA constructs in that they contain a borane group in place of a non-bridging oxygen in the phosphodiester backbone (Figure 1J).
Boranophosphates have been shown to activate RNase H1-mediated RNA cleavage [51]. Furthermore, experiments have demonstrated the highly lipophilic nature of boranophosphates [52], which facilitates their transport across the bilipid membrane to target nucleic acids. This characteristic is likely due to the increased hydrophobicity of BH3 compared with oxygen. Boron-modified dNTPs have also been successfully employed in DNA sequencing assays: by taking advantage of the nuclease-resistant nature of boranophosphates [53,54], researchers are able to sequence the resultant nucleic acid fragments following exonuclease digestion [55]. Oxetane-modified oligonucleotides (Figure 1K) are another form of AO which have proven their feasibility as antisense molecules by exhibiting resistance to nuclease digestion, the ability to activate RNase H1-mediated cleavage of the AO/RNA heteroduplex, tight binding to their target nucleic acid sequences, and efficient silencing of gene expression in vitro [56,57]. Development of more effective and less toxic AOs will be a key to the success of AO therapy.
Antisense Oligo Delivery
The method of delivery of antisense oligonucleotides in neurology is mainly predicated on the nature of the disease. There are two major targets of delivery: tissues of the central nervous system (CNS) and all other non-CNS tissues. In the case of neurodegenerative diseases such as Huntington's disease (HD), amyotrophic lateral sclerosis (ALS), and spinal muscular atrophy (SMA), direct targeting of CNS tissues is often desirable and can be accomplished via intrathecal injection, intracerebroventricular administration, and intraparenchymal delivery to the striatum [58][59][60][61]. This sidesteps the hurdle of the blood-brain barrier and increases the likelihood of oligo uptake to desired CNS tissues. A recently concluded phase I clinical trial involving Isis Pharmaceuticals' antisense drug ISIS 333611 against SOD1 for the treatment of ALS reported no serious adverse effects following intrathecal injection [60].
For the antisense treatment of myopathic diseases, such as Duchenne muscular dystrophy (DMD), systemic administration via subcutaneous or intravenous injection, as well as direct intramuscular injection has been shown to facilitate widespread oligo distribution and effective intracellular uptake [62,63]. In the case of DMD, the preexisting pathology of the muscle tissues further enhances oligo uptake, as the plasma membranes of these muscle cells are unstable and contain small perforations, allowing AOs to more readily penetrate to their intracellular targets [64].
As previously mentioned, the intracellular delivery of AOs is further aided by chemical modifications which allow the oligos to more easily penetrate cell membranes. These modifications come in various forms, such as arginine-rich peptide conjugated morpholinos (PPMOs) or morpholinos linked to octa guanidine dendrimers (vPMOs), but each chemical adduct is designed to aid intracellular uptake.
In some instances, dual targeting of both CNS and non-CNS tissues is favorable, especially in multiorgan diseases. For example, Hua et al. demonstrated that the liver plays an important role in SMA pathogenesis and were able to show a significant increase in survival in severely affected SMA mice following subcutaneous delivery of AOs. The increase in survival following systemic AO administration was more pronounced than after intracerebroventricular administration to CNS tissues alone, and was further increased when both routes of oligo administration were coupled together [59].
Antisense Therapy in Neurology: Overview
In the second half of this article, the use of antisense oligos for Duchenne muscular dystrophy (DMD), Fukuyama congenital muscular dystrophy (FCMD), myotonic dystrophy (DM), spinal muscular atrophy (SMA), dysferlinopathy, amyotrophic lateral sclerosis (ALS), and Huntington's disease (HD) will be covered. Although they are all targeted by antisense therapy, the therapeutic strategies for these disorders are quite different. For example, to target DMD, antisense-mediated exon skipping can remove nonsense mutations or frame-shifting mutations from the mRNA [65-67]. To treat the mutation in the FCMD gene, a cocktail of vivo-morpholino AOs targeting splice enhancer sites and splice silencer sites led to correction of the aberrant splicing pattern in cell and mouse models [29]. RNase H1-mediated degradation of toxic RNA with 2'-MOE antisense oligos for myotonic dystrophy type 1 showed very promising results in the mouse model [68]. A unique "knock-up" approach (exon inclusion) targeting the SMN2 gene with 2'-MOE antisense oligos or PMOs has been used to treat SMA cell and mouse models [69,70]. In the following sections, recent progress of antisense therapy in neurology and the remaining challenges will be discussed.
Exon Skipping Therapy for DMD
DMD is an X-linked recessive form of muscular dystrophy, affecting around one in 3,500 boys worldwide, which leads to muscle degeneration and eventual death [71,72]. DMD is caused by mutations in the gene encoding dystrophin [73]. Recently, exon skipping has been heavily researched for the treatment of DMD [74,75]. Exon skipping employs antisense oligos as "DNA Band-Aids" to skip over the parts of the mutated gene that block the effective production of protein and to restore the reading frame (Figure 2) [76]. In fact, such exon skipping of disease-causing mutations occurs spontaneously in DMD patients and animal models to some extent [77-81]. The efficacy of exon skipping has been tested in several animal models, including dystrophic mdx mice and dystrophic dogs, as well as in human DMD cells [30,35,82-95]. Systemic rescue of animal models with exon skipping has been demonstrated in dystrophic dogs (exons 6 and 8 multi-skipping), mdx mice (exon 23), and mdx52 mice (exon 51 and exons 45-55 multi-skipping) [17,28,82,96]. Currently, systemic clinical trials are being conducted targeting exon 51 in the DMD gene with PMOs and 2'OMePS antisense oligos, and very promising data have already been presented [20,97-100] (Table 1). Possibly, these antisense drugs will be approved by the US Food and Drug Administration (FDA) in the near future. In addition, the first DMD clinical trial targeting exon 53 skipping is scheduled to start in Japan in 2013 (Nippon Shinyaku Co. Ltd. and National Center of Neurology and Psychiatry news release; UMIN-CTR Clinical Trial number UMIN000010964) (Table 1). Remaining challenges include: (1) limited efficacy of AOs, especially in the heart; (2) unknown long-term safety; (3) limited applicability (only approximately 10% of DMD patients can be treated with exon 51 or exon 53 skipping therapy, respectively). Figure 2. Nonsense mutations in the DMD gene can create a novel STOP codon, which results in the loss of the DMD protein. Exon skipping corrects this error when exons (black) that are bound to antisense oligos (green) are spliced out of the pre-mRNA and the resulting exon sequences "fit together", i.e., are in-frame (denoted by the shape of each exon: ends that fit together are in-frame). Out-of-frame mutations caused by the loss of exonic sequences, through deletion or splice site mutations, can also be corrected through exon skipping, which removes exons adjacent to the mutation site so that the remaining exons are in-frame. The result is a truncated yet partly functional protein, as in the case of Becker muscular dystrophy (BMD).
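The in-frame/out-of-frame logic that underlies exon selection can be captured in a few lines: a deletion (plus any additionally skipped exons) leaves the transcript translatable only if the total removed coding length is a multiple of three. The sketch below is our own illustration of that rule; the exon lengths and function names are hypothetical and it is not a tool used in the trials discussed above.

```python
# Illustrative reading-frame check for exon skipping (our sketch; the
# exon lengths below are hypothetical, not actual DMD exon sizes).
def restores_frame(deleted_lengths, skipped_lengths):
    """True if skipping `skipped_lengths` on top of a deletion of
    `deleted_lengths` leaves the reading frame intact, i.e. the total
    removed coding length is divisible by 3."""
    total_removed = sum(deleted_lengths) + sum(skipped_lengths)
    return total_removed % 3 == 0

# A deletion removing 331 coding bases shifts the frame (331 % 3 == 1);
# skipping an adjacent 233-base exon restores it (564 % 3 == 0).
print(restores_frame([331], []))     # False
print(restores_frame([331], [233]))  # True
```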
Splicing Correction Therapy for FCMD
FCMD is an autosomal recessive form of muscular dystrophy mainly described in Japan [101]. The gene responsible for FCMD encodes a novel protein, fukutin [102]. Fukutin is believed to add chains of sugar molecules (glycosylation) to α-dystroglycan, a member of the dystrophin glycoprotein complex [103,104]. Interestingly, most patients (87%) with a mutated FCMD gene bear chromosomes that have a 3-kb retrotransposon insertion, derived from a single ancestral founder, in the 3'-untranslated region (UTR) of the gene [105,106]. The aberrant mRNA splicing induced by exon trapping of the SINE-VNTR-Alu (SVA) retrotransposon is responsible for the pathogenesis of FCMD [29] (Figure 3). The insertion induces splicing errors and cryptic splice site activation, with a new splice donor in exon 10 and a new splice acceptor in the SVA insertion site. This results in aberrant splicing and truncation of exon 10. To rescue the mutated gene, a cocktail of at least three antisense oligos was required [91]. These oligos were targeted against intronic or exonic splicing enhancer sites (called ISEs or ESEs). These splicing enhancers are sites with consensus sequences that bind splicing activator proteins [107,108]. They increase the probability that a nearby site will be used as a splice junction [109]. A cocktail of vPMOs led to normal fukutin mRNA expression and protein production in human patient cells as well as in the mouse model in vivo [29].
Antisense Therapy for DM1
Myotonic dystrophy is the most common adult form of muscular dystrophy and is characterized by myotonia (slow relaxation of the muscles), progressive muscle weakness, and atrophy [110]. DM can also cause dysfunction of heart, eye, and brain tissues, as well as the gastrointestinal and endocrine systems [111,112]. Myotonic dystrophy type 1 (DM1) and myotonic dystrophy type 2 (DM2) are multisystemic microsatellite expansion disorders caused by an expanded CTG tract in the 3' UTR of the dystrophia myotonica-protein kinase gene (DMPK) and an expanded CCTG tract in the first intron of the CCHC-type zinc finger, nucleic acid binding protein gene (CNBP, also known as ZNF9), respectively [113][114][115][116][117][118]. Disease phenotype (including age of onset and severity) is highly correlated with repeat number. In the case of DM1, unaffected individuals tend to have CTG repeats between 5 and 35 while DM1 patients often present with expansions between 50 and >2,000 [119,120]. DM follows an autosomal dominant pattern of inheritance and, although the precise molecular mechanisms are unknown, symptoms are thought to arise owing to the toxic gain-of-function of RNA transcripts containing expanded repeats, which causes the transcripts to be retained and accumulate in the nucleus [121]. Wang et al. have also provided evidence to suggest a possible dominant-negative effect of expansion-containing mutant RNA transcripts [122]. Protein-level gain-of-function is not likely, as the CTG expansion region lies outside of the DMPK coding region in the 3' UTR. Antisense-mediated suppression of DMPK RNA transcripts is, therefore, a promising therapeutic approach [123,124] ( Figure 4). Importantly, there is considerable evidence implicating diminished DMPK transcripts in DM1 pathology, with a consensus among several studies that production and processing of DMPK mRNA is inhibited by expansion-containing mutant transcripts [125][126][127][128][129][130]. In their study utilizing homozygous DMPK-null mice, Reddy et al. showed that these mutants develop a progressive myopathy that is pathologically similar to DM, underscoring the importance of DMPK in maintaining proper skeletal muscle condition [131]. Recent in vitro studies have helped shed light on the therapeutic efficacy of several AO chemistries targeted against the microsatellite expansion of DM1 [132]. In vivo studies using 2'-MOE, LNA, and PPMO chemistries have provided evidence of efficient, long-lasting antisense-mediated knockdown of mutant RNA transcripts, as well as amelioration of physiological and transcriptomic abnormalities in DM1 mouse models [27, 133,134]. Researchers from the University of Rochester and Isis Pharmaceuticals, Inc. have developed efficient methods to treat DM1 in a mouse model with systemically administered 2'-MOE modified antisense oligos [27]. They have successfully reversed symptoms of DM1 in these mice by eliminating toxic RNA in muscle fibers. Currently the group is working to improve their lead compound further, developing antisense oligos with stronger efficacy against the toxic RNA, but with minimal toxic effects.
Currently, no clinical trials are underway that involve AOs for the treatment of DM. Prosensa Therapeutics (Leiden, Netherlands) is currently in the pre-clinical stage of developing an antisense oligo, PRO135, which was shown to ameliorate toxic effects in vivo in DM1 preclinical models (Table 1).
Exon Inclusion Therapy for SMA
Spinal muscular atrophy (SMA) is a lethal autosomal recessive disease caused by a genetic defect in the SMN1 (survival motor neuron) gene [135,136]. SMA is characterized by the deterioration of spinal motor neurons, followed by weakness and wasting of the voluntary muscles in the arms and legs of infants and children, resulting in death during childhood [137]. Interestingly, SMA patients retain at least one copy of a highly homologous gene called SMN2 [138]. SMN2, an inverted duplicate copy nearly identical to SMN1, is unable to compensate for the loss of SMN1 due to a C-T transition in exon 7 which interferes with a splice modulator, causing exon 7 to be lost and rendering the resultant SMN protein nonfunctional; however, some full-length SMN transcripts (~10%) and functional SMN proteins are still produced. The SMN2 gene differs from SMN1 by only five base pair changes [139]. Consequently, upregulation of SMN by modification of SMN2 exon 7 splicing is a promising therapeutic approach (Figure 5), an approach that has already demonstrated favorable results in animal models [69,140-143]. Antisense PMOs targeting splice silencing motifs that promote exon 7 retention successfully rescued the phenotype in a severe mouse model of SMA after intracerebroventricular (ICV) delivery [144]. In addition, the PMO injection led to longer survival after a single ICV dose. Isis Pharmaceuticals is currently conducting a clinical trial of its antisense drug ISIS-SMNRx (Table 1). The study involves patients with infantile-onset SMA and is currently seeking to recruit eight participants between three weeks and seven months of age in the US and Canada. The aim of the study is to provide information regarding the safety and tolerability of ISIS-SMNRx. The results of this investigation will help lay the foundation for a future large-scale phase II/III clinical trial. The drug under investigation, ISIS-SMNRx, is a 2'-MOE modified AO designed to modulate SMN2 splicing, thereby increasing levels of SMN protein. A previously concluded Phase I trial evaluating ISIS-SMNRx (ClinicalTrials.gov identifier: NCT01494701) showed the drug to be well-tolerated across all doses and also reported a significant improvement in muscle function in several participants.
Exon Skipping Therapy for Dysferlinopathy
The dysferlinopathies are a category of muscular dystrophy arising from mutations in the dysferlin (DYSF) gene [145,146]. Three clinically distinct autosomal recessive muscular dystrophies are attributed to DYSF mutations: limb-girdle muscular dystrophy type 2B (LGMD2B), Miyoshi myopathy (MM), and distal myopathy with anterior tibial onset (DMAT) [147-152]. Dysferlinopathy is characterized by progressive muscle weakness and atrophy, with onset usually beginning in adulthood and commencing in either the proximal or distal muscles, which defines the clinical phenotype. Although initially distinct, the clinical phenotypes of dysferlinopathy span a wide spectrum of pathology that becomes less divergent as the disease progresses, eventually involving both proximal and distal muscle groups until the disorders become clinically indistinguishable. The sarcolemmal protein dysferlin is a ubiquitously expressed transmembrane protein found abundantly in cardiac and skeletal muscle, where it plays a pivotal role in plasma membrane re-sealing [147,153-159].
A promising therapeutic approach to treating dysferlinopathies is exon skipping, wherein AOs are used to selectively target exonic sequences and prevent their incorporation into the final mRNA transcript [65,160]. This process of splicing modulation restores the open reading frame and leads to the production of a truncated yet functional protein, and has already been demonstrated in vitro using dysferlinopathy patient-derived cells [161]. In addition, Sinnreich et al. reported a case wherein the mildly affected mother of two severely affected daughters, both having LGMD2B with homozygous DYSF null mutations, was found to carry a lariat branch point mutation that resulted in the in-frame skipping of exon 32. The action of the resulting dysferlin protein is thought to account for her mild phenotype [162]. Therefore, at least dysferlin exon 32 is thought to be a promising target for exon skipping therapy, although there are currently no ongoing or pending clinical trials involving AO-mediated therapy for dysferlinopathy.
Antisense Therapy for Amyotrophic Lateral Sclerosis (ALS)
Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease affecting upper and lower motor neurons in the brain and spinal cord [163]. Though associated with some clinical heterogeneity, ALS typically manifests during adulthood and is characterized by progressive neuronal death, spasticity, muscle atrophy, paralysis, and death within ~5 years of diagnosis [164-166]. Most cases of ALS are sporadic; however, ~10% of cases are familial and follow an autosomal-dominant pattern of inheritance [167,168]. Of these, 20% are caused by mutations in the Cu/Zn superoxide dismutase (SOD1) gene, resulting in a toxic gain-of-function via a currently unknown mechanism [169-171]. Although currently believed to be the result of a gain-of-function mechanism, initial investigations into the role of SOD1 in ALS supported a loss-of-function mechanism [172,173]. Belief in a loss-of-function model waned significantly following in vivo experiments involving transgenic mice expressing human SOD1 protein, which exhibited progressive neurodegeneration, mirroring human ALS clinical pathology [170,174,175]. Clinical observations which failed to support a connection between SOD1 activity and disease progression further eclipsed the idea of loss-of-function [176]. However, in their recent article, Saccon et al. compile previous and recent findings to provide a compelling argument for the existence of a possible modifying role of loss-of-function in ALS [177]. They note that SOD1 activity is significantly reduced in ALS patients and that SOD1-null mice exhibit neuropathology similar to human ALS. Although a loss of SOD1 activity does not appear directly responsible for the ALS phenotype, these data support the idea of a possible synergistic relationship between gain-of-function and loss-of-function in ALS disease progression. The interplay between gain- and loss-of-function has also been described in a host of other neurodegenerative disorders, including Huntington's disease and Parkinson's disease [178,179]. As such, the implications for antisense therapy in neurology, and especially for the antisense-mediated reduction of SOD1, are profound. The long-term effects of downregulating SOD1, therefore, should be an important focus of future clinical trials.
Isis Pharmaceuticals recently concluded a Phase 1 placebo-controlled, double-blind, dose-escalation safety and tolerability clinical trial of their antisense drug ISIS-SOD1Rx (Table 1). The oligo employed in this study, ISIS 333611, was a 2'-MOE modified antisense oligo targeted to the first exon (bases 19-38) of SOD1 (regardless of mutation) that catalyzed RNase H1-mediated degradation [60,180]. The study involved patients from four US centers aged 18 years or older and carrying SOD1 mutations. Participants were given 12-h intrathecal infusions of ISIS 333611 at varying concentrations, or placebo. No clinically significant adverse effects associated with oligo administration were reported. Following administration, AO was detected in the CSF of all AO-treated participants and increased with dosage concentration. SOD1 concentrations in the CSF did not change significantly, though achieving SOD1 reduction was never an aim of the study.
In addition to the SOD1 gene, several other genes have also been implicated in ALS pathogenesis, including the TAR DNA binding protein (TARDBP), fused in sarcoma (FUS), angiogenin (ANG), ubiquilin 2 (UBQLN2), and valosin-containing protein (VCP) genes [181][182][183][184][185][186][187][188][189]. Most notably, it was recently discovered that a GGGGCC hexanucleotide repeat expansion in the first intron of the C9orf72 gene is the most common genetic cause of ALS pathogenesis, more common than all other known ALS gene mutations combined, accounting for between 37%-50% of familial ALS cases among studied cohorts [190][191][192][193][194][195][196][197]. Although both loss-of-function and gain-of-function mechanisms have been postulated, the underlying etiology by which these C9orf72 expanded repeats result in neurodegeneration is, as yet, unknown; however, evidence suggests a pathogenic threshold of hexanucleotide repeats may exist, though such a threshold has not yet been fully demarcated [191,192,195,[198][199][200]. Because of the high prevalence of C9orf72 mutations in cases of ALS, and because mutations in C9orf72 have also been associated with other neurodegenerative disorders, such as Parkinson's disease and frontotemporal dementia (FTD), C9orf72 is a promising candidate for targeted antisense therapy [191,195,[201][202][203]. Research groups are currently working with ISIS Pharmaceuticals to develop an antisense strategy for C9orf72-based ALS, working under the hypothesis that reducing mutant C9orf72 transcripts using AOs will ameliorate toxic aggregations of expanded repeat mRNA, which present as nuclear foci in brain and spinal cord in affected patients [191,204]. Early investigations using AOs have yielded promising results, reducing the frequency of C9orf72 expanded repeat aggregates and stabilizing gene expression in vitro [204,205].
Antisense Therapy for Huntington's Disease
Huntington's disease (HD) is an adult-onset, lethal, progressive neurodegenerative disease that follows an autosomal dominant pattern of inheritance. Clinical manifestations of HD include cognitive decay, such as the diminished ability to perform executive functions, motor deficits, such as chorea (involuntary, spastic movements), the inability to manage prehensile controls, and psychiatric disturbances, such as dysphoria, anxiety, irritability, mania and psychosis [206][207][208][209][210][211]. Neuropathological features of HD include widespread neuronal atrophy and the formation of nuclear/intranuclear inclusions in neural tissues of the brain [208,[212][213][214][215][216][217]. Although the precise etiology of HD is still unknown, the disease is caused by a trinucleotide CAG-expansion in the first exon of the Huntingtin (HTT) gene, which results in a toxic gain-of-function of the resultant mutant huntingtin protein (mHTT) [218,219]. The inclusion bodies are composed of aggregates of misfolded mHTT and their density is highly correlated with repeat length [220][221][222]. Wild-type huntingtin (HTT) is ubiquitously expressed and is found at high concentration in the brain [223][224][225]. HTT is vital to proper embryonic development and neurogenesis, and also plays a role in protecting CNS cells from apoptosis, vesicular trafficking, axonal transport, and synaptic transmission [224,[226][227][228][229][230][231][232][233][234][235]. Because the loss of HTT is associated with several deleterious consequences, the allele-specific silencing of mHTT is a promising therapeutic approach to treating HD [58,61,179,236], although some studies have shown significant beneficial effects from the co-suppression of both mutant and wild-type alleles [237][238][239].
The two foremost therapeutic approaches to allele-specific silencing of mHTT are the targeting of single nucleotide polymorphisms (SNPs) and the direct targeting of the expanded CAG region [240-245]. In vivo studies have demonstrated successful selective reduction of mHTT and a concomitant amelioration of HD neuropathology and behavioral/motor dysfunctions in mouse models [246-248]. Since AOs were first used to downregulate the expression of HTT, much attention has been given to developing antisense strategies aimed at selectively reducing mHTT levels [249]. A variety of AO chemistries, including PNA, LNA, 2'-MOE, and morpholino chemistries, have been used in vitro and in vivo to selectively reduce levels of mHTT [58,239,243,245,250-252]. Notably, similar to DM1, infusion of a 2'-MOE modified antisense oligo into the cerebrospinal fluid of HD mouse models successfully reversed disease progression through RNase H1-mediated degradation of huntingtin mRNA [239]. No clinical trials involving antisense oligos for the therapeutic treatment of HD are currently being pursued; however, Prosensa Therapeutics is currently conducting preclinical tests of their antisense drug PRO289, designed to reduce levels of mHTT by targeting the expanded CAG tract (Table 1). So far, PRO289 has been successful in reducing mutant transcripts in HD patient-derived fibroblasts.
Conclusions-What Does the Future Hold?
Lately, antisense therapies have moved one step closer to entering the clinical arena. The data from the Phase 2 DMD clinical trials are very promising. Isis Pharmaceuticals has recently started clinical trials of an antisense oligonucleotide therapy for SMA and ALS. Antisense drugs against FCMD, DM1, and Huntington's disease are still in the preclinical stage of the development process but have shown promising results in animal models. Some in vitro studies have demonstrated that dysferlinopathy is also a possible target for antisense therapy. Remaining challenges include limited delivery to the heart, potential off-target effects, a lack of long-term safety data, and the limited applicability of each antisense oligo targeting each mutation (particularly in exon skipping therapy for DMD and dysferlinopathies). Unfortunately, the current regulatory process for drug development is not designed to handle these kinds of sequence-specific oligonucleotide therapies [253]. A re-evaluation of the current drug approval process, which takes into consideration the common characteristics of the same antisense chemistry and the differences in the specific sequences, would help create a more efficient path for the development of antisense drugs and would benefit the progress of personalized medicine. With the recent clinical success of several antisense-based therapies, and the establishment of proof-of-concept efficacy in several disease models, antisense oligos have established themselves as a promising and rapidly developing therapeutic strategy covering a wide range of genetic disorders. With such dramatic improvements in antisense technology in a relatively short time frame, and with the current frenzied pace of antisense research, new and enhanced AO designs will likely be forthcoming and will facilitate their widespread application in the clinical realm.
"year": 2013,
"sha1": "c27ecc2f9c949cb702946e6e25be7591a1b3358b",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/2075-4426/3/3/144/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c27ecc2f9c949cb702946e6e25be7591a1b3358b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
On the semiclassical treatment of anharmonic quantum oscillators via coherent states - The Toda chain revisited
We use coherent states as a time-dependent variational ansatz for a semiclassical treatment of the dynamics of anharmonic quantum oscillators. In this approach the square variance of the Hamiltonian within coherent states is of particular interest. This quantity turns out to have a natural interpretation with respect to time-dependent solutions of the semiclassical equations of motion. Moreover, our approach allows for an estimate of the decoherence time of a classical object due to quantum fluctuations. We illustrate our findings with the example of the Toda chain.
Introduction
Coherent states are an important notion in quantum physics, in particular with respect to semiclassical approximations; for general references see [1,2,3]. The coherent states of the harmonic oscillator were introduced by Schrödinger [4] and were reexamined by Glauber [5] in the context of quantum optics. For spin systems, spin-coherent states, i. e. the coherent states of SU(2), were introduced by Radcliffe [6]. These two types of coherent states provide an immediate connection to the classical limit of generic quantum systems and are the most important examples of coherent states in physics. The connection to the classical limit is obtained by using coherent states as a time-dependent variational ansatz to investigate the dynamics of a quantum system. Recently, this approach has been reconsidered by the present authors with respect to interacting spin systems given by a general Heisenberg model [7]. The central result in that work is the evaluation of the square variance of the Hamiltonian within coherent states. This quantity turns out to have a natural interpretation with respect to time-dependent spin structures and also allows for an estimate of the validity of the variational approach. In the present work we extend these results to the case of oscillator systems. In classical nonlinear lattices, as well as in classical spin systems, certain nonlinear excitations like solitary waves are of particular interest. However, it is an open question whether such dynamic and spatially localized excitations can also exist in the corresponding quantum systems. The results of this work provide an estimate for the lifetime of such objects. We demonstrate this for the example of the Toda chain. The outline of this paper is as follows: In section 2 we summarize the essential properties of the coherent states of the harmonic oscillator, and in section 3 we introduce the time-dependent variational method in quantum mechanics. This method is used in the next section to treat a generic anharmonic oscillator. In particular, the square variance of the Hamiltonian is evaluated. This quantity shows properties very analogous to those obtained in [7] for the case of quantum spin systems. These findings can be extended to the case of several anharmonically coupled degrees of freedom; as an example we examine the quantum Toda chain in sections 5 and 6.
Coherent states of the harmonic oscillator
The Hamiltonian of the quantum harmonic oscillator is given in standard notation by H = p²/2m + mω²q²/2 = ħω(a†a + 1/2), with the lowering and raising operators a = √(mω/2ħ)(q + ip/mω) and a† = √(mω/2ħ)(q − ip/mω), and the well-known commutation relations [q, p] = iħ, [a, a†] = 1. The quantities √(ħ/mω) and √(ħmω) arising in the operators (2) are the characteristic length and momentum, respectively. The system has an equidistant spectrum. Eigenstates are naturally labelled by n ∈ {0, 1, 2, . . .}, with H|n⟩ = ħω(n + 1/2)|n⟩. Coherent states of the harmonic oscillator are eigenstates of the lowering operator a with complex eigenvalues α, a|α⟩ = α|α⟩.
They can be expressed as |α⟩ = e^(−|α|²/2) Σ_n (αⁿ/√(n!)) |n⟩. The parameter α is naturally decomposed into its real and imaginary parts as α = √(mω/2ħ)(ξ + iπ/mω), with real parameters ξ and π. Denoting an expectation value within a coherent state (6) by ⟨·⟩, it holds that ⟨q⟩ = ξ and ⟨p⟩ = π. Coherent states maintain their shape in the time evolution of the harmonic oscillator, and the time dependence of the expectation values (8) follows exactly the classical motion of the harmonic oscillator. This fact justifies the term 'coherent states'. A further important property of these objects is their completeness, but it should be mentioned that an arbitrary linear combination of coherent states does not have the property (5). Thus, the coherent states do not form a subspace of the Hilbert space but rather a submanifold.
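These defining relations are straightforward to check numerically. The following minimal sketch (an illustration, not part of the original analysis) constructs the lowering operator a in a truncated Fock basis with numpy and verifies that the expansion of |α⟩ in number states satisfies a|α⟩ ≈ α|α⟩ as well as ⟨q⟩ = ξ, ⟨p⟩ = π, in units m = ω = ħ = 1:

import numpy as np

N = 60                     # Fock-space truncation (adequate for |alpha| ~ 1)
alpha = 0.8 + 0.5j         # arbitrary coherent-state parameter

# Lowering operator: a|n> = sqrt(n) |n-1> in the truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Coherent-state coefficients c_n = exp(-|alpha|^2/2) alpha^n / sqrt(n!)
c = np.zeros(N, dtype=complex)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for n in range(1, N):
    c[n] = c[n - 1] * alpha / np.sqrt(n)

# Eigenvalue property a|alpha> = alpha |alpha> (up to truncation error)
print(np.allclose(a @ c, alpha * c, atol=1e-8))     # True

# Expectation values with q = (a + a†)/sqrt(2), p = -i(a - a†)/sqrt(2)
q = (a + a.conj().T) / np.sqrt(2)
p = -1j * (a - a.conj().T) / np.sqrt(2)
xi, pi_ = np.sqrt(2) * alpha.real, np.sqrt(2) * alpha.imag
print(np.isclose(np.real(c.conj() @ q @ c), xi))    # <q> = xi
print(np.isclose(np.real(c.conj() @ p @ c), pi_))   # <p> = pi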
The time-dependent variational method
The Schrödinger equation of quantum mechanics can be derived by extremizing the action functional S = ∫ dt ⟨ψ(t)|(iħ ∂_t − H)|ψ(t)⟩ with respect to the quantum state |ψ(t)⟩ (or ⟨ψ(t)|), which is kept fixed at the times t_i and t_f [8]. An approximate approach to the dynamics of a quantum system can be performed by restricting the states in (11) to a certain submanifold of the Hilbert space. In the context of semiclassical approximations coherent states are a natural choice. E. g. for a single particle moving in a potential the appropriate objects are coherent oscillator states as described in the foregoing section. Thus, our restricted action functional reads in this case S̃ = ∫ dt [π dξ/dt − ⟨α|H|α⟩], where we have left out a total time derivative in the last integrand. The coherent state |α⟩ is employed here as a time-dependent variational ansatz, i. e. its time dependence is assumed to be given by time-dependent parameters π(t), ξ(t). This restricted variational principle can be recognized as the stationary phase condition for the quantum mechanical transition amplitude between fixed states |α(t_i)⟩ and |α(t_f)⟩ when expressed as a path integral over coherent states [9]. The variational equations of motion obtained from (12) are dξ/dt = ∂⟨H⟩/∂π and dπ/dt = −∂⟨H⟩/∂ξ (14), which have the same form as the classical Hamilton equations. The time-dependent variational ansatz of coherent states becomes exact if the potential in the Hamiltonian is harmonic. Therefore, our approximate description of the quantum dynamics should be valid for not too large anharmonicities. This will be examined further in the next section.
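As a concrete illustration of these Hamilton-form variational equations, the following sketch (not from the paper; a minimal example in which the ħ-dependent corrections to ⟨H⟩ are dropped, so that ⟨H⟩ is approximated by the classical Hamiltonian π²/2m + V(ξ)) integrates dξ/dt = ∂⟨H⟩/∂π, dπ/dt = −∂⟨H⟩/∂ξ for a quartic anharmonic potential V(ξ) = mω²ξ²/2 + λ₄ξ⁴ with a hypothetical coupling λ₄:

import numpy as np
from scipy.integrate import solve_ivp

m, omega, lam4 = 1.0, 1.0, 0.1   # hypothetical parameters (units with hbar = 1)

def rhs(t, y):
    """Hamilton-form variational equations for (xi, pi), with <H>
    approximated by the classical Hamiltonian pi^2/2m + V(xi)."""
    xi, pi_ = y
    dxi = pi_ / m                                   # d<H>/dpi
    dpi = -(m * omega**2 * xi + 4 * lam4 * xi**3)   # -d<H>/dxi
    return [dxi, dpi]

y0 = [1.0, 0.0]   # released at rest, displaced from the potential minimum
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)

def energy(xi, pi_):
    return pi_**2 / (2 * m) + m * omega**2 * xi**2 / 2 + lam4 * xi**4

# Energy conservation is a quick sanity check on the integration.
E = energy(sol.y[0], sol.y[1])
print(f"relative energy drift: {abs(E.max() - E.min()) / E[0]:.2e}")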
The anharmonic oscillator
Let us consider a generic anharmonic quantum oscillator, H = p²/2m + V(q). With coherent states as a time-dependent variational ansatz we find the expectation value of the energy ⟨H⟩, which is of course constant in time. The variational equations of motion (14) then take the explicit forms (17), (18) for this Hamiltonian. It is worthwhile to note that the same equations can be obtained from the Heisenberg equations of motion for the operators q and p, when the expectation values of both sides of the equations are taken within the state |α⟩ and the same assumption about its time evolution is made as above. This approach has been used by Krivoshlykov et al. [10]. The equations (16)-(18) reduce to the classical ones in the limit ħ → 0. Therefore, the coherent states reproduce the classical limit. Next let us examine the square variance of the energy, i. e. ⟨H²⟩ − ⟨H⟩². This quantity is non-zero only in the quantum case and, as well as ⟨H⟩, strictly an invariant of the system, whatever the exact quantum mechanical time evolution of the coherent state is. The square variance can be written in the form ⟨H²⟩ − ⟨H⟩² = Ω₁ + Ω₂, where the quantity Ω₁ is of leading order ħ, while Ω₂ contains only higher orders. The squared expressions in Ω₁ can be recognized as the right hand sides of (17), (18). Thus, we have Ω₁ = (ħ/2mω)(dπ/dt)² + (ħmω/2)(dξ/dt)². Within our variational approach, the first order in ħ of the square variance of the Hamiltonian is purely due to the time dependence of the state vector. On the other hand, for a quantum state which has a non-trivial time evolution and is consequently not an eigenstate of the Hamiltonian, the energy must definitely have a finite uncertainty. Following this observation, the first order in (20) is not to be considered as an artifact of our variational ansatz, but as a physically relevant expression for the uncertainty of the energy for a time-dependent solution to the variational equations of motion (17), (18). Therefore, the variational approach with coherent states does not only reproduce the classical limit, but is also meaningful for a semiclassical description of the anharmonic oscillator. The contributions of higher order summarized in Ω₂ indicate limitations of our variational ansatz, i. e. they are a measure of decoherence effects due to the quantum mechanical time evolution. To clarify this, let us consider the temporal autocorrelation function ⟨α|e^(−iHt/ħ)|α⟩, i. e. the projection of the time-evolved state onto the initial coherent state. The modulus of this quantity depends on time for two different reasons: Firstly, the quantum state has a non-trivial semiclassical time evolution described by the equations (17), (18). In real space the coherent state is represented by a Gaussian. Within our semiclassical description of the dynamics the wave function remains a Gaussian, but its center is moving. Therefore the overlap of the initial state and the time-evolved state is reduced. Secondly, defects of our variational approach, which lead to decoherence effects, also diminish the scalar product (24). Such quantum fluctuations affect the shape of the wave function, which will not remain strictly of the Gaussian form under the exact quantum mechanical time evolution in the anharmonic potential. The latter effects become significant on a time scale given by the uncertainty relation, where the relevant contribution to the uncertainty of the energy is given by Ω₂, i. e. ∆t ≈ ħ/√Ω₂ (25). Alternatively one may consider the correlation amplitude C(t) = ⟨α(t)|e^(−iHt/ħ)|α⟩, with α(t) given by time-dependent functions ξ(t) and π(t) which are solutions of (17), (18) with the initial condition α(0) = α.
This quantity is the projection of the coherent state evolved under the exact quantum mechanical time evolution onto the state given by the semiclassical time evolution. If the potential in the Hamiltonian is purely harmonic, we have |C(t)| = 1 and Ω₂ vanishes. In this case our variational ansatz of coherent states is of course exact and no decoherence effects occur. This observation also supports our interpretation of the different contributions to ⟨H²⟩ − ⟨H⟩². Thus deviations of the modulus of (26) from unity measure decoherence effects due to the exact quantum mechanical time evolution under the anharmonic Hamiltonian. These effects manifest themselves in the additional contribution Ω₂ to the square variance of the energy. The leading order Ω₁ can be interpreted purely as an effect of the semiclassical time evolution, which does not incorporate decoherence effects since it assumes the state vector to remain within the submanifold of coherent states throughout the time evolution. If one inserts a generic time-dependent solution ξ(t), π(t) of the semiclassical equations of motion (17), (18) in Ω₁ and Ω₂, these quantities will not be constant in time separately (although their sum Ω₁ + Ω₂ strictly is constant in the exact quantum mechanical time evolution). However, as an approximation, one may use in (25) the value of Ω₂ given by the initial value of ξ. This is justified if the semiclassical motion of the particle is not too fast, i. e. the semiclassical momentum π is not too large. In particular, if the initial coherent state is chosen to have π(0) = 0 and a certain value of ξ, the particle will move in the semiclassical description to smaller ξ(t) because of the attractive potential. In this case the Ω₂ evaluated for the initial value ξ(0) is an upper bound for Ω₂ evaluated for later times, since this quantity grows with increasing ξ. Conversely, quantum fluctuations summarized in the quantity Ω₂ become larger if ξ approaches the turning point of the semiclassical motion governed by the equations (17), (18). This feature is well known from the usual WKB approximation and therefore consistent with the interpretation of Ω₂ given above. Moreover, in the following we will also examine other systems which exhibit stationary semiclassical dynamics with Ω₁ and Ω₂ being constant in time separately. Another example where the validity of our considerations can be checked explicitly is the free particle with H = p²/2m. Let the particle be initially in a coherent state with the wave function ψ(q, 0) = (mω/πħ)^(1/4) exp(−mω(q − ξ)²/2ħ + iπq/ħ). The quantity ω is not a frequency here but a parameter which determines the localization of the particle in real and momentum space around the expectation values (8). The square variance of the Hamiltonian reads the same as in (20) with Ω₁ = ħωπ²/(2m) and Ω₂ = (ħω)²/8. Since the expectation value of the momentum is constant for such a translationally invariant system, Ω₁ and Ω₂ are conserved separately. The time-evolved wave function can be obtained readily as a Gaussian of growing width, |ψ(q, t)| = (mω/πħ)^(1/4) (1 + ω²t²)^(−1/4) exp(−mω(q − ξ − πt/m)²/(2ħ(1 + ω²t²))), with a real phase ϕ(q, t). Thus, the width of the wave function increases, i. e. its spatially localized structure is smeared out, on a time scale of ∆t = 1/ω, which is consistent with the estimate given by (25). This result also strongly supports the above interpretation of the quantities Ω₁ and Ω₂.
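This free-particle spreading law can be reproduced numerically. The sketch below (a minimal illustration in units m = ħ = 1, not taken from the paper) propagates the initial Gaussian exactly in momentum space and confirms that its width grows as √(1 + ω²t²) times the initial width, i. e. the packet spreads appreciably on the time scale 1/ω:

import numpy as np

omega = 1.0                          # localization parameter of the initial state
L, N = 200.0, 4096                   # box size and grid points
q = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Initial coherent state: Gaussian centered at xi with mean momentum pi
xi, pi_ = 0.0, 1.0
psi = (omega / np.pi) ** 0.25 * np.exp(-omega * (q - xi) ** 2 / 2 + 1j * pi_ * q)

def width(psi):
    rho = np.abs(psi) ** 2
    rho /= rho.sum()
    mean = (rho * q).sum()
    return np.sqrt((rho * (q - mean) ** 2).sum())

w0 = width(psi)
for t in [0.5, 1.0, 2.0]:
    # Free evolution is exact in momentum space: psi_k -> psi_k exp(-i k^2 t / 2)
    psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k ** 2 * t / 2))
    print(t, width(psi_t) / w0, np.sqrt(1 + (omega * t) ** 2))  # two columns agree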
In the next section we will make further use of the estimate of the decoherence time ∆t provided by (25). The findings described above are completely analogous to the results obtained recently on interacting spin systems with spin-coherent states as a time-dependent variational ansatz [7]. The particular case corresponding to the harmonic limit of an oscillator is given here by a paramagnet, where all spins are independent of each other and coupled only to a static magnetic field. In this case all spins perform a Larmor precession around the field axis, and this motion is described exactly by spin-coherent states. A further common aspect of the harmonic oscillator and a spin in a magnetic field is the equidistance of the spectra of both systems.
Anharmonic lattices: The Toda chain
It is an obvious idea to generalize the results of the foregoing section to systems with many anharmonically coupled degrees of freedom. Let us consider a Hamiltonian H = T + V with T = Σ_n p_n²/2m and V = Σ_n V(q_{n+1} − q_n), where N is the number of degrees of freedom and periodic boundary conditions are imposed. The 2-particle potential V(x) contains in general anharmonic terms. To give a semiclassical description of the dynamics, one may proceed similarly as for the single anharmonic oscillator, but for a general potential V(x) such an approach leads to quite complicated expressions, in particular for the square variance of the energy. Fortunately, a special case exists where the results can be given in a concise form. This case is the Toda chain, which is well known in the theory of nonlinear lattices [11]. The potential V contains the two parameters η and γ; for further convenience we have rewritten η in terms of the new parameters ω and λ to be determined below. In the limit γ → 0 the system is just the harmonic chain having independent phonon modes labelled by the wave number k with the acoustic phonon dispersion ω(k) = 2ω sin(|k|/2). The usual phonon operators read as in (32). An appropriate variational ansatz is given by coherent phonon states, where the coherent state of the mode k fulfills b_k|β_k⟩ = β_k|β_k⟩ and the tensor product runs over the first Brillouin zone. Again we denote expectation values within (33) by ⟨·⟩. The parameters β_k are related to the local expectation values ⟨q_n⟩ = ξ_n, ⟨p_n⟩ = π_n by equation (34). Such an approach to the dynamics of the quantum Toda chain has been performed by Dancz and Rice [12], and by Göhmann and Mertens [13]. Here we add instructive results on the square variance of the Hamiltonian. The expectation value of the Hamiltonian is given by (35), where ∆₀ is a correlation in the phononic vacuum |0⟩. More generally, one has the correlations ∆_{n−n′}, for which the relations (37) hold. The expectation value (35) has the same form as the classical Toda Hamiltonian up to a renormalization of the parameter λ. The equations of motion are obtained analogously as in (14) and therefore also have the same functional form as the classical ones. It was shown in [13] that this is a peculiarity of the Toda potential. From the equations of motion one obtains the relation (38). Note that the left hand side of (38) is of leading order ħ, since the parameters β_k contain a factor 1/√ħ (cf. (34)). The square variance of the Hamiltonian is given by the expressions (39)-(41). These expressions can be derived by similar methods as described in [13]. The technical advantage of the Toda potential lies in the fact that the contribution R₂ has a comparatively simple form and can be obtained via the Baker-Campbell-Hausdorff identity. Expanding the factor (exp(γ²∆_{n−n′}) − 1) in (41) and using the equations (38), (37) one can rewrite these formulae as a sum ⟨H²⟩ − ⟨H⟩² = Σ_µ Ω_µ, with Ω₁ and Ω₂ given by (44), (45) and, for µ > 2, Ω_µ given by (46). Each term Ω_µ is of leading order ħ^µ because ∆_{n−n′} ∝ ħ. As seen from (44), the lowest order in ħ in the square variance of the Hamiltonian is purely given by the time dependence of the semiclassical variables. Therefore, the same conclusions apply as in the foregoing section. Note also that again in the harmonic limit γ → 0 all Ω_µ for µ > 1 vanish and the variational ansatz is exact.
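For readers who wish to experiment with the semiclassical dynamics, the following sketch (an illustration, not the paper's computation) integrates the classical-form equations of motion of a periodic Toda chain. It assumes the common parametrization V(x) = (mω²/γ²)(e^(−γx) + γx − 1), which reduces to the harmonic potential mω²x²/2 as γ → 0 as required above; whether this matches the paper's (η, γ) convention exactly is an assumption here:

import numpy as np
from scipy.integrate import solve_ivp

N, m, omega, gamma = 64, 1.0, 1.0, 1.0   # chain size and hypothetical parameters

def rhs(t, y):
    """Classical Toda chain with periodic boundary conditions.
    V = sum_n v(q_{n+1} - q_n), v(x) = (m omega^2/gamma^2)(exp(-gamma x) + gamma x - 1)."""
    q, p = y[:N], y[N:]
    x = np.roll(q, -1) - q                                    # bond stretches
    f = (m * omega**2 / gamma) * (1.0 - np.exp(-gamma * x))   # v'(x) per bond
    # dq_n/dt = p_n/m;  dp_n/dt = v'(x_n) - v'(x_{n-1})
    return np.concatenate([p / m, f - np.roll(f, 1)])

# Localized compressive pulse as initial condition (not an exact soliton,
# so some radiation is shed as it propagates).
q0 = -0.5 / np.cosh(0.5 * (np.arange(N) - N / 2)) ** 2
y0 = np.concatenate([q0, np.zeros(N)])
sol = solve_ivp(rhs, (0, 40), y0, rtol=1e-9, atol=1e-11)

def energy(y):
    q, p = y[:N], y[N:]
    x = np.roll(q, -1) - q
    V = (m * omega**2 / gamma**2) * (np.exp(-gamma * x) + gamma * x - 1)
    return (p**2).sum() / (2 * m) + V.sum()

print("relative energy drift:",
      abs(energy(sol.y[:, -1]) - energy(y0)) / energy(y0))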
We have demonstrated the result given in the equations (43), (44) for the Toda chain as an example, mostly to reduce technical difficulties. In fact, from the experience with an analogous semiclassical treatment of quite general Heisenberg spin models in arbitrary spatial dimension [7], these findings are expected to hold for more general lattice models.
Decoherence effects on semiclassical solitary waves in the Toda chain
In the last decades an immense literature has emerged on solitons in solid state physics. In those publications, the solid is usually modelled (at least effectively) as a classical system, while in fact it generally carries quantum degrees of freedom. We will see below how our approach can be used to make contact between the classical and the quantum mechanical description. In particular, the validity of theories based on classical solitary excitations can be estimated. The one-dimensional Toda lattice is an integrable system in the classical [11,14] as well as in the quantum mechanical case [15]. Moreover, a formal identification can be made between the dispersion law of the 1-soliton solution of the classical system and a certain branch of the excitation spectrum of the quantum model, which is obtained by the Bethe ansatz [16]. Both dispersions are identical in form, and in this sense the quantum analogue of a classical soliton may be viewed as a certain stationary state of the quantum system; see also [17] for a discussion of that issue in a semiclassical context. Nevertheless, such an eigenstate obtained from the Bethe ansatz is not a dynamical object and is naturally translationally symmetric, i. e. not localized like a classical soliton. Moreover, such an explicit identification is in general only possible if the quantum and the classical system are both integrable. Therefore the question arises whether quantum states exist which have the essential properties of classical solitary waves, which are required in many classical descriptions of phenomena like energy transport etc. As such a quantum state is not translationally symmetric, it cannot be expected to be an eigenstate of the quantum system. Moreover, its time evolution is in general not fully coherent, but decoherence effects due to quantum fluctuations cause a finite lifetime of such a localized state. In the following we give an estimate for this lifetime of semiclassical solitary waves built up from coherent states in the quantum Toda chain. Let us first consider the variational ground state of the Toda chain with ξ_n = π_n = 0 for all n. Here we clearly have Ω₁ = 0, and for Ω₂ we find an expression which is also zero for λ = (γ/2)∆₀. With respect to the parameter η entering the potential (31), this yields a relation that determines the frequency ω which enters the variational ansatz (33) via the phonon dispersion ω(k). One always has a non-negative solution ω for any non-negative η. Note that with this choice for ω the quantum corrections in the exponential factor in the variational expression (35) and also in the equations of motion cancel with the parameter λ, but are of course present compared with the original Hamiltonian. However, the higher order terms Ω_µ with µ > 2 are in general non-zero for this classical ground state solution. Thus, our variational ground state approximates the exact ground state within the first two orders of ħ. To account for higher corrections one has to implement a more complicated state than (33). Therefore, in the spirit of the WKB approximation scheme we can be confident to give a valid description of the quantum system within the first two orders of ħ. Let us now turn to solitary solutions of the variational equations of motion (which are practically the same as the classical equations). As mentioned above, such solutions do not correspond to (approximate) eigenstates of the system like the variational ground state, but suffer decoherence effects in their time evolution.
Nonlinear excitations in the classical Toda chain with periodic boundary conditions are so-called cnoidal waves, which can be expressed in terms of Jacobi elliptic functions. As a limiting case, a pulse soliton arises which is given by elementary expressions [11], with the soliton parameter κ, which is the inverse soliton width, and ν = ω sinh κ. Although this solution of the variational equations of motion is, strictly speaking, not compatible with periodic boundary conditions, it is an excellent numerical approximation for the cnoidal waves for large wavelength and system size. For simplicity, we shall concentrate on the above expressions in the following. With this solution the quantities Ω_µ can be written in a form where the Q_µ depend only on κ. In particular, the Q_µ (and therefore the Ω_µ) are time-independent since our soliton solution describes a stationary movement, where a translation in time is equivalent to a translation in space. Therefore the time dependence drops out when the summations over the system in the equations (44)-(46) are performed. The dimensionless quantity (mω²/γ²)/(ħω) is the ratio of the energy scales of the nonlinear interaction and of the linear phonon excitations. In a semiclassical regime this quotient is large and suppresses all orders Ω_µ with µ > 2 (which are not considered here further, cf. above). For µ = 2 we obtain for an infinite system a double sum over lattice sites.
The above summations are non-elementary. The largest contribution stems from the summand with l = 0. Replacing the remaining sum over n by an integral, one concludes that this quantity should scale approximately like 1/κ. Indeed, a numerical evaluation of the full double sum for κ ∈ ]0, 0.5] shows that a very accurate value for this expression is 4/(3κ); deviations from this occur only for large κ and are of order 10⁻⁵. Therefore, we may write this sum as 4/(3κ) to a very good approximation, and the estimate of the decoherence time follows according to (25). Multiplying by the soliton velocity c = ν/κ, one finds for the decoherence length ∆l = c∆t, for small κ, the relation (55). Remarkably, no system parameter or Planck's constant itself, but only the soliton width enters (55). The decoherence length is large for small κ, i. e. broad solitons. For instance, a soliton with a width of 100 lattice units may travel (at least) about ten times this distance until decoherence effects become significant. With respect to the classical picture of solitons, this appears rather restrictive. On the other hand, the relation (55) provides only a lower bound for the coherence length; e. g. in the classical limit ħ → 0 all decoherence effects vanish and the decoherence length becomes infinite. However, for not too large values of the ratio (mω²/γ²)/(ħω) the decoherence length should be assumed to be of the order of the right hand side of (55), at least as a 'conservative' estimate.
Conclusions
In this work we have examined coherent states as a time-dependent variational ansatz for a semiclassical description of anharmonic oscillators. In particular, the square variance of the Hamiltonian, ⟨H²⟩ − ⟨H⟩², within coherent states is considered. For a single anharmonic oscillator, the first order in ħ of this quantity turns out to be purely given by the variational time dependence of the quantum state, cf. equations (20)-(23). Therefore, this contribution has a natural interpretation, which can be confirmed rigorously in the case of the harmonic oscillator and the free particle. Compared with recent results on spin-coherent states [7], this appears to be a general property of coherent states with respect to generic quantum systems. The remaining contributions to ⟨H²⟩ − ⟨H⟩² can be used to estimate decoherence effects which arise from quantum fluctuations. In the foregoing section we have illustrated this by the example of the Toda chain. We have chosen this system because it provides comparatively simple expressions for the quantities considered here, and explicit solitary solutions of the classical equations are available. In fact, we expect our approach to be useful for much more general anharmonic lattices.
"year": 1998,
"sha1": "92b9608c107a6bb5d9c43c7b10ccca5ebac23b10",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9811371",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "92b9608c107a6bb5d9c43c7b10ccca5ebac23b10",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Studying the Cosmic X-ray Background with XMM-Newton
We present a work in progress aimed at measuring the spectrum of the Cosmic X-ray Background (CXB) with the EPIC detectors onboard XMM-Newton. Our study includes a detailed characterization of the EPIC non X-ray background, which is crucial in making a robust measurement of the spectrum of the CXB. We present preliminary results, based on the analysis of a set of Commissioning and Performance Verification high galactic latitude observations.
Introduction
The discovery of a diffuse background radiation in the X-ray sky dates back to the birth of X-ray astronomy: the first evidence was obtained by Giacconi et al. (1962) during the same rocket experiment which led to the discovery of Sco X-1, the first extra-solar X-ray source. Later observations have demonstrated that the bulk of the Cosmic X-ray Background (CXB) above energies of ≈2 keV is of extragalactic origin, due to sources below the detection threshold. The first wide-band measurements of the CXB were made by HEAO-1 (1977): the CXB spectrum in the 2÷10 keV range was well described by a simple power law with photon index ≈ 1.4 (Marshall et al. 1980). More recently, several investigations (ASCA GIS/SIS: Miyaji et al. 1998, Gendreau et al. 1995; ROSAT PSPC: Georgantopulos et al. 1996; SAX LECS/MECS: Vecchi et al. 1999) have confirmed the spectral shape but have shown differences of order 30% in the normalization. Barcons et al. (2000) showed that cosmic variance cannot account for the differences among the previous measures of the CXB intensity; such an uncertainty seriously affects the modelling and the interpretation of the CXB (see e.g. Pompilio 2000). A new, reliable measure of the CXB is thus required to improve the overall understanding of its nature. The XMM-EPIC instruments (Turner et al. 2001; Strüder et al. 2001) have appropriate characteristics to study extended sources with low surface brightness, offering an unprecedented collecting area (≈ 2000 cm²), good spectral resolution (2% @ 6 keV) over a broad energy range (0.2÷12 keV) and a wide field of view (≈ 15 arcmin radius). However, these cameras suffer from a rather high instrumental background (Non X-ray Background, NXB). A correct characterization and subtraction of the NXB component is thus the crucial step in order to study the lowest surface brightness sources of the sky. In this work we deal only with the EPIC MOS cameras; the pn camera, having different characteristics, will require a different approach.
Characterization of Non X-ray Background
The EPIC NXB can be divided into two parts: an electronic noise component, which is important only at the lower energies (below ≈0.3 keV), and a particle-induced component which dominates above 0.3 keV and is due to the interaction of particles in the orbital environment with the detectors and the structures that surround them. The particle-induced NXB is the sum of two different components:
1. a flaring component. Clouds of low-energy (≈ 100 keV) particles (believed to be protons) in the magnetosphere can be focused by the telescope mirrors, reaching the detectors. These unpredictable episodes cause an increase of the quiescent background rate of up to a factor of 100 (or even more). Data collected during these intervals are almost unusable, especially for the study of extended sources, and must be rejected with Good Time Interval (GTI) filtering.
2. a quiescent component. It is mostly due to the interaction of high-energy (E ≥ a few MeV) particles with the detectors and the surrounding structures.
To characterize the quiescent NXB we have analyzed a set of observations performed with the filter wheel in closed position:
- the temporal behaviour is stable within an observation time scale; we have hints of a secular decrease in the period covered by the dataset (orbits 20÷84);
- the spectrum is characterized by a flat continuum, with several fluorescence emission lines from materials in the detectors or the surrounding structures (see Fig. 1);
- the spatial distribution has a radially flat profile (within 10%), but the Al-K and Si-K line emission is highly anisotropic due to an illumination effect.
Further closed observations are now being collected to improve the quiescent NXB characterization.
Subtraction of Non X-ray Background
A standard recipe to remove the NXB, recovering the "pure" CXB spectrum, could be sketched as follows:
1. standard processing and event reconstruction;
2. rejection of hot pixels and bad columns;
3. GTI filtering (removing the flaring NXB component);
4. extraction of the spectrum from a selected area;
5. subtraction of the quiescent NXB spectrum.
We selected high galactic latitude (|b| > 27°) pointings in order to avoid contamination by our galaxy; we discarded Magellanic Clouds pointings and observations of very bright sources. The closed observations collected during the very early orbits (≤ 40) were rejected, as they are possibly not fully reliable. We used the XMM-Newton Science Analysis System (XMM-SAS) v.5.0 to perform the standard processing of the raw event lists. The linearized event lists were then cleaned from hot pixels and bad columns using an ad-hoc developed procedure which uses cosmic ray IRAF tasks to localize the pixels to be rejected in each CCD and the XMM-SAS task evselect to remove them. The next step was GTI filtering to avoid flaring NXB intervals: we set a threshold of 0.29 cts/s in the energy range 8÷12 keV, also rejecting the time bins having 0 counts. The event files from closed observations were merged in a single list for each camera. For each observation a geometric mask was created and applied to reject any bright central source present. The diagnostic described in sect. 3 was then applied to verify the level of SP contamination. For each selected observation, we defined the best area to extract the CXB spectrum by a simple Signal to Noise ratio optimization (recall that the signal of the CXB is vignetted, while the instrumental NXB is not, so the S/N decreases with increasing off-axis angle). The spectra were then extracted using an ad-hoc developed task which corrects for the telescope vignetting on an event-by-event basis, using the most recent vignetting function determinations and accounting for the azimuthal modulation induced by the RGA. The same routine is used to extract the NXB spectrum from the closed event list; the region of extraction is chosen in order to coincide in detector coordinates with the one used for the corresponding observation of the sky. For each observation a tentative "pure" CXB spectrum is obtained with a simple subtraction of the NXB spectrum. A second spectrum is obtained after a renormalization of the NXB spectrum according to the prescription described in sect. 3.
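The GTI filtering step lends itself to a compact illustration. The following sketch (hypothetical data structures, not the actual XMM-SAS implementation) builds a good-time mask from a high-energy light curve by applying the 0.29 cts/s threshold in the 8÷12 keV band and rejecting empty bins, as described above:

import numpy as np

def good_time_intervals(counts, bin_width, threshold=0.29):
    """Boolean mask of 'good' time bins: 8-12 keV count rate below
    `threshold` (cts/s) and strictly above zero."""
    rate = counts / bin_width
    return (rate < threshold) & (counts > 0)

# Hypothetical light curve: 100 s bins, quiescent rate ~0.1 cts/s,
# with a soft-proton flare injected in the middle of the observation.
rng = np.random.default_rng(0)
bin_width = 100.0
t = np.arange(0, 30_000, bin_width)
rate_true = np.full_like(t, 0.1)
rate_true[120:180] = 3.0                         # flaring interval
counts = rng.poisson(rate_true * bin_width)

good = good_time_intervals(counts, bin_width)
print(f"retained exposure: {good.sum() * bin_width:.0f} s "
      f"of {t.size * bin_width:.0f} s")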
Spectral analysis and results
Thirteen observations were selected for MOS1, for a total good exposure time of 200 ks, and 14 for MOS2 (210 ks). The CXB spectrum obtained for each observation was fitted using an absorbed power law model in the energy band 1.8-8 keV, setting N_H to 4 × 10²⁰ cm⁻²; the normalization was computed at 4 keV. We computed the mean values of the CXB spectrum slope and intensity for each camera by a straight χ² minimization. Figure 4 shows the CXB spectrum extracted from an observation of the Lockman Hole field as seen by MOS1 (black) and MOS2 (red); the exposure time is ≈ 30 ks per camera, and the best-fit power law model is superimposed. The same analysis was performed separately using the raw NXB spectrum and the renormalized one. The results are shown in Table 1. The non-renormalized case yields a correct value for the photon index, while the normalization is too high by ≈ 30%; in contrast, the renormalized case yields a lower value for the normalization, but the photon index is too hard by ≈ 15%. The overall shape of the CXB spectrum has thus been correctly determined, being virtually coincident with previous determinations (see sect. 1); this means that the NXB spectrum has been properly characterized. The simple renormalization of the NXB spectrum reduces the CXB intensity by 30%, but the results suggest that a more sophisticated procedure may be required.
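As an illustration of the fitting step, the sketch below performs a χ² fit of a power law whose normalization is defined at 4 keV, on synthetic data. It is a simplified stand-in for the actual analysis (hypothetical values; the instrument response is ignored, and photoelectric absorption is neglected since it is small above 1.8 keV for N_H ≈ 4 × 10²⁰ cm⁻²):

import numpy as np
from scipy.optimize import curve_fit

def power_law(E, norm, gamma):
    """Photon spectrum norm * (E / 4 keV)^(-gamma); norm is defined at 4 keV."""
    return norm * (E / 4.0) ** (-gamma)

# Synthetic spectrum over the 1.8-8 keV fitting band (hypothetical values).
rng = np.random.default_rng(1)
E = np.linspace(1.8, 8.0, 40)
true_norm, true_gamma = 2.3, 1.4
sigma = 0.05 * power_law(E, true_norm, true_gamma)
flux = power_law(E, true_norm, true_gamma) + rng.normal(0, sigma)

# Weighted least squares is equivalent to chi^2 minimization here.
popt, pcov = curve_fit(power_law, E, flux, p0=[1.0, 1.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"norm  = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"gamma = {popt[1]:.2f} +/- {perr[1]:.2f}")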
Conclusions
Our study of the Cosmic X-ray Background is currently in progress. We obtained a characterization of the different NXB components for the MOS cameras. We showed that the standard GTI filtering to remove the flaring component possibly leaves a low-level SP NXB unrejected, and that the correlation of counts (IN FOV)/(OUT FOV) can provide a good diagnostic to identify and evaluate this effect, allowing pathological cases to be rejected. We tried to correct for SP contamination by means of a simple renormalization of the quiescent NXB spectrum. The preliminary results show that the CXB spectral shape is correct, but the intensity is too high by ≈ 30%. The renormalization approach is encouraging, but will require further study.
"year": 2002,
"sha1": "51487020f3d7e988c02048a3efabc8fed13ab20a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a875825c4483b51bc940d6110bcc1f7495c2b69a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The impact of 27-hydroxycholesterol on endometrial cancer proliferation
Endometrial cancer (EC) is the most common gynaecological malignancy. Obesity is a major risk factor for EC and is associated with elevated cholesterol. 27-hydroxycholesterol (27HC) is a cholesterol metabolite that functions as an endogenous agonist for Liver X receptor (LXR) and a selective oestrogen receptor modulator (SERM). Exposure to oestrogenic ligands increases risk of developing EC; however, the impact of 27HC on EC is unknown. Samples of stage 1 EC (n = 126) were collected from postmenopausal women undergoing hysterectomy. Expression of LXRs (NR1H3, LXRα; NR1H2, LXRβ) and enzymes required for the synthesis (CYP27A1) or breakdown (CYP7B1) of 27HC were detected in all grades of EC. Cell lines originating from well-, moderate- and poorly-differentiated ECs (Ishikawa, RL95, MFE 280 respectively) were used to assess the impact of 27HC or the LXR agonist GW3965 on proliferation or expression of a luciferase reporter gene under the control of LXR- or ER-dependent promoters (LXRE, ERE). Incubation with 27HC or GW3965 increased transcription via LXRE in Ishikawa, RL95 and MFE 280 cells (P < 0.01). 27HC selectively activated ER-dependent transcription (P < 0.001) in Ishikawa cells and promoted proliferation of both Ishikawa and RL95 cells (P < 0.001). In MFE 280 cells, 27HC did not alter proliferation but selective targeting of LXR with GW3965 significantly reduced cell proliferation (P < 0.0001). These novel results suggest that 27HC can contribute to risk of EC by promoting proliferation of endometrial cancer epithelial cells and highlight LXR as a potential therapeutic target in the treatment of advanced disease.
Introduction
Endometrial cancer (EC) is the most common gynaecological malignancy and the fourth most common cancer in women in developed countries with incidence increasing in line with rising rates of obesity (reviewed in Onstad et al. 2016). Obesity is a major modifiable risk factor for EC and is thought to contribute to increased risk of malignancy in part due to increased exposure to estrogens, which enhance the risk of aberrant proliferation within the endometrium (Sanderson et al. 2017). Obesity is also associated with an adverse metabolic profile, which is postulated to independently increase risk of EC (Trabert et al. 2015).
A recent meta-analysis supported a positive association between dietary cholesterol consumption and endometrial cancer risk (Gong et al. 2016). Notably, obesity also puts individuals at risk of developing an adverse, raised, cholesterol profile. Cholesterol metabolites such as the oxysterol 27-hydroxycholesterol (27HC) have been demonstrated to promote cancer growth and metastasis in studies on breast cancer (Nelson et al. 2013, Wu et al. 2013), providing a plausible mechanistic link between increased adiposity and EC risk. 27HC is a primary metabolite of cholesterol, synthesised by the action of sterol 27-hydroxylase (CYP27A1) and metabolised by 25-hydroxycholesterol 7-α-hydroxylase (CYP7B1). 27HC acts as an endogenous agonist for the Liver X receptor (LXR), a ligand-activated transcription factor involved in the regulation of cholesterol homeostasis. Two isoforms of LXR have been identified: LXRα (encoded by NR1H3), which is predominantly expressed in the liver, kidney and small intestine but exhibits low expression in other tissues, and LXRβ (encoded by NR1H2), which is ubiquitously expressed. Based largely on studies in breast cancer, LXRs have been proposed as a novel anti-cancer target, and the LXR-selective agonists GW3965 and T0901317 are reported to decrease proliferation of LXR-expressing breast cancer cell lines (MCF7, T47D, MDA-MB231) as well as the prostate cancer cell line LNCaP (Vedin et al. 2009, Kim et al. 2010). To the best of our knowledge, LXR expression has not been reported in human EC tissues and the impact of either 27HC or LXR agonists on the endometrium or endometrial malignancies is not known. In addition to activating LXRs, 27HC can also bind oestrogen receptors (ER) (Umetani et al. 2007) and acts as an endogenous selective oestrogen receptor modulator (SERM) (DuSell et al. 2008). 27HC has diverse impacts and its SERM activity is reported to be both tissue selective and context dependent. For example, 27HC acts as a competitive antagonist of ERs expressed in the vasculature and can antagonise E2-mediated endothelial cell migration and re-endothelialisation (Umetani et al. 2007). In contrast, in the absence of E2, 27HC is reported to act as an agonist of ERα (ESR1) to increase cell adhesion and expression of pro-inflammatory cytokines, such as tumour necrosis factor alpha (TNFA) and interleukin 6 (IL6), by endothelial cells and macrophages (Umetani et al. 2014). Notably, 27HC is also reported to increase proliferation of ERα-positive breast cancer cell lines and promotes MCF7 tumour xenograft growth in mice by stimulating ER-dependent cell proliferation (Wu et al. 2013). Given that selective LXR agonists have anti-proliferative effects (Vedin et al. 2009), these studies suggest that proliferative effects of 27HC may be mediated via ER and that relative expression of LXR or ER isoforms may define the impact of the ligand.
ER isoforms are expressed in EC tissues and ER expression changes with disease progression (Collins et al. 2009).
We have previously reported that ERα is readily detectable in both epithelial and stromal cells in well-differentiated cancers but is significantly reduced in poorly differentiated cancers. In contrast, expression of ESR2 variants (ERβ1, 2, 5) was readily detected in well, moderate and poorly differentiated stage 1 ECs (Collins et al. 2009). We therefore postulated that 27HC might have distinct effects in EC depending on the bioavailability of ER isoforms present at different stages of disease progression.
Obesity and the metabolic syndrome are both associated with an increased risk of developing pre-malignant and malignant endometrial disease (Sanderson et al. 2017), but whether the cholesterol metabolite 27HC has an impact on EC risk/progression is not known. In the current study, we assessed the expression of the enzymes required for synthesis (CYP27A1) and breakdown (CYP7B1) of 27HC and assessed expression of the cognate receptors LXRα and LXRβ in primary human stage I endometrial adenocarcinomas (n = 126) and postmenopausal endometrial controls (n = 9). The impact of 27HC and the LXR-selective agonist GW3965 on ERE- and LXRE-dependent expression of a reporter gene, as well as cellular proliferation, was assessed in three EC cell lines which phenocopy well-, moderately- and poorly differentiated stage I ECs. Our novel findings demonstrate that 27HC can alter responses in EC cells and highlight LXR as a potential therapeutic target. Taken together, our findings suggest increased exposure to 27HC may increase the risk of development and progression of EC.
Human tissue samples
Endometrial adenocarcinoma tissue was collected from postmenopausal women undergoing total abdominal hysterectomy who had been previously diagnosed with endometrioid adenocarcinoma of the endometrium; they had received no treatment before surgery (Supplementary Table 1, see section on supplementary data given at the end of this article). Written informed consent was obtained from all subjects prior to surgery, and ethical approval was granted by the Lothian Research Ethics Committee (LREC1999/6/4). Methods were carried out in accordance with NHS Lothian Tissue Governance guidelines. All ECs were confined to the uterus (International Federation of Obstetrics and Gynaecology, FIGO, stage 1, as described in Collins et al. 2009). Diagnosis of adenocarcinoma was confirmed histologically by an experienced gynaecological pathologist, and tissues were further graded as well-, moderately- or poorly differentiated.
Postmenopausal controls (n = 9) were obtained from women undergoing surgery for non-malignant gynaecological conditions. None of the women were receiving hormonal therapy. A total of 126 EC tissue samples were analysed; 3 samples per grade were assessed for immunohistochemistry, and n = 30 well-differentiated, n = 64 moderately differentiated and n = 32 poorly differentiated samples were assessed for qPCR studies. A minimum of 10 samples at each grade were analysed for each gene; detailed sample numbers are included in Supplementary Table 2. Tissue for immunohistochemistry was collected in neutral buffered formalin (NBF); samples for RNA extraction were collected in RNALater (Qiagen).
Measurement of mRNA
Isolation of mRNAs, preparation of cDNAs and analysis by qPCR were performed according to standard protocols (Bombail et al. 2010); samples were quantified by the relative standard curve method or by the comparative ΔΔCt method with CYC as internal control. Primers/probes are given in Supplementary Table 3.
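The comparative ΔΔCt calculation mentioned above is simple enough to illustrate. The following sketch (illustrative Ct values, not data from this study) computes relative expression as 2^(−ΔΔCt), normalising the target gene to the CYC internal control and then to a calibrator sample:

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the comparative 2^(-ddCt) method.
    ct_ref_* are Ct values of the internal control gene (here CYC)."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_calibrator = ct_target_calibrator - ct_ref_calibrator
    ddct = dct_sample - dct_calibrator
    return 2.0 ** (-ddct)

# Hypothetical Ct values: target gene in a cancer sample vs. a control sample.
fc = fold_change_ddct(ct_target_sample=26.1, ct_ref_sample=18.0,
                      ct_target_calibrator=27.4, ct_ref_calibrator=18.2)
print(f"fold change vs. calibrator: {fc:.2f}")  # > 1 means higher expression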
Immunohistochemistry
Single antibody immunohistochemistry using 3,3′-diaminobenzidine tetra-hydrochloride (DAB) detection was performed as described previously (Collins et al. 2009). Double immunofluorescence was carried out with antibodies directed against LXR or ERα and the proliferation marker Ki67. Details of antibodies and dilutions are provided in Supplementary Table 4. Primary antibodies were incubated at 4°C overnight. Antigen detection was performed using Tyramide signal amplification (Perkin Elmer) system followed by nuclear counterstaining with DAPI (4′,6-diamidino-2-phenyl-indole dihydrochloride). Negative controls were incubated in the absence of primary antibody but otherwise processed as above; no staining was detected in no primary controls for any of the antibodies used (not shown). Images were captured using a LSM 710 Confocal microscope (Zeiss) at ×40 magnification.
Cell cultures
Three endometrial adenocarcinoma cell lines representative of well-, moderately- or poorly differentiated cancers were used. Ishikawa cells were obtained from the European Collection of Cell Culture (ECACC no. 99040201, Wiltshire, UK). This cell line was originally derived from a well-differentiated adenocarcinoma of a 39-year-old woman (Nishida et al. 1985) and is reported to express both ERα and ERβ protein (Johnson et al. 2007). RL95-2 cells (ATCC CRL-1671; hereafter RL95) were originally derived from a Grade 2 moderately differentiated endometrial adenocarcinoma (Way et al. 1983) and are reported to express both ERα and ERβ protein (Yang et al. 2008, Li et al. 2014). MFE-280 cells (ECACC no. 98050131) were derived from a recurrent, poorly differentiated, endometrial adenocarcinoma and have low/undetectable expression of ERα and ERβ. Cells were maintained in DMEM/F12 (Sigma) supplemented with 10% FBS, 100 U penicillin, streptomycin and 0.25 µg/mL fungizone (Invitrogen) at 37°C in 5% CO₂. Media for RL95 were supplemented with 0.005 mg/mL insulin (Sigma). Cells were incubated with 27-hydroxycholesterol (27HC; Tocris Cat. No. 3907) using stocks diluted in ethanol to give final concentrations ranging from 10⁻⁵ M to 10⁻⁸ M, or GW 3965 hydrochloride (GW; Tocris Cat. No. 2474) using stocks diluted in DMSO to give final concentrations ranging from 10⁻⁵ M to 10⁻⁸ M. Some cultures were co-incubated with the antiestrogen fulvestrant (ICI 182,780; Tocris Cat. No. 1047) diluted in DMSO at a final concentration of 10⁻⁶ M. Appropriate vehicle control incubations were included in all studies. All cell lines were authenticated using the Promega PowerPlex 21 system (Eurofins Genomics, Ebersberg, Germany).
Reporter assays
An adenoviral vector containing a 3× ERE-tk-luciferase reporter gene was prepared as described previously (Collins et al. 2009). Cells were cultured in DMEM without phenol red and containing charcoal stripped foetal calf serum (CSFCS) for 24 h before being infected with Ad-ERE-Luc at a MOI of 25. Activation of LXR-dependent signal transduction was assessed according to manufacturer's instructions using reagents from the Cignal LXR Reporter Kit, which includes positive and negative controls as well as a luciferase reporter gene under the control of tandem repeats of the LXR transcriptional response element (LXRE) (Qiagen, CCS-0041L).
Cells were treated for 24 h and luciferase activities were determined using 'Bright Glo' reagents (Promega). Luminescence was measured using Fluostar Microplate Reader (BMG labtech) and fold-change in luciferase activity was calculated relative to vehicle control for each treatment.
Proliferation assays
The impact of treatments on cell proliferation was assessed using the CyQUANT Direct Cell Proliferation Assay (Thermo Fisher, C35011) according to manufacturer's instructions, and nuclear fluorescence was measured using a Novostar Microplate Reader (BMG labtech). For each cell line investigated, cell number was quantified using a standard curve of known cell numbers, and fold-change in cell number was calculated relative to vehicle control for each treatment.
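Quantifying cell number against a standard curve, as described above, amounts to a simple linear calibration. The sketch below (hypothetical fluorescence values, not study data) fits a line to known cell-number standards and converts sample readings to cell numbers and to fold change relative to vehicle control:

import numpy as np

# Hypothetical standard curve: nuclear fluorescence for known cell numbers.
cells_std = np.array([1_000, 5_000, 10_000, 20_000, 40_000])
fluor_std = np.array([220.0, 1_050.0, 2_100.0, 4_150.0, 8_300.0])

# Linear calibration: fluorescence = a * cells + b
a, b = np.polyfit(cells_std, fluor_std, deg=1)

def to_cell_number(fluorescence):
    return (fluorescence - b) / a

vehicle = to_cell_number(2_050.0)
treated = to_cell_number(3_120.0)
print(f"fold change vs. vehicle: {treated / vehicle:.2f}")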
Statistical analysis
Statistical analysis was performed using GraphPad Prism. One-way ANOVA was used to determine significance between treatments for data that were normally distributed. Non-parametric testing was utilised where sample sizes were insufficient to confirm normality of the data distribution; the Kruskal-Wallis test was used to assess differences between treatments. Where data were analysed as fold-change, significance was tested using a one-sample t test against a theoretical mean of 1. The criterion for significance was P < 0.05. All data are presented as mean ± s.e.m.
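The fold-change comparison described above tests measurements against a theoretical mean of 1 (no change). A minimal sketch with scipy (illustrative values, not study data):

import numpy as np
from scipy import stats

# Hypothetical fold changes in luciferase activity relative to vehicle control.
fold_change = np.array([1.8, 2.1, 1.6, 2.4, 1.9, 2.2])

# One-sample t test against the theoretical mean of 1 (no change).
t_stat, p_value = stats.ttest_1samp(fold_change, popmean=1.0)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 -> significant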
Results
Enzymes that regulate bioavailability of 27-hydroxycholesterol and its cognate receptor LXR are expressed in EC

Messenger RNAs encoded by CYP7B1 and CYP27A1 were detected in all cancer grades (Fig. 1A and B); expression of CYP7B1 was significantly lower in poorly differentiated cancers compared to moderately differentiated cancers (P < 0.05). Relative expression of CYP27A1 tended to be higher in poorly differentiated cancers, but this was not significant. We next assessed relative expression of mRNAs encoding the LXR receptors known to bind 27HC: NR1H3 (LXRα) and NR1H2 (LXRβ) were detected in all cancer grades (Fig. 1C and D). Expression of NR1H3 was significantly lower in moderately differentiated cancers compared to postmenopausal controls (P < 0.01). Expression of NR1H2 did not change between sample groups.
Figure 1: 27HC signalling pathway is expressed in endometrial cancer and altered with disease severity. The expression of CYP7B1, CYP27A1, NR1H3 (LXRα) and NR1H2 (LXRβ) relative to internal control gene CYC was assessed by qPCR in postmenopausal control endometrium (PM Ctrl) and in endometrial cancer tissue homogenates from well-, moderately- and poorly differentiated endometrial adenocarcinomas. Relative expression of mRNAs encoding CYP7B1 (A) was decreased in poorly differentiated cancers compared to moderately differentiated cancers, but CYP27A1 was not significantly different (B). Relative expression of mRNAs encoding NR1H3 (C; LXRα) was significantly decreased in moderately differentiated cancers compared to postmenopausal control tissues, whilst NR1H2 (LXRβ) was not significantly different (D). *P < 0.05, **P < 0.01; Kruskal-Wallis test with multiple comparisons. PM, n = 9; Well, n = 12-30; Mod, n = 42-64; Poor, n = 23-32. All data are presented as mean ± s.e.m.

Immunolocalisation of LXR and the proliferation marker Ki67 in EC tissue sections

The expression of LXR in EC tissue sections was assessed by immunohistochemistry using an antibody that detected both isoforms of LXR (mouse anti-LXR; sc-271064). LXR was readily detected in well-, moderately- or poorly differentiated cancers and was immunolocalised to both stromal and epithelial cells (Supplementary Fig. 1). To assess if LXR expression was associated with cell proliferation within EC tissue, we performed double immunofluorescence staining for both LXR and the proliferation marker Ki67 (Fig. 2). In well-differentiated cancers (Fig. 2A), nuclear immunoexpression of Ki67 (red staining) was detected which co-localised (yellow arrows) with LXR expression (green staining; note that single channel views show that the intensity of LXR staining varied between cells). Whilst careful evaluation of single channel views confirmed that the majority of LXR-positive cells were also immunopositive for Ki67, some cells were Ki67 negative (white arrows). In contrast, in moderately differentiated cancers (Fig. 2B), both markers were detected but few cells appeared to co-localise (yellow arrows), although LXR-positive cells (white arrows) were found in close association with proliferating cells. In poorly differentiated cancers (Fig. 2C), few cells expressed both markers. Ki67-positive cells were clustered in regions with limited LXR expression and no co-expression of LXR and Ki67 was detected. LXR+ Ki67− cells (white arrows) were detected close to Ki67+ cells. We also assessed the expression of ERα and Ki67 in EC tissues (Supplementary Fig. 2) as this receptor is implicated in the regulation of proliferation in normal endometrium (Lubahn et al. 1993, Frasor et al. 2003). Consistent with our previous study, ERα was not detected in the poorly differentiated cancers (Collins et al. 2009) and immunoexpression of Ki67 was clearly independent of ERα, with an increase in abundance of positive nuclei in poor (sample codes 910/2178) as compared to well or moderately differentiated tissue, where co-localisation of ERα and Ki67 was readily detected.
27HC activates LXRE- and ERE-dependent transcription in endometrial epithelial cancer cells and alters proliferation of EC cells
Having demonstrated expression of enzymes and receptors required for 27HC signalling, we extended our observational study by exploring the impact of the ligand on endometrial epithelial cancer cell lines chosen to model well-, moderately- or poorly differentiated stage I cancers: Ishikawa, RL95 and MFE 280. Protein expression of both LXR isoforms was confirmed by western blot in all cell lines studied (Supplementary Fig. 3A and B).
We assessed the mRNA expression of LXRs in these cell lines and found that their expression phenocopied that found in tissue samples (Supplementary Fig. 3). NR1H3 mRNA expression was significantly decreased in RL95 (moderately differentiated) cells compared to MFE 280 (poorly differentiated; P < 0.01; Supplementary Fig. 3C). Consistent with tissue mRNA expression patterns, NR1H2 was not different between cell lines (Supplementary Fig. 3D). Messenger RNAs encoded by both ER genes, ERα (ESR1) and ERβ (ESR2; ERβ1-specific primers), were detected in all of the cell lines (Supplementary Fig. 4). ESR1 mRNAs were significantly reduced in RL95 and MFE280 compared to Ishikawa cells (Supplementary Fig. 4A), consistent with patterns of expression in intact tissue (Supplementary Fig. 2). ESR2 mRNA was significantly reduced in MFE280 cells compared to Ishikawa (Supplementary Fig. 4B). As 27HC is both an endogenous agonist for LXR and a SERM, the impact of 27HC on LXRE- and ERE-dependent transcription was investigated in the EC cell lines. 27HC significantly increased LXRE-dependent transcription in a dose-dependent manner in all 3 cell lines and was maximally stimulated by 10⁻⁵ M 27HC (Fig. 3A, B and C). In contrast, 27HC only stimulated ERE-dependent transcription in Ishikawa cells (Fig. 3D) at 10⁻⁸ M (P < 0.01) and 10⁻⁷ M (P < 0.0001). The impact of 27HC was abrogated by co-incubation with the anti-oestrogen fulvestrant (ICI 182,780), consistent with ER dependence. In contrast to Ishikawa cells, 27HC had little impact on ERE-dependent transcription in RL95 (Fig. 3E) and MFE280 cells (Fig. 3F). As 27HC could activate both ERE- and LXRE-promoters, we assessed its impact on cell proliferation (Fig. 3G, H and I). 27HC induced proliferation of Ishikawa cells at concentrations ranging from 10⁻⁸ M to 10⁻⁶ M (P < 0.01), but this was inhibited at the highest concentration (10⁻⁵ M, P < 0.0001). 27HC significantly increased proliferation in RL95 cells at concentrations of 10⁻⁷ M (P < 0.001) or greater. In contrast, 27HC did not alter proliferation of MFE 280 cells at any of the concentrations investigated. Neither the RL95 nor the MFE280 cell line expressed CYP7B1 (Supplementary Fig. 3E and F), precluding the potential for in vitro metabolism limiting cell responses to 27HC in these cell lines.
Targeting LXR with the synthetic agonist GW3965 activates LXRE-dependent transcription and alters cell proliferation in a cell-specific manner
Incubation of cells with the LXR-selective agonist GW3965 significantly increased LXRE-dependent transcription in a dose-dependent manner (Fig. 4), consistent with the expression of LXRs in the EC cell lines (Supplementary Fig. 3). In contrast to 27HC, GW3965 significantly and robustly increased LXRE-dependent transcription at concentrations ≥10−8 M in Ishikawa (Fig. 4A) and RL95 (Fig. 4B) cells and ≥10−7 M in MFE280 cells (Fig. 4C). Although LXR reporter responses were similar in the different cell lines, proliferation responses were strikingly different. In Ishikawa cells, treatment with GW3965 at concentrations of 10−8 M (P < 0.01) and 10−5 M (P < 0.01) significantly increased proliferation (Fig. 4D). In contrast, GW3965 significantly and robustly decreased cell proliferation at all concentrations investigated in both RL95 (Fig. 4E) and MFE280 cells (Fig. 4F).
Discussion
To date, no study has assessed the association between the cholesterol metabolite 27HC and EC. EC incidence rates have increased by ~50% since the early 1990s, and approximately 57% of endometrial cancers in the United States have been attributed to being overweight or obese (Cancer Research UK; http://www.cancerresearchuk.org, accessed November 2017; Calle & Kaaks 2004). Although increased exposure to adipose-derived estrogens is believed to increase aberrant proliferation within the endometrium (Zhao et al. 2016), recent evidence supports an independent role for obesity-associated metabolic factors in modulating EC risk. Notably, both elevated triglycerides and increased dietary cholesterol consumption are reported to be associated with increased EC risk (Lindemann et al. 2009, Gong et al. 2016). Importantly, concentrations of the cholesterol metabolite 27HC are increased in postmenopausal women (Burkard et al. 2007) and are associated with increased risk of breast cancer. Several studies have identified that 27HC has an adverse impact on breast cancer (Nelson et al. 2013, Wu et al. 2013), but whether 27HC can affect EC has not been investigated previously.
In light of these studies, we hypothesised that 27HC signalling could contribute to the aetiology of endometrial cancer and influence disease progression, and we investigated this using both archival human tissue as well as cell lines that are derived from different grades of EC. We obtained new evidence for expression of the enzymes required for both the synthesis (CYP27A1) and breakdown (CYP7B1) of 27HC. As concentrations of CYP7B1 mRNAs were significantly decreased in poorly compared to moderately differentiated cancers, and expression of CYP27A1 did not change significantly across EC grades, we believe this would favour increased bioavailability of 27HC with increasing grade. These findings appear to parallel those reported for ER+ breast cancer, where decreased expression of CYP7B1 and increased CYP27A1 have been reported in tumours compared to normal breast tissues (Wu et al. 2013). Furthermore, we found that the endogenous receptor for 27HC, LXR, was immunolocalised to stage 1 cancers and was expressed throughout the tissue, localising to the nuclei of both stromal and epithelial cells.

Figure 2 legend: Expression of LXR and the proliferation marker Ki67 in endometrial cancer. The expression of LXR (antibody identified both isoforms) and the proliferation marker Ki67 was assessed by immunohistochemistry in endometrial cancer tissue sections. In well-differentiated cancers (A), LXR was expressed throughout the tissue and localised to the nuclei of both stromal and epithelial cells (green staining). Nuclear immunoexpression of Ki67 (red staining) was detected and co-localised with LXR expression (yellow arrows), although some LXR-positive cells did not co-express Ki67 (white arrows). In moderately differentiated cancers (B), both markers were detected but did not appear to co-localise; only few cells expressed both LXR and Ki67 (yellow arrows). Most LXR-positive cells did not co-express Ki67 (white arrows). This was also true of poorly differentiated cancers (C); few cells expressed both LXR and Ki67 (yellow arrows), although LXR-positive cells were found in close association with proliferating cells (white arrows). Images representative of at least 3 different patients per cancer grade. Nuclear counterstain DAPI (grey). All scale bars 50 µm.
Figure 3 legend: 27HC activates LXRE- and ERE-dependent transcription in endometrial epithelial cancer cells and alters proliferation. The cholesterol metabolite 27-hydroxycholesterol (27HC) is the endogenous agonist for LXR and is also classified as a selective oestrogen receptor modulator. The impact of 27HC on LXRE- (A, B and C) and ERE-dependent (D, E and F) transcription was investigated by luciferase reporter assay in endometrial cancer cell lines Ishikawa, RL95 and MFE280. 27HC significantly increased LXRE-dependent transcription in a dose-dependent manner in each endometrial cancer cell line. 27HC stimulated ERE-dependent transcription only at lower concentrations and was significantly increased by 10−8 M 27HC (P < 0.01) and maximally stimulated by 10−7 M 27HC (P < 0.0001). The 27HC effect was abrogated by co-incubation with the anti-oestrogen fulvestrant (ICI 182,780; ICI) at all concentrations of 27HC (D). 27HC did not increase ERE-dependent transcription in RL95 (E) and was only increased by 10−5 M 27HC (P < 0.05) in MFE280 cells (F). Cell proliferation was assessed by CyQuant direct proliferation assay in each cell line (G, H and I). Proliferation of Ishikawa cells was increased by 10−8 M (P < 0.01), 10−7 M (P < 0.01) and 10−6 M (P < 0.01) 27HC but decreased by 10−5 M 27HC (P < 0.0001; G). Proliferation of RL95 cells was increased by 10−7 M (P < 0.001), 10−6 M (P < 0.01) and 10−5 M (P < 0.001) 27HC (H). 27HC did not affect proliferation in MFE280 cells (I). *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001. One-sample t test against a theoretical mean of 1. All data are presented as mean ± s.e.m.

We sought to establish if 27HC could alter responses in EC cells by acting via its cognate receptor, LXR, or via estrogen receptors, which are known to regulate endometrial proliferation. 27HC activated LXR-dependent transcription in all cell lines tested. In contrast, we found that 27HC activated ERE-dependent reporter gene expression in well-differentiated cancer cells (Ishikawa; ERα+ ERβ+) but not in those from moderately (RL95; ERα-low ERβ+) or poorly differentiated cancers (MFE280; ERα-low ERβ-low). However, 27HC increased proliferation of both Ishikawa and RL95 cells but not MFE280 cells, consistent with reported ER expression in these cell lines (Johnson et al. 2007, Yang et al. 2008, Li et al. 2014). Our immunohistochemistry analysis (Supplementary Fig. 2) supported these in vitro findings. We found that the proliferation marker Ki67 co-localised with ERα in well- and moderately differentiated cancers, consistent with a key role for this receptor in mediating endometrial epithelial cell proliferation (Lubahn et al. 1993, Frasor et al. 2003). In poorly differentiated cancers, ERα was not detected, consistent with previous reports (Collins et al. 2009). It has been reported that 27HC, acting as a SERM, can impact on ERα- or ERβ1-dependent regulation of cell function (He & Nelson 2017), and the oestrogenic effects of 27HC could therefore be mediated via either ER isoform in EC cells. In endometrial endothelial cells, which express ERβ but not ERα, oestrogenic effects are mediated via ERβ tethered to Sp1 and not via direct binding to ERE (Greaves et al. 2013). Furthermore, it has been reported that 27HC promotes proliferation of ERα-positive LNCaP prostate cancer cells via ERβ (Lau et al. 2000, Raza et al. 2017), which may account for the apparent discrepancy between ERE reporter assay and cell proliferation responses in RL95 cells observed in the current study. Taken together, these findings reveal the potential for 27HC generated within the EC tissue microenvironment to influence ER-dependent transcription and proliferation via ERs expressed in early-grade stage 1 EC. Although the association between ERs and endometrial proliferation is well recognised, there are limited data investigating the role of LXR in this process. Expression of LXRα and LXRβ mRNA has been previously reported in endometrium and myometrium of mice (Mouzat et al. 2007), and 27HC is reported to increase mouse uterine weight, consistent with a uterotrophic action; however, whether this was mediated via ER or LXR was not investigated (Wu et al. 2013). In mice, targeted ablation of the receptor subtypes revealed that Lxrα−/− but not Lxrβ−/− females had reduced endometrial areas compared to wild-type mice, consistent with a role for LXRα in promoting endometrial growth/proliferation in that species (Mouzat et al. 2007). In the current study, we found that LXR co-localised with the proliferation marker Ki67 in well-differentiated but not moderately or poorly differentiated EC tissues. In vitro assays verified this finding, as the synthetic LXR agonist GW3965 had a cell-selective impact on the EC cell lines. In Ishikawa cells GW3965 increased proliferation, whereas in RL95 and MFE280 cells equimolar concentrations of agonist blocked proliferation. Given that LXR expression was detected in all grades of EC, this may suggest LXR could be an effective therapeutic target in some ECs, albeit in a grade-dependent context.
Indeed, GW3965 is reported to abrogate E2-mediated increases in MCF7 breast cancer cell proliferation and has been proposed as an anti-proliferative ligand in this context (Vedin et al. 2009). LXR classically acts as a heterodimeric partner of retinoid X receptor (RXR). RXR is expressed in the nuclei of endometrial epithelial cells throughout the menstrual cycle (Fukunaka et al. 2001) as well as in EC tissues (Nickkho-Amiry et al. 2012). Interestingly, LXR-RXR functions as a 'permissive' heterodimer: binding of either an LXR agonist or the RXR agonist 9-cis retinoic acid activates transcription, whilst agonism of both dimer partners has an additive effect on activation. Assessment of RXR isoforms in the cell lines used in the current study demonstrated differential expression of RXRs in Ishikawa, RL95 and MFE280 cells, which may account for the distinct responses of these cell lines to GW3965 treatment (Supplementary Fig. 5). NR2B1 (RXRα) mRNA expression was greatest in RL95 cells, whilst NR2B2 (RXRβ) was detected in all cell lines. Notably, mRNA expression of NR2B3 (RXRγ) was not detected in RL95 cells but was abundant in MFE280 cells. Whether changes in the constitution of the receptor isoforms that contribute to the LXR:RXR heterodimer affect responses requires further investigation; however, previous studies demonstrate that targeting retinoid signalling may affect proliferation of EC cells. Notably, retinoic acid (RA) signalling via retinoic acid receptor (RAR) and RXR is reported to inhibit Ishikawa cell proliferation by inducing cell cycle arrest (Cheng et al. 2011), and fenretinide, a synthetic derivative of RA, induced apoptosis of Ishikawa cells (Mittal et al. 2014). These results suggest that targeting LXR-dependent signalling with LXR and/or RXR agonists could inhibit proliferation in EC and cancer progression.
Changes in the local inflammatory environment that occur during development and progression of EC may also increase exposure to 27HC, owing to infiltration of inflammatory cells. We have previously demonstrated that infiltration of immune cells is increased in EC tissues compared to controls; notably, the numbers of macrophages, neutrophils and dendritic cells were significantly increased in EC tissues (Wallace et al. 2010), consistent with 27HC-dependent increases in migration of bone marrow-derived CD11b+ cells reported in in vitro assays (Raccosta et al. 2013). In addition, 27HC increases secretion of CCL2 from macrophages, which enhances recruitment of monocytes (Kim et al. 2013), and can also upregulate ER-dependent expression of proinflammatory genes (Umetani et al. 2014). Notably, as Cyp27a1 is reported to be abundant in macrophages, these cells may also contribute to an increase in 27HC within the tumour microenvironment. In support of this idea, increased 27HC concentrations have been reported in breast cancer tumours (Wu et al. 2013), and increased concentrations of cholesterol have been reported in tumours of various cancer types, although they have not been directly measured in EC. 27HC can also promote secretion of TNFA and IL6 from macrophages, and TNFA is reported to increase proliferation of human endometrial glandular epithelial cells (Nair et al. 2013). Thus, although in the current study we only investigated the direct impact of 27HC on proliferation of EC epithelial cells, 27HC may also exacerbate changes within the tissue microenvironment by modulating inflammatory responses, and this merits further investigation in animal models.
Summary
In the current study, we provide the first evidence to support a mechanistic link between exposure to elevated cholesterol, biosynthesis of 27HC and EC. Analysis of human stage 1 endometrial adenocarcinomas revealed that expression of the key metabolising enzymes of 27HC was altered in EC, consistent with increased exposure to 27HC as EC progresses from well to poorly differentiated. Although survival rates for EC are high, incidence rates are increasing in line with rates of obesity, and a rising incidence in pre- and peri-menopausal women creates unique therapeutic challenges. Based on our novel findings, we propose that exposure to 27HC may influence disease development/progression by activating ER-dependent pathways to increase epithelial cell proliferation. These results suggest strategies that seek to limit exposure to 27HC through lifestyle modification, lipid-lowering drugs such as statins, or novel therapeutics that target 27HC synthesis (CYP27A1 inhibitors) may be effective in reducing endometrial proliferation in women at increased risk of developing EC. Taken together, our novel findings suggest that altered cholesterol metabolism, and aberrant exposure to 27HC, may contribute to the development and/or progression of EC. | 2018-04-03T02:05:49.206Z | 2018-01-25T00:00:00.000 | {
"year": 2018,
"sha1": "9ed6dd07166d5be228f58a95b0ecd253b78bbfd6",
"oa_license": "CCBY",
"oa_url": "https://erc.bioscientifica.com/downloadpdf/journals/erc/25/4/ERC-17-0449.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "34db0a07827c6c5c9bcadabd9fbece2d6a8ee473",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
88099311 | pes2o/s2orc | v3-fos-license | The identity of Eriosema nanum
An examination of available evidence leads to the conclusion that Eriosema nanum Burtt Davy must be regarded as a synonym of E. ellipticifolium Schinz. The relationship between E. ellipticifolium and E. uniflorum Burtt Davy is discussed.
INTRODUCTION
Eriosema nanum Burtt Davy, together with E. populifolium Harv. and E. angustifolium Harv., are among the rarest of the Fabaceae in South Africa. Recent intensive field studies of Eriosema (Stirton, 1975) showed the widespread occurrence of hybridization and of morphological variation in the genus in South Africa. This paper assesses various aspects of phenotypic plasticity and interspecific hybridization as they affect the identity and delimitation of E. nanum.

HISTORY

Schinz, in Vjschr. Naturf. Ges. Zürich 66: 229, 1921, based E. ellipticifolium on Junod 1411 from Shilouvane and Junod 2534 (Fig. 1) from Marovunge, both in the Eastern Transvaal. Owing to the absence of ripe or almost ripe fruits, he was unable to establish clearly whether he was dealing with a Rhynchosia or an Eriosema species ("Solange keine reifen oder nahezu reifen Früchte vorliegen, ist es vorläufig ein aussichtsloses Bemühen, feststellen zu wollen, ob es sich um eine Rhynchosia- oder eine Eriosema-Art handelt; sicher ist, dass sie sich mit keiner der mir bekannten Arten dieser oder jener Gattung deckt"; in translation: so long as no ripe or nearly ripe fruits are available it is for the present a hopeless endeavour to determine whether it is a Rhynchosia or an Eriosema species; what is certain is that it matches none of the species of either genus known to me). He commented, however, that the plant was reminiscent of E. salignum, but differed in its lower surface indumentum. The name E. ellipticifolium has never been taken up.
In 1932 Burtt Davy treated E. ellipticifolium very briefly under the heading "species not seen". In this work, however, he published Eriosema nanum based on Galpin 1139 from the summit of the Saddleback Mountain in the Barberton area (Fig. 2). As its allies he noted E. rufescens Schinz and E. burkei Benth., and commented also that it was perhaps closest to E. uniflorum Burtt Davy. A study of type specimens leaves no doubt that E. nanum is synonymous with E. ellipticifolium. Population studies showed that the wide leaf variation encountered is the result of a cline of decreasing width in a northerly direction (Stirton, 1975). The type localities of E. ellipticifolium occur in the northern extremities of this range. It is unfortunate that the more descriptive name nanum must be superseded.
FIELD OBSERVATIONS
Plants growing in full sun tended to be shorter and more compact than plants of the same species found growing under different intensities of shade. In Fig. 3, plants of E. ellipticifolium that were found growing in short open grassland (plant 1) are contrasted with those growing along a road bank in a pine forest (plant 2). Plant 1 can be seen to be smaller and more compact than plant 2. Other obvious differences, not all observable from the photographs, are the longer inflorescences, the thinner leaves and the less prominent secondary and tertiary venation in plant 2. Plants that grew through a thick canopy of grass had the same facies as the plants that grew in or along forest margins. These features can be clearly seen on Stirton 1431. These various expressions of morphological form could possibly be the result of a competition balance of mineral nutrients, degree of light intensity, extent of water availability, or a combination of these and other factors. The plants shown in Fig. 3 were chosen as representative of the range of variation within the species after a critical inspection of their populations.
Plants that grew in burnt veld (Fig. 3: 3) had a more stunted form than plants of the same species which grew in adjacent unburnt veld (Fig. 3: 4). These plants were collected on the same day. The various populations were observed subsequently, and it was found that once the new grass had come away the "dwarf plants" began to grow lank, so that by the end of the season they were similar to, but still shorter than, plants collected in the adjacent tall unburnt grassveld. The plants that grew in the burnt veld had almost completed their flowering period by the time the plants which grew in the unburnt veld had begun to flower.
The observations in open veld and in forest, and in burnt and unburnt veld, were found to be consistent throughout the range of distribution of E. ellipticifolium. The results lead me to suspect that once the type specimen of E. uniflorum Burtt Davy is compared, it will probably prove to have a facies similar to plant 2 in Fig. 3 and hence be synonymous with E. ellipticifolium. Compton (1974, pers. comm.) has indicated that he intends to incorporate E. uniflorum under E. nanum in his revised Flora of Swaziland. Judgement is here reserved until the type is traced, since the description of E. uniflorum has features linking E. ellipticifolium and Stirton 1482 (E. sp. nov.) allied to E. cordatum. On the Galpin 1139 sheet from the Bolus Herbarium there is a specimen, Bolus 11854, that bears three Bolus manuscript names: uniflorum, pumilum and cryptantha. Appended to the specimen is a note, apparently in N. E. Brown's handwriting, that says "We afterwards decided that this was probably a stunted form of E. burkei". The specimen is E. ellipticifolium. Field studies (Stirton, 1975), however, indicated that stunted forms of E. pauciflorum Klotzsch, rather than E. burkei, are deceptively similar to E. ellipticifolium.
The most noticeable variation observed in the field was the range in number of flowers per inflorescence. Plants of the same population were found to have inflorescences bearing from one to ten flowers.
Although most specimens have been reasonably easy to place in this species, there remain a few problems. Hybridization cannot be ruled out as the cause of some complexing difficulties encountered on a recent field trip. Two populations in particular require detailed study, as in both areas E. ellipticifolium grows sympatrically with the rare E. angustifolium Burtt Davy and an undescribed taxon. A feature of these populations is their marked geographical separation, viz. Magoebaskloof (N. Transvaal) and Havelock (Swaziland), and also the wide range of "intermediates" found. Flower colour, pubescence and the shape of stipules have been shown in preliminary hybridization studies (Stirton, 1975) on other species to be reliable indicators of putative hybrids. These three characters varied markedly in "intermediate" plants in both populations. E. angustifolium has a striking, stiff, rufous, patent indumentum, linear leaves, erect habit, and yellow flowers, and is not readily confused with either E. ellipticifolium or Stirton 1445 (E. sp.). The latter occurs from the N. Transvaal southwards to Swaziland and, although fairly common in isolated areas, has not been previously collected. This multi-stemmed plant is a pink and yellow flowered perennial with prostrate habit, and small unifoliolate leaves. Its closest affinity is E. ellipticifolium. The intermediate plants at Magoebaskloof seem to be hybrids between E. angustifolium and E. ellipticifolium (Stirton 1442), and between Stirton 1445 (E. sp.) and E. ellipticifolium (Stirton 1446). Further field studies are necessary before all three putative parents are clearly delimited. The inter-relationships of these three species remain obscure.
Three specimens named E. ellipticifolium in this study are doubtful: these are Jacobsen 1587, Coetzer 150, and Vahrmeijer 2433. The last two of these, although very close to E. ellipticifolium, differ in pubescence and their very acute leaves. All these will no doubt be easier to place once all the montane species of Eriosema have been studied.
Eriosema ellipticifolium
Eriosema ellipticifolium is restricted to isolated mountain "islands" in Swaziland, the Transvaal and Natal (Fig. 5). This species exhibits a disjunct distribution over a wide area and over diverging ecological conditions and veld types. It is found between 1 600 and 2 500 m, growing predominantly amongst rocks in short grassland on dry ridges with a northwest aspect. A number of populations have been located on forest margins and along forest roads at the Witklip, Mariepskop and Woodbush Forest Reserves. Junod 2534 is chosen as lectotype, since the quantitative data given in the protologue indicate that Schinz must have based most of his description on the two specimens on this sheet.
This little-known species has proved to be more common and widespread than was previously accepted. The available herbarium material had been placed under no less than six species. It had been most commonly confused with Eriosema cordatum var. cordatum, but is readily separated from this and all other species by its very long calyx lobes, which almost equal the length of the flower.
Despite its widespread distribution (Fig. 5), this species is remarkably uniform and distinctive in the field. Its poor representation in herbaria is probably attributable to its dwarf habit (Fig. 6) and not to its scarcity in the field as, during a recent trip to the eastern Transvaal, it was found to be locally common throughout the moist highlands.
I thank Dr. K. D. Gordon-Gray for her guidance and advice during the initial stages of this study and for kindly providing funds for Fig. 4. I am grateful to the Director, Botanisches Museum der Universität Zürich, for the loan of the types of E. ellipticifolium Schinz.

UITTREKSEL

An examination of all the available data indicates that Eriosema nanum Burtt Davy is a synonym of E. ellipticifolium Schinz. The relationships between E. ellipticifolium and E. uniflorum Burtt Davy are also discussed. | 2018-12-06T23:21:48.799Z | 1977-11-11T00:00:00.000 | {
"year": 1977,
"sha1": "b62b7f7d8f134e309b1a37c6db2ff956725eb2e3",
"oa_license": "CCBY",
"oa_url": "https://journals.abcjournal.aosis.co.za/index.php/abc/article/download/1396/1354",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b62b7f7d8f134e309b1a37c6db2ff956725eb2e3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
28521999 | pes2o/s2orc | v3-fos-license | The bli-4 locus of Caenorhabditis elegans encodes structurally distinct kex2/subtilisin-like endoproteases essential for early development and adult morphology.
Many secreted proteins are excised from inactive proproteins by cleavage at pairs of basic residues. Recent studies have identified several serine endoproteases that catalyze this cleavage in the secretory pathways of yeast and metazoans. These enzymes belong to the kex2/subtilisin-like family of proprotein convertases. In this paper we describe the molecular characterization of the bli-4 gene from Caenorhabditis elegans, which was shown previously by genetic analysis of lethal mutants to be essential for the normal development of this organism. Sequencing of cDNA and genomic clones has revealed that bli-4 encodes gene products related to the kex2/subtilisin-like family of proprotein convertases. Analysis of bli-4 cDNAs has predicted four protein products, which we have designated blisterases A, B, C, and D. These protein products share a common amino terminus, but differ at the carboxyl termini, and are most likely produced from alternatively spliced transcripts. We have determined the molecular lesions for three bli-4 alleles (h199, h1010, and q508) that result in developmental arrest during late embryogenesis. In each case, the molecular lesions are within exons common to all of the BLI-4 isoforms. The original defining allele of bli-4, e937, is completely viable yet exhibits blistering of the adult cuticle. Molecular analysis of this allele revealed a deletion that removes exon 13, which is unique to blisterase A. No RNA transcript corresponding to exon 13 is detectable in the blistered mutants. These findings suggest that blisterase A is required for the normal function of the adult cuticle.(ABSTRACT TRUNCATED AT 250 WORDS)
Mammalian members of the family of convertases include PC1/PC3 (Seidah et al. 1991; Smeekens et al. 1991), PC2 (Seidah et al. 1990; Smeekens and Steiner 1990), PC4 (Nakayama et al. 1992), PC5/PC6 (Lusson et al. 1993; Nakagawa et al. 1993), PACE4 (Kiefer et al. 1991), and human furin (hFURIN; Roebroek et al. 1986; Fuller et al. 1989b; van den Ouweland et al. 1990; Wise et al. 1990). Defining the biological significance of individual convertases has been stymied by the lack of evidence demonstrating substrate specificity. For example, when removed from their biological context, many of the endoproteases are able to process the same substrates, raising the question whether such a functional redundancy exists among the family members in vivo. Some insight into defining functionality has been provided by examining the expression patterns and localization of the individual convertases. From such analyses it has been shown that substrate specificity can be influenced both by restricting expression to particular tissues and by compartmentalization of the individual enzymes to specific intracellular locations (Seidah and Chrétien 1992).
Restricted expression is exhibited by PC1/PC3 and PC2 in endocrine and neuroendocrine tissues (Seidah et al. 1990, 1991; Smeekens and Steiner 1990; Smeekens et al. 1991), whereas PC4 is restricted to the testis (Nakayama et al. 1992). These observations imply a distinct functionality for the three convertases within the regulated secretory pathway. In contrast, both hFURIN and PACE4 are expressed in a broad range of tissues, suggesting that these convertases are required for processing activity within the constitutive secretory pathway (Roebroek et al. 1986; Bresnahan et al. 1990; van den Ouweland et al. 1990; Van de Ven et al. 1990; Kiefer et al. 1991). PC5/PC6, like hFURIN and PACE4, exhibits a widespread tissue distribution; however, levels of expression are predominant in intestinal tissue (Lusson et al. 1993; Nakagawa et al. 1993). This may suggest that PC5/PC6 is active in both the constitutive and regulated secretory pathways. Clearly, an added complication in defining substrate specificity arises within cells in which regulated and constitutive pathways are active, as both types of convertase will be produced.
Compartmentalization may also influence the control of proprotein convertase substrate specificity and activity. Individual enzymes are sequestered in separate intracellular compartments, and localization of the individual processing enzymes appears to be a function of the structural differences between each family member. This has been demonstrated clearly for hFURIN, which is concentrated in the trans-Golgi network (TGN) (Molloy et al. 1994). hFURIN, like all members of the kex2/subtilisin-like family, is synthesized as an inactive zymogen (Leduc et al. 1992; for review, see Seidah and Chrétien 1992). Exit from the endoplasmic reticulum and transport to the TGN appears to coincide with activation of the catalytic function of the enzyme. Furthermore, the intracellular distribution of hFURIN is dynamic: the enzyme is cycled to and from the TGN by way of the cell surface within coated vesicles. At least in part, the trafficking signals required for cycling are encoded within the cytoplasmic tail of hFURIN (Molloy et al. 1994).
The identification of kex2/subtilisin-like proprotein convertases in simple animal systems that are amenable to classical and molecular genetic manipulation provides an alternate approach to examining the role of these important enzymes. Recently, genes that encode kex2/subtilisin-like endoproteases have been isolated from Drosophila melanogaster and the nematode Caenorhabditis elegans. Two genes isolated from Drosophila, called Dfur-1 (Roebroek et al. 1992, 1994; also called dKLIP-1, Hayflick et al. 1992) and Dfur-2 (Roebroek et al. 1992), encode convertases with sequence similarity to hFURIN. Genes isolated from C. elegans include CELPC2, which shows sequence similarity to PC2 (Gómez-Saladin et al. 1994), and an additional gene identified by the genome sequencing group that most closely resembles hFURIN (Waterston et al. 1992; C. Thacker and A.M. Rose, unpubl.). At present, no known mutations within these genes have been reported.
Previously, this laboratory described the genetic analysis of mutants within the bli-4 gene of C. elegans. In this report we present the characterization of the bli-4 gene and show that it encodes serine endoproteases that belong to the kex2/subtilisin-like proprotein convertase family. bli-4 was first identified by a recessive mutant that results in the formation of fluid-filled separations, or blisters, of the adult cuticle layers (Brenner 1974).
Characterization of the e937 mutation revealed it to be incompletely penetrant; 85%-90% of e937 homozygotes exhibit the blistered phenotype (Brenner 1974; Peters et al. 1991). Subsequently, 13 additional recessive mutants were isolated, all of which are lethal, arresting development in late embryogenesis (Rose and Baillie 1980; Howell et al. 1987; Peters et al. 1991; C. Thacker, M. Srayko, and A.M. Rose, unpubl.). The associated lethality indicates an essential role for bli-4 in embryogenesis, in addition to its role in the production or maintenance of the adult cuticle. Here we show that the bli-4 gene encodes at least four proprotein convertases that arise by alternative splicing at the 3' end, generating enzymes with different structural properties. The four gene products share the protease domain yet differ at the carboxyl termini. Alternative splicing generates isoforms with structural similarities to each of the mammalian proprotein convertases that participate in both the regulated and constitutive secretory pathways. Molecular lesions responsible for three lethal mutants were identified within coding sequences common to all of the BLI-4 isoforms. Removal of the unique 3' exon of one isoform results in the blistered phenotype. Our results also suggest that incomplete penetrance of blistering exhibited by the e937 mutation may result from functional redundancy among the various bli-4 gene products. This is the first identification of mutants in a kex2/subtilisin-like gene in a multicellular organism, and the first direct evidence that a member of this gene family is essential for development.
Mutants display both a viable cuticular defect and lethality
Previous genetic analysis in our laboratory has demonstrated that bli-4 is a complex locus (Peters et al. 1991). Mutations within the gene have been categorized into three classes depending on mutant phenotype and intracomplementation analysis (Table 1). The first mutant class is represented by the recessive allele e937, which originally defined the locus (Brenner 1974) and exhibits blistering of the adult cuticle but is otherwise completely viable (Fig. 1B). The e937 mutation shows incomplete penetrance, such that ~90% of homozygous animals display the blistered phenotype. Currently, there are 13 known recessive alleles of bli-4 that result in lethality. These alleles exhibit a complex pattern of intragenic complementation and have been grouped into two classes; a representative arrested class II animal is shown in Figure 1D. Arrested animals appear fully differentiated yet are unable to complete elongation (Fig. 1, cf. C and D). All class II mutants examined become vacuolated, particularly in the head region (Fig. 1D). A single recessive lethal allele, s90, complements e937 and represents a third bli-4 mutant class (class III). Normally, one would interpret the complementation of e937 by the class III allele to mean that these mutations are in two separate genes. However, s90 fails to complement any of the class II lethal alleles, suggesting that the mutation does reside in the same gene. Most animals homozygous for the s90 mutation arrest at the same stage of development and are indistinguishable in phenotype from the class II lethal mutants. However, ~30% of the animals hatch and later arrest as early L1 larvae with a distinctive "dumpy" appearance (Fig. 1F). Apart from the obvious morphological defect, the larvae respond to touch and pharyngeal pumping is observed, albeit at a very reduced rate. These dumpy animals persist for several days before dissolution. The mutant phenotypes observed for both the class II and III lethal alleles are similar to those seen for cuticle-defective mutants that elongate normally but subsequently retract in length (Priess and Hirsh 1986). The isolation of lethal mutants in bli-4 implies an essential function for the gene, in contrast to the cuticle defect associated with the original e937 mutation. The interesting genetics associated with the various mutants and the intriguing incomplete penetrance of the blistered mutant prompted us to characterize the gene further.
The bli-4 locus is situated in cosmid K04F10
To clone the bli-4 gene, we aligned the genetic map position with the physical map as described in Materials and methods (Fig. 2). Using the cosmid K04F10, we were able to identify DNA rearrangements associated with two bli-4 mutants, h1010 and e937. h1010 was isolated in a precomplementation screen for bli-4 mutations in a mut-6 background (Peters et al. 1991). mut-6 mobilizes the transposable element Tc1 (Mori et al. 1988); therefore, h1010 was presumed to contain a transposon insertion in bli-4. We identified the Tc1 insertion site in a 1.3-kb EcoRI fragment (Fig. 3A; see below). Analysis of genomic DNA isolated from e937 homozygotes using K04F10 as probe revealed a deletion of 3.5 kb (Fig. 3A). Thus, two DNA rearrangements associated with two independent bli-4 mutations were detected by the cosmid K04F10.
All mutant classes of bli-4 are rescued by K04F10

To confirm that K04F10 contained all the information required for bli-4 expression, we used a transgenic nematode strain that carries this cosmid to test for rescue of the various mutant phenotypes. The transgenic strain (kindly provided by J. McDowall, University of British Columbia, Vancouver) carries K04F10 as an extrachromosomal array along with the dominant selectable marker rol-6, which causes worms to roll along their longitudinal axis (Mello et al. 1991). The K04F10-containing array (hEx4) was essentially used as a free duplication to provide wild-type function in a mutant background. Specifically, hEx4 was transferred by genetic crosses to lethal strains carrying a bli-4 mutation and flanking markers. Successful rescue of the lethal phenotype was indicated by the appearance of viable morphological mutants in the resulting outcrossed progeny; normally the flanking markers would be masked by the lethal phenotype. Using this approach, transfer of the extrachromosomal array to the noncomplementing class II lethal mutant h1010 [balanced by szT1] resulted in the generation of Unc-63 Unc-13 Rol-6 progeny. In addition, transfer of hEx4 to the complementing class III lethal s90 [genotype bli-4(s90) unc-13; sDp2] gave rise to Unc-13 Rol-6 progeny. These results showed that the cosmid is able to rescue the lethal phenotype of both classes of bli-4 mutation. Transfer of hEx4 to blistered e937 animals resulted in the establishment of several roller lines. The progeny from each of these lines consisted of blistered non-Rol-6 and nonblistered Rol-6 animals. Because extrachromosomal arrays segregate in a non-Mendelian fashion (Mello et al. 1991), we would expect to see blistered non-Rol-6 progeny in the roller lines because they are homozygous for the bli-4(e937) mutation. We have maintained these strains for many generations and have never observed blistering of the Rol-6 animals, indicating that the cosmid array also rescued the blistered phenotype of e937. Transfer of an extrachromosomal array containing only the rol-6 plasmid to e937 animals failed to rescue the blistered phenotype. These experiments showed that K04F10 can rescue the mutant phenotype for all classes of bli-4 mutation. Therefore, we concluded that the cosmid contains the bli-4 gene.
Isolation and characterization of bli-4 cDNAs
The 1.3-kb EcoRI genomic DNA fragment from K04F10 that identified the DNA rearrangement in h1010 was used as a probe to screen the cDNA library of Barstead and Waterston (1989), identifying six cDNA clones. Additional cDNAs were identified as part of the ongoing genome sequencing project (Waterston et al. 1992; Y. Kohara, pers. comm.). The clones were placed into four groups according to their structural similarities and have been designated blisterase A, blisterase B, blisterase C, and blisterase D (Figs. 3 and 4). Blisterase A cDNAs (six clones) encode a putative protein of 670 amino acids. A single blisterase B cDNA contains a 2419-bp insert, including a large open reading frame, which begins with the same potential ATG start codon as blisterase A, that can encode a protein of 730 amino acids. A single blisterase C cDNA contains a 2659-bp insert, which contains an open reading frame of 2481 bp that begins with the same initiator methionine as blisterases A and B, predicting a protein of 827 amino acids. Three blisterase D clones were identified, although all represent partial cDNAs that are missing sequences at the 5' end; the longest clone begins at nucleotide 741 of blisterase B (Fig. 4A). Reverse transcriptase-polymerase chain reaction (RT-PCR) analysis indicated that the bli-4 transcripts are trans-spliced to the SL1 splice leader (Krause and Hirsh 1987). This, together with data obtained by Northern blot analysis (see below), suggests that all blisterases share the same 5' sequences.

[Figure 2 legend fragment: the physical map is from Coulson et al. (1986, 1988); the positions of hP5 and the left breakpoint of hDf8 in the physical map are indicated by shaded arrowheads, and all of the cosmids shown were used to probe genomic DNA isolated from several mutants.]

All cDNA clones are complete at their 3' ends, terminating with a poly(A) tract. However, all four groups have different 3' ends; the point of divergence is after nucleotide 2151 of the blisterase B cDNA sequence as shown in Figure 4. We determined the genomic intron/exon organization of the bli-4 gene by partial restriction mapping and sequence analysis and found that the four cDNA groups diverge from each other at an intron/exon boundary (see Fig. 3A). This implies that the different predicted isoforms are produced by alternative splicing. An interesting characteristic common to both the blisterase B unique 3' exon and exon 18, which is specific to blisterase C, is the extremely short 3'-untranslated region (3' UTR). The 3' UTRs for these cDNAs are only 23 and 31 nucleotides long, respectively (Fig. 4), which is unusual, at least when compared with other C. elegans genes.
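One way to keep the alternative-splicing bookkeeping straight is to represent each isoform as its exon list and derive the shared versus isoform-specific exons programmatically. The Python sketch below does this; exon 13 (blisterase A) and exon 18 (blisterase C) follow the text, while the unique exons assigned to blisterases B and D are placeholders, since the text does not number them.

```python
# Sketch: exon bookkeeping for the alternatively spliced bli-4 isoforms.
# Exons 1-12 are common to all products (per the text); the B- and D-specific
# exon numbers below are placeholders, not values from the paper.
COMMON = list(range(1, 13))                 # exons 1-12, shared by all

ISOFORMS = {
    "blisterase A": COMMON + [13],          # exon 13 is deleted in e937
    "blisterase B": COMMON + [14],          # placeholder unique 3' exon
    "blisterase C": COMMON + [17, 18],      # exon 18 is C-specific
    "blisterase D": COMMON + [19, 20],      # placeholder unique 3' exons
}

shared = set.intersection(*(set(v) for v in ISOFORMS.values()))
for name, exons in ISOFORMS.items():
    unique = sorted(set(exons) - shared)
    print(f"{name}: unique 3' exons {unique}")
```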
Rescue of bli-4 mutant phenotypes by a subset of gene products
The functional activity of each BLI-4 isoform was examined by using transgenics carrying various subclones of K04F10. A number of constructs were generated that contain genomic sequences encoding one or more of the bli-4 gene products (see Fig. 3B). Complete rescue of the class II (h199, q508) and class III (s90) lethal phenotypes and the blistered phenotype was obtained using the constructs that contain sequences encoding only the blisterase A isoform (pCeh226, pCeh229, and pCeh230). Complete rescue was also obtained by plasmid pCeh236, which encodes isoforms B, C, and D. In contrast, expression of blisterase B alone (pCeh238) was unable to rescue the lethal phenotypes of both class II and III mutants. However, the same plasmid partially rescued the blistered phenotype, suggesting functional redundancy between isoforms A and B in adult nematodes.
bli-4 encodes kex2/subtilisin-like serine endoproteases
A search of the GenBank, PIR, and SWISS-PROT databases revealed significant amino acid sequence similarity between the predicted BLI-4 sequence and members of the kex2/subtilisin-like family of proprotein-converting enzymes. The sequence similarity is most striking within the protease domain.

[Figure 4 residue: the nucleotide and predicted amino acid sequence of the bli-4 cDNAs; the sequence itself is not recoverable from the extraction.]

Although the sequence similarity is restricted primarily to the protease domain, the blisterase proteins share structural features outside this region with the other kex2/subtilisin-like endoproteases (Fig. 6). These conserved structural similarities include pairs of basic residues on the amino side of the protease domain that are potential sites of activation by autoproteolytic cleavage; this has been demonstrated in the bacterial subtilisins (Power et al. 1986; Ikemura and Inouye 1988) and, recently, for hFURIN (Hosaka et al. 1991; Molloy et al. 1992, 1994). A tripeptide Arg-Gly-Asp (RGD) motif, representing the minimal recognition sequence required for cell attachment, is also conserved in a similar position relative to the active site [with the exception of Kex2p, which has the sequence Arg-Gly-Thr at this position (Fuller et al. 1988)]. Furthermore, certain members of the kex2/subtilisin-like family, including hFURIN (Roebroek et al. 1986), contain a cysteine-rich region.
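As a small illustration of the kind of comparison underlying these similarity statements, the sketch below computes percent identity over a gapped pairwise alignment. The two aligned strings are toy data; a real comparison would use a proper alignment program rather than pre-aligned input.

```python
# Sketch: percent identity over an aligned pair of protein sequences,
# of the sort used to compare the BLI-4 protease domain with other
# convertases. The toy alignment below is invented for illustration.
def percent_identity(a: str, b: str) -> float:
    """Identity over aligned, equal-length strings; '-' marks a gap."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(percent_identity("HGTRCAGEVA-AS", "HGTRCAGEVSNAS"))  # toy alignment
```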
[Figure residue: amino acid alignment of the blisterase protease domain with related convertases; the alignment rows are not recoverable from the extraction.]

Hydrophobicity analysis (Kyte and Doolittle 1982) of the blisterases revealed two major hydrophobic domains. The first is a short hydrophobic domain at the amino terminus with features typical of a signal sequence. A second hydrophobic sequence is located near the carboxyl terminus of blisterase D and is followed by a highly acidic 48-amino-acid domain (Figs. 4 and 6). Similar hydrophobic domains followed by acidic residues are found in Kex2p, hFURIN, Dfurin-1-CRR, and Dfurin-2. Taken together, this information suggests that the hydrophobic domain of blisterase D is likely to be a transmembrane domain. The remaining BLI-4 isoforms all contain several hydrophobic residues located toward the carboxyl termini, although the hallmarks typical of a transmembrane domain are not apparent.
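The Kyte-Doolittle analysis cited here is a simple sliding-window average over per-residue hydropathy values and is easy to reproduce. The sketch below uses the published Kyte-Doolittle scale; the toy sequence, window size, and the threshold mentioned in the comment are illustrative choices, not values from this paper.

```python
# Sketch: Kyte-Doolittle sliding-window hydropathy scan. Windows averaging
# well above zero over ~19 residues are a common rule of thumb for candidate
# transmembrane segments; the toy sequence below is invented.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy(seq: str, window: int = 9) -> list[float]:
    """Mean hydropathy for each window of the given width along seq."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

toy = "MKLLVLLALLAVALADDEEDE"   # signal-peptide-like start, acidic tail
print([round(s, 2) for s in hydropathy(toy)])
```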
The overall sequence similarity of the bli-4 gene products with members of the kex2/subtilisin-like family of proprotein convertases implies a conserved functional activity in the nematode. Alternative splicing of bli-4 transcripts results in the putative production of enzymes with structurally different carboxyl termini, specifically a transmembrane domain-containing gene product and at least three proteins lacking a transmembrane domain.
Expression pattern of a bli-4/lacZ fusion gene
A bli-4/lacZ promoter fusion gene was constructed to analyze the expression pattern of bli-4. An ~5-kb XbaI-ClaI genomic fragment (see Materials and methods) containing sequences at the 5' end of the bli-4 gene was inserted into the lacZ expression vector pPD21.28 (Fire et al. 1990). Transgenic animals carrying the construct were isolated and examined by staining with X-gal. Examination of adult and larval transgenic animals revealed lacZ expression in all hypodermal cells, the vulva, and the ventral nerve cords (Fig. 7A, B). Expression was first observed in embryos at the twofold stage (Fig. 7C), just before the developmental time of arrest seen for the bli-4 lethal mutants. Staining appears to be localized to the nuclei of the hypodermal cells in the embryo.
Deletion of sequences in e937 eliminates blisterase A expression
Restriction fragment length polymorphism (RFLP) analysis revealed a 3.5-kb deletion within the bli-4 gene in animals homozygous for the e937 mutation. To characterize the mutation further, the deletion endpoints were sequenced and demonstrated to remove the last exon encoding the unique 3' end of blisterase A (see Fig. 3A). The deletion was expected to result in the loss of expression of this isoform but not to affect expression of the other bli-4 gene products. We examined expression of the bli-4 gene in e937 homozygotes by Northern blot analysis. Poly(A)+ RNA isolated from both wild-type N2 animals and the mutant e937 strain was probed using a restriction fragment common to all of the blisterase cDNAs. Four RNA species were observed in the wild-type lane (Fig. 8A), corresponding to RNAs of 2.3, 2.4, 2.7, and 3.2 kb. The RNA species were identified tentatively by comparing the lengths of the transcripts with the characterized cDNAs (see Fig. 4). By these criteria we assigned the identity of the 2.3-kb transcript as that for blisterase A, the 2.4-kb transcript as corresponding to blisterase B, the 2.7-kb transcript as blisterase C, and the 3.2-kb message as encoding blisterase D. Hybridization of the "common" probe to e937 poly(A)+ RNA identified only the 2.4-kb and 3.2-kb transcripts (Fig. 8A). This result indicated that the 2.3-kb message, whose length is consistent with that predicted for the blisterase A cDNA, is missing in e937 animals. cDNAs isolated for blisterase C predict a message of 2.7 kb for this isoform. The 2.7-kb message that hybridizes to the common probe is the correct size predicted for blisterase C, yet this transcript is missing in e937 RNA. Because the exons specific to this isoform lie outside of the deletion breakpoints, we would predict that expression of blisterase C would not be affected in e937 animals. Therefore, we suspected that the 2.7-kb message does not correspond to blisterase C. The identity of the 2.7-kb transcript is unclear, although it may represent an alternative blisterase A message that uses a downstream polyadenylation site. However, we cannot rule out the possibility that the transcript encodes a fifth bli-4 gene product.
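The tentative band assignments amount to nearest-size matching between observed bands and predicted transcript lengths, which can be expressed directly. Note the text's own caveat that the 2.7-kb band may not in fact be blisterase C, so a rule like this is only a first pass; the predicted sizes below are from the text, while the matching rule is ours.

```python
# Sketch: assigning Northern blot bands to isoforms by nearest predicted
# transcript length, mirroring the tentative identification in the text.
PREDICTED_KB = {"blisterase A": 2.3, "blisterase B": 2.4,
                "blisterase C": 2.7, "blisterase D": 3.2}

def assign_band(band_kb: float) -> str:
    """Return the isoform whose predicted size is closest to the band."""
    return min(PREDICTED_KB, key=lambda iso: abs(PREDICTED_KB[iso] - band_kb))

for band in (2.3, 2.4, 2.7, 3.2):
    print(f"{band} kb band -> {assign_band(band)}")
```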
The 3.2-kb RNA has the size predicted for blisterase D, assuming that the D-specific message has the same 5' end as blisterase B. This was confirmed by rehybridizing the membrane with a probe containing sequences specific for blisterases C and D. As expected, the 3.2-kb message hybridized to the probe (Fig. 8B), confirming the identity of the transcript as corresponding to blisterase D. A 2.7-kb transcript predicted to encode blisterase C was not observed using the blisterase C/D-specific probe, even after prolonged exposure (data not shown). Unfortunately, the low abundance of the 2.4-kb and 2.7-kb RNAs and the small size of the unique exons for blisterases A, B, and C for use as probes made it difficult to establish unequivocally to which of the messages the various cDNAs correspond, at least by Northern blot analysis.
To establish further the expression patterns of blisterases A, B, and C in e937 animals, we used the technique of RT-PCR. First-strand cDNA was synthesized from total RNA isolated from wild-type N2 and e937 animals. cDNA was then amplified using a sense primer corresponding to nucleotides 2088-2105 (of the blisterase B sequence shown in Fig. 4), which lie within exon 12 (a coding exon incorporated in all of the bli-4 gene products), in combination with an antisense primer specific for each BLI-4 isoform. Primers that amplify a 220-bp product from the C. elegans ubiquitin-like gene (ubl) (Jones and Candido 1993) were also included as a positive control. The results obtained for RT-PCR are shown in Figure 8C. Products of the expected lengths for blisterase A (145 bp, lane b), blisterase B (195 bp, lane d), and blisterase C (514 and 556 bp, lane f) were obtained using wild-type RNA. The 556-bp blisterase C amplification product is the result of incomplete processing of the message and contains intron 17; the blisterase C variant encoded by this transcript can potentially encode another BLI-4 product, as depicted in Figure 4. Both the expected blisterase B (lane e) and blisterase C (lane g) products were amplified using e937 RNA. No blisterase A product was amplified from e937 RNA (lane c), confirming that expression of this isoform is eliminated in e937 animals.
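The expected RT-PCR product sizes follow from simple coordinate arithmetic on the spliced cDNA. In the sketch below, the sense-primer start (nucleotide 2088 of the blisterase B sequence) is from the text, while the antisense-primer endpoint is a hypothetical value chosen to reproduce the reported 195-bp blisterase B product.

```python
# Sketch: expected RT-PCR product length from primer coordinates on a cDNA.
# The sense primer position is from the text; the antisense coordinate is a
# placeholder chosen for illustration.
def amplicon_length(sense_start: int, antisense_end: int) -> int:
    """Product spans both primers inclusively on the spliced cDNA."""
    return antisense_end - sense_start + 1

# An isoform-specific antisense primer ending at a hypothetical position
# 2282 would give the 195-bp product reported for blisterase B.
print(amplicon_length(2088, 2282))   # -> 195
```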
The results from Northern blot analysis and RT-PCR confirmed that blisterase A expression is eliminated in e937 homozygotes because of the deletion of the 3' exon specific to this isoform. Expression of blisterases B, C, and D is not eliminated by the mutation.
Characterization of molecular lesions in three class II lethal mutants
Restriction analysis of h1010 genomic DNA identified a putative Tc1 insertion within the wild-type 1.3-kb EcoRI fragment. We verified this interpretation by determining the exact location of the transposon insertion by PCR amplification using bli-4- and Tc1-specific primers. Sequence generated from the amplified product showed that the Tc1 element had inserted into exon nine, resulting in the disruption of the bli-4 coding region (see Figs. 3A and 4). Exon nine is common to all of the bli-4 gene products (see Fig. 3A). A second class II lethal allele, q508, was isolated after mutagenesis with trimethylpsoralen and UV (kindly sent to us by Lisa Kadyk and Judith Kimble), mutagens known to cause a wide range of deletions in C. elegans (Yandell et al. 1994). We searched q508 genomic DNA for deletions within the first 12 exons common to all of the bli-4 gene products. This was accomplished using a series of primers which, when used for PCR amplification, generate overlapping products that cover the entire common region. Amplification of exons 11 and 12 using q508 template DNA yielded a product smaller than that generated from wild-type template (data not shown). Comparison of the amplified DNA sequences revealed a 366-bp deletion that results in the removal of the splice acceptor and 37 nucleotides from exon 12 (see Figs. 3A and 4). In a third class II mutant, h199, an A-to-T transversion was identified that changes the codon for His127 (CAT) into Leu (CTT). The h199 lesion resides within exon four, which encodes a portion of the putative amino terminus, proximal to the protease domain (see Fig. 4).

All three mutations identified for the class II lethal mutants reside within coding exons common to all of the bli-4 gene products. On the basis of these findings, we propose that these lethal mutants affect the activity or expression of all isoforms.
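The deletion scan described above relies on overlapping amplicons tiled across the common region. A minimal sketch of such a tiling layout follows; the region length, amplicon size, and overlap are placeholders rather than the primer set actually used.

```python
# Sketch: laying out overlapping PCR amplicons to scan a region for
# deletions, as described for the 12-exon common region. All sizes below
# are placeholders, not values from the paper.
def tile_region(region_len: int, amplicon: int = 800, overlap: int = 150):
    """Yield (start, end) windows covering [0, region_len) with overlap."""
    step = amplicon - overlap
    start = 0
    while start < region_len:
        end = min(start + amplicon, region_len)
        yield (start, end)
        if end == region_len:
            break
        start += step

for window in tile_region(4000):
    print(window)
```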
Discussion
We have cloned the bli-4 locus by positioning the gene using the physical map of the C. elegans genome and have shown that it encodes members of the kex2/subtilisin-like family of proprotein cleavage enzymes. Our results demonstrate that bli-4 is contained within the cosmid K04F10, based on the detection of RFLPs associated with two mutations and the ability to rescue fully the lethal phenotype of the class II lethal alleles h1010, h199, and q508, the class III lethal allele s90, and the blistered phenotype of e937. The high degree of conservation within the catalytic domain suggests a structural and functional homology with other members of the proprotein convertase family.
bli-4 encodes structurally diverse endoproteases
bli-4 is an example of a proprotein convertase gene in which two types of enzyme arise from the same gene through alternative splicing. Structurally, the kex2/subtilisin-like convertase family can be divided into two classes based on the presence or absence of a transmembrane domain near the carboxyl terminus. Blisterase D falls into the class that contains a transmembrane domain, along with Kex2p, hFURIN, Dfurin-1 (all four isoforms), and Dfurin-2. Blisterases A and B belong to the class of endoproteases that do not have a transmembrane domain, with an overall structural similarity to PC1/PC3, PC2, and PC4. Similarly, blisterase C most closely resembles PC5/PC6 and PACE4. Although alternative splicing in other proprotein convertase genes has been reported, only Dfur-1 produces gene products that differ markedly in structure (Roebroek et al. 1992, 1994). At least two Dfur-1 isoforms show a nonoverlapping, restricted expression pattern (Roebroek et al. 1994). It is possible that the different proteins encoded by both the bli-4 locus and Dfur-1 perform distinct physiological roles in their respective organisms that, in mammalian cells, are fulfilled by many individual convertase genes.
The e937 mutation eliminates the blisterase A isoform
We have demonstrated that the e937 mutation is a 3.5-kb deletion within the bli-4 gene that removes the 3' exon unique to blisterase A. Northern blot and RT-PCR analysis shows that the message encoding the blisterase A isoform is missing in e937 animals, although a second, less abundant transcript of 2.7 kb is also not observed (Fig. 8). The elimination of expression of the blisterase A isoform is consistent with what would be expected from the deletion breakpoints. Blisterase A appears to be required for the normal production or maintenance of the adult cuticle, as evidenced by the viable blistered phenotype of e937.
Northern blot analysis using the common probe showed that blisterases B and C are normally either expressed at very low levels or in a short developmental window. Whether the low abundance of these messages is a function of the short 3' UTRs found on both isoform cDNAs remains to be determined. Detection of an unprocessed blisterase C transcript by RT-PCR suggests inefficient splicing, although intron 17 has the capacity to encode a variant C isoform. The sequence of the splice acceptor for intron 17 (TCTCCAG) is unusual when compared with most C. elegans genes (TTTTCAG) (Blumenthal and Thomas 1988). This may suggest a potential mechanism for controlling the expression of the blisterase C isoform, further resulting in the low abundance of this transcript. The blistered phenotype is incompletely penetrant. The number of individuals that exhibit blisters in a given population is ~90%, and this fraction remains a heritable feature of e937 homozygotes, regardless of their phenotype. Because C. elegans is hermaphroditic and strains are isogenic, incomplete penetrance would appear to correlate with the nature of the mutation in this complex gene. Although the molecular basis of penetrance is poorly understood, our preliminary analysis of the e937 mutation may provide a first step toward determining the underlying mechanisms responsible for this phenomenon, at least with regard to bli-4. One possible explanation for the incomplete penetrance observed in e937 homozygotes is that deletion of the blisterase A 3' unique exon causes the bli-4 precursor RNA to be spliced instead to the 3' exons of blisterases B, C, and D or combinations thereof. In this case, the alternative isoforms could compensate partially for the loss of blisterase A activity in e937 homozygotes. We hypothesize that there exists a functional redundancy between the individual BLI-4 convertases. An observation consistent with such a hypothesis is that the blistered phenotype is fully penetrant in heteroallelic combinations of e937 with most noncomplementing class II lethal alleles. If expression of the bli-4 gene products is eliminated completely by a class II mutation, there would be less blisterase B, C, and/or D activity to compensate for the loss of blisterase A. The reduction in the compensating isoforms would result in animals that are more severely blistered, consistent with our observations. Rescue of both the lethal and blistered mutant phenotypes by constructs that encode only blisterase A and by plasmid pCeh236 (which can encode blisterases B, C, and D) also suggests functional redundancy between the various BLI-4 isoforms. Rescue of the lethal mutants by constructs that encode only blisterase A is contradictory to what is observed for the e937 mutation: the genetic data indicate that loss of blisterase A expression is not essential for development. One explanation for rescue by these plasmids is that expression from the extrachromosomal array may be aberrant.
The phenotype associated with the e937 mutation suggests that blisterase A convertase activity is required for the production or maintenance of the adult cuticle. It could be that the cuticular collagens are substrates for cleavage by this isoform. Although no known mutations within C. elegans collagen genes result in a blistered phenotype, many of the cuticle collagens (e.g., rol-6 and sqt-1) contain an Arg-X-Lys/Arg-Arg motif within their amino termini (Kramer et al. 1990; Kramer and Johnson 1993). Recently, it has been shown that mutations within these motifs disrupt normal collagen function, implying that a kex2/subtilisin-like convertase activity may be required for processing of these cuticle collagens (Yang and Kramer 1994). Other candidate substrates for processing by the blisterases include the products of five other genes in which mutations result in blistering of the cuticle (Brenner 1974; Park and Horvitz 1986). As yet, the molecular nature of these five blister mutants remains unknown.
Class II lethal mutants contain mutations in the common coding region
Molecular characterization of the lesions responsible for three class II lethal alleles has revealed that the mutations reside within the first 12 exons that are common to all of the BLI-4 isoforms. The h1010::Tc1 insertion site is within exon 9, suggesting that the presence of the transposon disrupts expression of all of the blisterases. The mutation q508 is a deletion that affects exon 12. We would predict that this lesion results in the complete lack of blisterase activity owing to the truncation of the bli-4 gene products. The deletion removes the splice acceptor and coding sequences from exon 12. Splicing of the upstream region to the unique 3' exons, if it occurs, would effectively cause all of the remaining coding sequences to be out of frame. Thus, q508 is a good candidate for a null mutation. The h1010 and q508 mutations both reside within a region of the blisterases that is uniquely conserved among all of the identified members of the kex2/subtilisin-like family and has been designated the "middle" domain (Van de Ven et al. 1990), or the "P-domain" in Kex2p (Fuller et al. 1991). Deletion analysis of Kex2p mutants has shown that the P-domain is essential for processing activity and may be required for either proper folding of the protease domain or recognition of substrates (Gluschankof and Fuller 1994). h199 contains an amino acid substitution that would also affect all bli-4 gene products. At present, we are unclear as to the exact mechanism by which this lesion reduces blisterase activity. A systematic search for mutations within the unique 3' exons of the class III mutant, s90, has failed so far to reveal any lesions. Because the s90 mutant phenotype was rescued by K04F10 and subclones of the cosmid, the responsible mutation may be elsewhere within the bli-4 locus.
Preliminary analysis of animals homozygous for the lethal alleles of bli-4 suggests that development does not arrest until the end of embryogenesis. This could indicate that bli-4 is not required until later stages of embryonic development, which is consistent with the earliest time of expression observed for the bli-4/lacZ fusion gene. It is also possible that a maternal endowment of bli-4 activity provides mutants with sufficient blisterase function to survive to this stage. The similarity of the bli-4 lethal mutant phenotype with that shown by cuticle-defective mutants suggests that blisterase activity is required for elongation of the nematode embryo.
Of the other members of the kex2/subtilisin-like gene family, only KEX2 was identified by genetic analysis; null mutations in KEX2 result in the inability to process the α-mating factor pheromone and the K1 killer toxin (Leibowitz and Wickner 1976; Wickner and Leibowitz 1976). KEX2 is not required for viability in yeast, indicating that Kex2p does not play an essential intracellular role. In contrast, potential null mutations in bli-4 result in developmental arrest near the end of embryogenesis, before hatching. Such evidence demonstrates that bli-4 is essential to development, and implies the existence of blisterase substrates required for embryogenesis. Such substrates could include both structural molecules and peptide signals. Although an essential function involving a secreted protein derived from a proprotein has not yet been described in C. elegans, the disruption of development by bli-4 mutants implies the existence of such interactions.
Aligning the physical and genetic maps for the bli-4 region

bli-4 was mapped with respect to dpy-5 and unc-13 by RFLP analysis as described by Baillie et al. (1985). bli-4 was placed ~0.3 map units to the right of the polymorphism hP5. The cosmid corresponding to the hP5 site is T09H3. The DNA density in this region was ~500 kb per map unit (Starr et al. 1989), predicting that bli-4 would be located ~150 kb to the right of hP5 (see Fig. 2). The deficiency hDf8 maps to the right of bli-4. The left breakpoint of hDf8 was mapped to the cosmid C44D11 by restriction analysis. Together, these results positioned the bli-4 coding region to cosmids located to the right of T09H3 and to the left of C44D11 on the physical map (Fig. 2). To determine the exact position of bli-4 sequences, cosmids that together span this region were used to probe for RFLPs in genomic DNA prepared from various bli-4 mutants.
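As a quick check of the arithmetic in the preceding paragraph (a sketch added here, not part of the original paper), converting the genetic distance into a predicted physical distance is a single multiplication by the local DNA density:

```python
# Sketch: convert the genetic distance quoted above into the predicted
# physical offset of bli-4 from hP5 (values taken from the text).
map_units = 0.3          # genetic distance from hP5 to bli-4 (map units)
kb_per_map_unit = 500.0  # estimated DNA density in this region (kb/map unit)

predicted_kb = map_units * kb_per_map_unit
print(f"Predicted offset of bli-4 from hP5: ~{predicted_kb:.0f} kb")  # ~150 kb
```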
Rescue of the blistered and lethal phenotypes
We used a transgenic nematode strain, KR2276 (kindly provided by J. McDowall), that carries an extrachromosomal array designated hEx4, composed of two overlapping cosmids, K04F10 and C29F10, and the plasmid pRF4. The cosmid C29F10 overlaps K04F10 by 95% (data not shown) and was included to facilitate homologous recombination between the various clones to generate the array. The plasmid pRF4 contains the rol-6 gene bearing the semidominant mutation su1006 that confers a right-handed Roller phenotype to animals that carry the array (Mello et al. 1991). As a control, all the genetic crosses outlined below were repeated using a transgenic strain (KR2872) that carries only the plasmid pRF4 as an extrachromosomal array (hEx6).
Rescue of e937

Homozygous e937 males were crossed with KR2276 Rol-6 hermaphrodites. All Rol-6 hermaphrodites obtained in the F1 were isolated and allowed to self-fertilize. Non-Unc-11 Rol-6 hermaphrodites were isolated until no Unc-11 animals were recovered, indicating homozygosity for e937. Several Rol-6 lines were established that segregated Rol-6 (wild-type) and Bli-4 progeny. No Bli-4 Rol-6 animals have ever been observed over many generations. However, Bli-4 Rol-6 animals were obtained using the control transgenic strain in parallel experiments.
Rescue of h1010

Spontaneous Lon-2 males of the genotype szT1(I;X) [lon-2] unc-63 bli-4(h1010) unc-13; I were crossed to KR2276 Rol-6 hermaphrodites. Successful rescue was determined by the presence of Unc-13 hermaphrodites in the F2 generation; unc-13 is epistatic to unc-63 and rol-6. To verify the presence of the rol-6 marker, rescued animals were mated with wild-type males. Rol-6 animals were obtained in the F1 progeny. No Unc-13 progeny were obtained using the control Rol-6 array. The presence of the h1010::Tc1 within the Unc-13 animals was verified by PCR using primers KRp36 and p618 (see below).
Rescue of q508 and s90

Heterozygous males of the genotype dpy-5 bli-4(q508)/+ +, or bli-4(s90) unc-13/+ +, were crossed with KR2276 Rol-6 hermaphrodites. All Rol-6 hermaphrodites in the F1 generation were isolated and allowed to self-fertilize. Successful rescue was determined by the presence of Dpy-5 progeny in the case of q508, or the presence of Unc-13 progeny in the F2 generation for s90. No such progeny were obtained using the control transgenic strain. The presence of the q508 deletion within the rescued Dpy-5 animals was verified by PCR using the primers KRp46 and KRp47 (see below). A similar strategy was used for rescue experiments using the K04F10 subclones shown in Figure 3B. Heterozygous males of the genotype dpy-5 bli-4(q508)/+ +, dpy-5 bli-4(s90) unc-13/+ + +, or dpy-5 bli-4(h199) unc-13/+ + + were crossed to Rol-6 hermaphrodites bearing pRF4 and the plasmid constructs. Successful rescue was determined by the presence of Dpy-5 progeny in the case of q508, or the presence of Dpy-5 Unc-13 progeny in the F2 generation for s90 and h199. Individual lines for each rescue were established and observed to give a fraction of dead progeny, resembling the bli-4 lethal phenotypes. Outcrossing of the putative rescued lines produced Rol progeny. Partial rescue of blistering by pCeh238 was evidenced from four independent stable transgenic lines of genotype bli-4(e937); hEx46 to 49 (pCeh238 + pRF4). The percentage of Rol-6 animals that blister in each of these lines is 43% (79/185), 8% (41/523), 17% (17/101), and 3% (7/206), respectively, indicating incomplete rescue of the blistered phenotype.
Isolation and characterization of bli-4 cDNA clones

cDNA clones were isolated from a λZAP library (Barstead and Waterston 1989) or provided by Yuji Kohara (National Institute of Genetics, Mishima, Japan; cDNA clone groups YK849 and YK949). The cDNA sequence was obtained from nested exonuclease III deletions as described by Henikoff (1984), with various fragments subcloned into pBluescript (Stratagene). Sequencing was performed using either Sequenase (U.S. Biochemical) or the AmpliTaq DNA polymerase sequencing kit (ABI) and a Perkin-Elmer Cetus thermal cycler. Some sequence was obtained using an ABI model 373A automated sequencing machine. DNA and protein data base searches were performed using the FASTA program (Pearson and Lipman 1988). The PCgene (IntelliGenetics) package of programs was used to identify the secretion signal peptide, hydrophobic domain, and potential glycosylation and cell attachment sites.
K04F10 subclones
Subclones used for transgenic rescue of bli-4 phenotypes presented in Figure 3B are as follows. pCeh226 contains a 13.5-kb XbaI-XhoI insert containing exons 1-13. pCeh226 was digested with SalI and recircularized to obtain pCeh229. pCeh230 contains the same sequences as pCeh229 except for the deletion of a 1.8-kb PstI-EcoRI fragment within intron 12. Subclones pCeh226, pCeh229, and pCeh230 encode blisterase A. pCeh238 contains a 10.3-kb XbaI-EcoRI fragment (including exons 1-12) fused to a 0.7-kb BamHI-KpnI fragment that contains exons 14 and 15. This construct encodes blisterase B. pCeh236 contains exons 1-12 and 14-21 and can encode blisterases B, C, and D. The construction of pCeh239 is discussed below. These constructs were coinjected with pRF4 (1:1 ratio) into the germ line of e937 homozygotes (or N2 animals for pCeh239). Stable transgenic lines were recovered and used for subsequent crosses to lethal-bearing strains discussed above. Rol-6 hermaphrodites containing pCeh239 (see below) were crossed to e937 homozygous males. Bli-4 Rol-6 animals exhibiting approximately the same penetrance as e937 homozygotes in the F2 generation indicated failure of pCeh239 to rescue blistering.
Construction and expression of the bli-4/lacZ fusion
A 5-kb XbaI-ClaI fragment containing the putative promoter region of bli-4 was cloned into the XbaI-BamHI sites of the lacZ expression vector pPD21.28 (Fire et al. 1990). The ClaI site in the bli-4 gene occurs 10 nucleotides after the putative initiator methionine codon in exon 2. This plasmid was designated pCeh239 and used to transform N2 hermaphrodites as described above. Transgenic Rol-6 lines were fixed and stained with X-gal according to the procedure described by Fire et al. (1990).
PCR amplification of the h1010::Tc1 and q508 mutations
Template DNA from individual arrested embryos homozygous for each of the mutant bli-4 alleles was extracted as described (Barstead et al. 1991). The PCR conditions used to amplify the Tc1 insertion in h1010 were as follows: 94°C, 45 sec; 50°C, 30 sec; 72°C, 1 min; for 30 cycles. The sequences of the primers used for the amplification were KRp36 (5'-CTATAAACCCTTAATTTGTC-3') (bli-4 specific: anneals to a sequence within intron 7) and p618 (5'-GAACACTGTGGTGAAGTTTC-3') (Tc1 specific). p618 was a gift from B. Williams (Washington University School of Medicine, St. Louis, MO). The PCR conditions used to amplify the deletion from DNA extracted from homozygous q508-arrested embryos were the same as above, except that the annealing temperature was 56°C. The primers used to amplify the deletion were KRp46 (5'-ATGCTACTGGTCAGTTTTCA-3') (anneals to a sequence within intron 10) and KRp47 (5'-ATTCTATCCGAATCCTCCGA-3') (anneals to a sequence within intron 12). The PCR products obtained using these conditions were cloned into the pBluescript vector and sequenced using Sequenase version 2.0.
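The annealing temperatures above are consistent with a rough melting-temperature estimate for these short primers. The following is a minimal sketch (not from the original paper) using the Wallace rule, Tm = 2(A+T) + 4(G+C), a standard rule of thumb for oligos of about 20 nt; the primer sequences are those listed in the text:

```python
# Sketch: Wallace-rule melting-temperature estimates for the primers above.
# The Wallace rule is only a rough guide and is our addition, not the paper's.
def wallace_tm(seq: str) -> int:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primers = {
    "KRp36": "CTATAAACCCTTAATTTGTC",  # bli-4 specific (intron 7)
    "p618":  "GAACACTGTGGTGAAGTTTC",  # Tc1 specific
    "KRp46": "ATGCTACTGGTCAGTTTTCA",  # intron 10
    "KRp47": "ATTCTATCCGAATCCTCCGA",  # intron 12
}
for name, seq in primers.items():
    print(f"{name}: Tm ~ {wallace_tm(seq)} C")
```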
Detection of the h199 mutation

h199 was detected by a systematic search through exons 1-10 by heteroduplex mismatch detection using the mutation-detection enhancement (MDE) system (AT Biochem). Individual blistered h199/e937 animals were used as template for independent PCR reactions to amplify overlapping sections of genomic DNA from the region encoding exons 1-10. After running denatured/reannealed PCR products in the nondenaturing gel matrix (40 cm x 1 mm) for 21 hr at 20 V/cm, a polymorphism was detected in a 433-bp fragment amplified from three independent PCR reactions by KRp44 (5'-TGTTGAACGATTGGATTCAC-3') (sense primer corresponding to nucleotides 456-474) and KRp45 (5'-ATACATATCCACCAACTGCT-3') (antisense primer corresponding to nucleotides 702-721). Homozygous-arrested embryos from a heterozygous h199/e937 parent were used as template for PCR using KRp44 and KRp45. Three independent reaction products were cloned and sequenced to reveal an A→T transversion (nucleotide position 560 in Fig. 4) when compared to sequence obtained from parallel experiments with N2 and e937 worms as template for PCR.
Northern blot analysis
Mixed stage total N2 C. elegans RNA and mixed stage total bli-4(e937) RNA was isolated by centrifugation of nematode lysates prepared in guanidinium thiocyanate through CsCl gradients (Chirgwin et al. 1979; Kramer et al. 1985). Poly(A)+ RNA was purified using the PolyATtract mRNA isolation system (Promega), according to the manufacturer's conditions. Five micrograms of poly(A)+ RNA was denatured in 2.2 M formaldehyde, 50% deionized formamide, and 1x MOPS for 15 min at 60°C. The RNA was fractionated by electrophoresis in a 0.8% agarose gel containing 1x MOPS and 2.2 M formaldehyde (pH 7.0). RNA was transferred to a Zeta-Probe membrane (Bio-Rad) and hybridized with random-primed (Feinberg and Vogelstein 1983) 32P-labeled probes in 0.25 M Na2HPO4 (pH 7.2), 7% SDS at 65°C. The membrane was washed in 0.3x SSC, 0.1% SDS, at 65°C. The common probe used was a ClaI-AccI 480-bp fragment corresponding to nucleotides 1609-2089 of the blisterase B cDNA (Fig. 4), which lies just 3' of the protease domain and is common to all of the blisterases. The blisterase C/D-specific probe was a SacI-BamHI 518-bp fragment corresponding to nucleotides 2239-2757 within the blisterase D sequence shown in Figure 4. The first 275 bp of this fragment are shared by blisterase C; hence, the probe should hybridize to both the C and D transcripts. To standardize RNA levels, the membrane was stripped and rehybridized with a 1.7-kb probe containing genomic sequences encoding the C. elegans S-adenosyl homocysteine hydrolase (AHH) gene.
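The quoted probe lengths follow directly from the stated cDNA coordinates. A one-line check (our sketch, using end minus start as the paper's coordinate convention implies):

```python
# Sketch: verify the probe lengths quoted above from their cDNA coordinates.
probes = {
    "common (ClaI-AccI)":        (1609, 2089),  # blisterase B cDNA
    "C/D-specific (SacI-BamHI)": (2239, 2757),  # blisterase D cDNA
}
for name, (start, end) in probes.items():
    print(f"{name}: {end - start} bp")  # 480 bp and 518 bp, as stated
```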
RT-PCR
First-strand cDNA was synthesized from 5 µg of total RNA using a dT12-18 primer and the Superscript Preamplification system (GIBCO-BRL). The conditions used were as described by the manufacturer. The primers used for amplification of first-strand cDNA were as follows (see also Fig. 4): KRp6 (5'-CTACTCGGCTACTCCTGC-3') (nucleotides 2088-2105 in exon 12, common to all of the bli-4 gene products); KRp49 (5'-TCTATGAAGCTGTGAGCAGT-3') (antisense primer corresponding to nucleotides 2209-2228 of the blisterase A-specific 3' exon); KRp50 (5'-ATGGGTACGAGAAGAAGAGT-3') (antisense primer corresponding to nucleotides 2261-2280 of the blisterase B-specific 3' exon); and KRp39 (5'-TAAAGGCGCTCGGCTTTTTG-3') (antisense primer corresponding to nucleotides 2583-2602 of the blisterase C-specific exon 18). Primers OPC18.r and OPC19.c, which are specific for the ubl gene (Jones and Candido 1993), were used as an internal positive control. Amplification reactions were performed in which the primer KRp6 was used in combination with one of the antisense primers specific for each BLI-4 isoform, along with the control ubl gene primers. Reaction mixtures for PCR contained a 1-µl aliquot of the cDNA template, 50 pmoles of each primer, and 2.5 units of Taq DNA polymerase (Promega) in 50 mM KCl, 10 mM Tris-HCl (pH 8.3), 0.1% Triton X-100, 1.5 mM MgCl2, and 200 µM dNTPs in a final volume of 25 µl. Reactions were carried out in a Perkin-Elmer Cetus thermal cycler for 32 cycles of denaturation (94°C, 45 sec), annealing (59°C, 30 sec), and extension (72°C, 1 min), followed by extension at 72°C for 7 min. Determination of the 5' end of bli-4 was as follows. RT-PCR was performed on poly(A)+ RNA from N2 worms. First-strand cDNA was synthesized as above. The resultant cDNA was amplified with KRp10 (5'-ACTCTCTTCTTCGGTCGC-3') (antisense primer corresponding to nucleotides 504-521 of exon 3) and a sense primer derived from SL1 (including a NotI adapter) (5'-ATAAGAATGCGGCCGCGGTTTAATTACCCAGTTG-3'). A product of the expected size for KRp10 with SL1 (~500 bp) was reamplified using SL1 and the nested primer KRp11 (5'-GTGTCCTTGTTGTTTCCG-3') (antisense primer corresponding to nucleotides 418-436 of exon 3). The resulting product was cloned and sequenced to confirm that bli-4 is trans-spliced to SL1. A parallel experiment performed with an SL2 trans-splice leader primer failed to amplify bli-4 sequences. Reaction conditions for PCR were as described above, except the annealing temperature for initial amplification on first-strand cDNA was 56°C and reamplification was at 58°C.
Note
The accession numbers for the cDNA sequences reported in this paper are L29438, L29439, and L29440. | 2018-04-03T01:12:31.847Z | 1995-04-15T00:00:00.000 | {
"year": 1995,
"sha1": "8993f3c4c8269d52c92379c653536a72335e05f7",
"oa_license": null,
"oa_url": "http://genesdev.cshlp.org/content/9/8/956.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d0837a96bd121eab22483a7952b7204ed0e7abd7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15010499 | pes2o/s2orc | v3-fos-license | Amarogentin Displays Immunomodulatory Effects in Human Mast Cells and Keratinocytes
Keratinocytes express the bitter taste receptors TAS2R1 and TAS2R38. Amarogentin as an agonist for TAS2R1 and other TAS2Rs promotes keratinocyte differentiation. Similarly, mast cells are known to express bitter taste receptors. The aim of this study was to assess whether bitter compounds display immunomodulatory effects on these immunocompetent cells in the skin, so that they might be a target in chronic inflammatory diseases such as atopic dermatitis and psoriasis. Here, we investigated the impact of amarogentin on substance P-induced release of histamine and TNF-α from the human mast cell line LAD-2. Furthermore, the effect of amarogentin on HaCaT keratinocytes costimulated with TNF-α and histamine was investigated. Amarogentin inhibited in LAD-2 cells substance P-induced production of newly synthesized TNF-α, but the degranulation and release of stored histamine were not affected. In HaCaT keratinocytes histamine and TNF-α induced IL-8 and MMP-1 expression was reduced by amarogentin to a similar extent as with azelastine. In conclusion amarogentin displays immunomodulatory effects in the skin by interacting with mast cells and keratinocytes.
Introduction
Mast cells are strategically located in the upper dermis of normal skin, where host tissue is exposed to external antigens and bacteria. After activation by a range of stimuli (e.g., crosslinking of the IgE receptors and binding of the neuropeptide substance P released from sensory nerve fibers in the skin during inflammation), mast cells release mediators such as histamine [1]. Histamine is also stored in large amounts in secretory glands. It is involved in the elicitation of immediate-type allergic reactions as well as in tissue remodeling and chronic inflammation [2,3] by binding to one of the four known G-protein-coupled transmembrane receptors H1-H4. These receptors are expressed on various cell types including monocytes, lymphocytes, dendritic cells, and keratinocytes [4]. Furthermore, mast cells can synthesize de novo a range of cytokines such as TNF-α, growth factors, and membrane molecules involved in inflammatory reactions [2]. In healthy skin only a few mast cells are present in the upper dermis. However, the number of mast cells as well as the histamine level increases in chronic skin inflammation such as psoriasis, chronic leg ulcers, and epithelial skin cancer [2]. Besides the involvement in immediate-type allergy and maturation of dendritic cells, histamine also stimulates human keratinocytes through the H1 receptor to increase the expression of the proinflammatory cytokine IL-6, the chemokine IL-8 [5], the nerve growth factor (NGF) [6], and matrix metalloproteinases (MMPs), especially MMP-1 and MMP-9 [7,8]. MMP-1, the so-called interstitial collagenase, specifically cleaves type 1 collagen, a major constituent of the dermis. MMP-1 also activates other MMPs such as MMP-9, which has the highest substrate specificity for dermal elastin and fibrillin [9]. The cleavage of the components in the basement membrane allows T cells to cross the basement membrane and enter the epidermal compartment during skin inflammation. Mast cells, T cells, and keratinocytes interact with one another in the uppermost dermis of inflamed skin. In the epidermis of lesional skin from patients with atopic dermatitis, cutaneous nerve fibres are present at high densities. Their length is increased and they reach the surface of the skin [10]. This abnormal innervation is thought to be one cause of the intense itching of atopic skin. One of the mediators involved in nerve fibre expansion is NGF, which is released from epidermal keratinocytes and accumulates in the epidermis.
Strong expression of NGF and its receptors is also observed throughout the entire epidermis on a psoriatic lesion. Under physiological conditions keratinocyte-derived NGF plays an important role in the maintenance and regeneration of cutaneous nerves in normal skin, and in the normal proliferation and differentiation of cutaneous nerves, keratinocytes, and melanocytes [11]. However, NGF production in keratinocytes can be increased by histamine and TNF-α released from mast cells [6,11]. Kumar and colleagues reported that amarogentin, a secoiridoid glycoside that is present in the Indian plant Swertia chirayita, modulates in arthritic mice the secretion of proinflammatory cytokines including TNF-α [12]. However, it is unclear whether amarogentin, the bitterest substance in nature, which is also present in high amounts in the alpine flora Gentiana lutea, can also modulate immune reactions in inflamed skin. As amarogentin is an agonist for several bitter taste receptors (TAS2R1, TAS2R4, TAS2R39, TAS2R43, TAS2R46, TAS2R47, and TAS2R50) [13], and the expression of at least the bitter taste receptors TAS2R1 and TAS2R38 can be found on keratinocytes, amarogentin could influence cutaneous inflammation. Recently we showed already that amarogentin enhances keratinocyte differentiation [14]. In this study we analyzed if amarogentin might also influence the release of histamine and TNF-α by mast cells and/or the interaction of these proinflammatory stimuli with keratinocytes. In this way amarogentin could be a target in chronic inflammatory diseases such as atopic dermatitis and psoriasis.
Cytotoxicity Test.
Cytotoxicity of amarogentin and azelastine in LAD-2 cells was assessed with the ViaLight Plus ATP assay (Cambrex, Verviers, Belgium) according to the manufacturer's instructions. The method is based on the bioluminescence measurement of ATP that is present in metabolically active cells. Luciferase catalyzes the formation of light from ATP and luciferin. The emitted light intensity is directly proportional to the ATP concentration and is measured using a luminometer (Sirius HT; MWG).
Cell Culture.

LAD-2 human mast cells (kindly provided by Dr. A. Kirshenbaum, National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA) were cultured in serum-free media (StemPro-34, Thermo Fisher Scientific, Darmstadt, Germany) supplemented with 2 mM L-glutamine and 100 ng/mL rhSCF (recombinant human stem cell factor; Cell Signaling Technologies). The human keratinocyte cell line HaCaT was cultured in Dulbecco's modified essential medium (DMEM; Invitrogen GmbH, Karlsruhe, Germany) containing 10% fetal calf serum (FCS; PAA, Pasching, Austria). All cells were cultivated at 37°C in a humidified atmosphere with 5% CO2. A part of the amarogentin-stimulated group was preincubated for 30 min with the PLC inhibitor U73122 (10 µM, Sigma-Aldrich). The supernatants were collected for further assays.
Degranulation Assays.
Mast cell degranulation was assessed by measuring histamine in the supernatant fluid 1 h after cell stimulation with SP (2 µM). Histamine levels were assayed 20 min after SP stimulation using a histamine ELISA from IBL International.
Cytokine and MMP-1 Release Assays.

TNF-α, IL-6, IL-8, and MMP-1 release into the supernatant fluid 24 h after cell stimulation (either LAD-2 or HaCaT cells) was measured by Enzyme-Linked Immunosorbent Assay (ELISA) using a commercial kit from R&D Systems according to the manufacturer's instructions.
RNA Extraction and PCR.
HaCaT cells were preincubated with amarogentin (100 µM) or azelastine (24 µM) for 2 h at 37°C. Then the cells were stimulated with histamine (10 µM) and TNF-α (25 ng/mL). After 3 h at 37°C, total RNA was extracted with the RNeasy Mini kit (Qiagen, Hilden, Germany). First-strand cDNA was synthesized from 2 µg total RNA in a 20 µL final volume using the Omniscript kit (Qiagen, Hilden, Germany) with random hexamer primers (Invitrogen). 2 µL aliquots of the reverse transcription solution were used as a template for specific PCR reactions with an annealing temperature of 58°C; the PCR product was analyzed by gel electrophoresis, and the bands were quantified with ImageJ. The PCR primers (20 pmol each) used to amplify NGF and the housekeeping gene β-actin were NGF forward primer: 5'-aagcggcgactccgttcacc-3', reverse primer: 5'-ggagcgtgtcggcaggtcag-3'; β-actin forward primer: 5'-cgagcacagagcgtcgccttt-3', reverse primer: 5'-gaccccgtcaccggagtcca-3'. Incubation with the primary antibody (4°C, overnight) was followed by incubation with biotinylated swine anti-goat, anti-mouse, and anti-rabbit antibody immunoglobulins (1 h, RT), streptavidin conjugated to horseradish peroxidase (20 min, RT), AEC solution as chromogen, and hematoxylin counterstaining. Staining with the rabbit immunoglobulin fraction served as isotype control. Images were taken with a microscope (Carl Zeiss AG, Oberkochen, Germany) equipped with Axiovision software.
Statistical Analysis.
The data from all the procedures were expressed as the average ± standard error using Excel. Two-group comparisons were evaluated using an unpaired t-test. In all analyses, p ≤ 0.05 was considered statistically significant (*); 0.05 < p < 0.06 was considered borderline significant (bs); and p > 0.06 was considered not statistically significant (ns).
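For readers who wish to reproduce this kind of comparison, the following is a minimal sketch of the unpaired t-test described above. It assumes SciPy is available; the numbers are hypothetical readings, not data from this study:

```python
# Sketch: unpaired (independent two-sample) t-test with the significance
# labels used in this paper. Data values below are made up for illustration.
from scipy import stats

control   = [1.00, 0.95, 1.08, 1.02]   # hypothetical control ELISA readings
treatment = [0.60, 0.72, 0.65, 0.70]   # hypothetical readings after treatment

t_stat, p_value = stats.ttest_ind(control, treatment)
if p_value <= 0.05:
    label = "*"    # statistically significant
elif p_value < 0.06:
    label = "bs"   # borderline significant
else:
    label = "ns"   # not significant
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({label})")
```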
Effect of Amarogentin on Histamine and Cytokine Release from Mast Cells.

Substance P (SP) augments the release of histamine from the human leukemic LAD-2 mast cell line 1 h after stimulation in a dose-dependent manner. Furthermore, SP induced the production of TNF-α after 24 h (Figures 1(a) and 1(b), first graph). 2 µM SP was the best concentration to induce histamine and TNF-α release and was used for all further experiments.
To test if the bitter compound amarogentin can inhibit histamine or TNF-α release, LAD-2 cells were incubated with 100 µM amarogentin according to former experiments and the literature [13,14]. As positive control, we applied the histamine-1 receptor antagonist azelastine at a concentration of 24 µM, referring to data from the literature [15]. Both concentrations of amarogentin and azelastine were not cytotoxic for LAD-2 cells. In contrast to azelastine (24 µM, 30 min), preincubation with amarogentin (100 µM, 30 min) did not inhibit histamine release (Figure 1(a)). However, preincubation with both amarogentin (100 µM, 30 min) and azelastine (24 µM, 30 min) blocked in LAD-2 mast cells the secretion of newly synthesized TNF-α 24 h after SP (2 µM) stimulation (Figure 1(b)). This demonstrates that amarogentin does not inhibit the degranulation of mast cell-stored mediators, but it inhibits the new synthesis of TNF-α. To test if this inhibitory effect of amarogentin is mediated via bitter taste receptor signaling, we used U73122, a phospholipase C inhibitor that was already described as an inhibitor of bitter taste receptor signalling in rat neuronal PC12 cells [16]. According to the literature, the concentration of U73122 used was 10 µM. It could be shown that U73122 reversed the inhibitory effect of amarogentin on the new synthesis of TNF-α.
Effect of Amarogentin on Histamine-Induced IL-6 Production in Human Keratinocytes.

The human keratinocyte cell line HaCaT produces IL-6 after costimulation with TNF-α and histamine [5]. To test the effect of amarogentin in this setting, HaCaT cells were preincubated with 100 µM amarogentin or 24 µM azelastine for 30 min, before stimulation with 10^-5 M histamine and/or 25 ng/mL TNF-α for 24 hours. The used concentrations of amarogentin and azelastine were not cytotoxic for HaCaT cells. In contrast to amarogentin, azelastine inhibited the IL-6 release from HaCaT cells (Figure 2).
Effect of Amarogentin on Histamine- and TNF-α-Induced IL-8 and MMP-1 Production in Human Keratinocytes. IL-8 is a chemoattractant for neutrophils and T cells and may initiate cutaneous inflammation.
To assess if amarogentin inhibits IL-8 cytokine production, amarogentin-treated HaCaT cells were stimulated with 10 µM histamine. However, histamine stimulation alone did not induce IL-8 expression in HaCaT cells (Figure 3(a)); therefore, 25 ng/mL TNF-α was additionally applied to synergistically increase this secretion, because histamine and TNF-α are secreted together from mast cells. This effect could be inhibited by amarogentin as well as azelastine. To enable the transmigration of neutrophils and T cells through the basement membrane, the expression of the matrix metalloproteinase MMP-1 is required. Again, histamine alone only marginally enhanced the production of MMP-1 (Figure 3(b)), whereas TNF-α synergistically increased this expression. Amarogentin as well as azelastine could inhibit the release of MMP-1 in histamine- and TNF-α-costimulated HaCaT cells. U73122, a PLC inhibitor, which can inhibit the bitter taste receptor signaling pathway, reversed the inhibitory effect of amarogentin on MMP-1 and IL-8 secretion in stimulated HaCaT cells. This effect was pronounced for the IL-8 secretion. The MMP-1 release could only partly be restored.
Similarly, amarogentin as well as azelastine could inhibit the release of MMP-1 in HaCaT cells costimulated with histamine and IFN-γ (a T cell-derived cytokine) (supplementary data 2). HaCaT keratinocytes constitutively secrete low amounts of NGF; this secretion can be increased by histamine (10^-5 M) and TNF-α (25 ng/mL) costimulation. Histamine alone only has a small effect [5]. As NGF expression is linked with inflammation and pruritus, we analyzed if amarogentin could block the release of NGF. However, preincubation with neither amarogentin (100 µM) nor azelastine (24 µM) for 1 h could inhibit histamine- and TNF-α-induced NGF mRNA expression after 4 h of incubation (Figure 4).
Discussion
Keratinocytes are actively engaged in skin inflammatory responses by releasing proinflammatory cytokines (e.g., IL-6), chemokines (e.g., IL-8), or matrix metalloproteinases (e.g., MMP-9) [4,5]. The migration of immune cells across the basal lamina is an important process during inflammation that is controlled by the composition of the basement membrane and the balance of proteases, cytokines, and chemokines. MMPs influence all these parameters and are highly expressed in inflammation of, for example, lesional acute eczema [17]. In human skin high amounts of histamine are secreted by activated mast cells or basophils. These histamine-induced changes in keratinocytes lead to increased transmigration of T cells through an artificial basement membrane and indicate that the increased MMP expression is functional [4]. In our study we could block histamine- and TNF-α-induced MMP-1 release by amarogentin to a similar extent as with the histamine antagonist azelastine. It is suggested that keratinocytes enhance the effect of histamine due to elevated H1-receptor expression induced by TNF-α, and amarogentin might influence this effect. Furthermore, TAS2Rs are expressed and functional in keratinocytes [14], and activation of TAS2Rs might also occur in HaCaT cells and explain the effectiveness of the bitter compound amarogentin. Activated bitter taste receptors lead via the G-protein α-gustducin to the activation of phospholipase C-β2 (PLCβ2) and the formation of inositol trisphosphate as well as diacylglycerol, and eventually to the opening of the transient receptor potential cation channel M5 (TRPM5) [18]. The effect of amarogentin in the HaCaT cells could be reversed by U73122, a PLC inhibitor targeting a key enzyme of bitter taste receptor signaling. These data suggest that the inhibitory effect of amarogentin on SP-induced TNF-α release is mediated via bitter taste receptor signaling. In addition, TAS2Rs were upregulated in human leukocytes of patients suffering from severe inflammatory, therapy-resistant asthma compared to healthy controls. Orsmark-Pietras and colleagues demonstrated that the TAS2R agonists chloroquine and denatonium inhibit the release of several proinflammatory cytokines, including TNF-α, IL-1β, and IFN-γ, from blood leucocytes after LPS stimulation [19]. The inhibitory effect of chloroquine on the release of histamine from rat mast cells after stimulation with compound 48/80 or the calcium ionophore A23187 was already described 25 years ago [20,21] and can now also be related to bitter taste receptor expression. Very recently Ekoff and colleagues analyzed the expression of 9 TAS2Rs on human mast cells [22]. Furthermore, they studied the effect of 4 TAS2R agonists, including chloroquine and denatonium, on human mast cell-mediated release of histamine and prostaglandin D2. After activation via IgE-receptor cross-linkage, both cord blood-derived mast cells and the mast cell line HMC1.2 expressed all tested TAS2Rs (TAS2R3, TAS2R4, TAS2R5, TAS2R10, TAS2R13, TAS2R14, TAS2R19, TAS2R20, and TAS2R46). Moreover, agonists known to bind to these particular TAS2Rs significantly inhibited the release of histamine from IgE-stimulated mast cells. This suggests that TAS2R agonists may have an anti-inflammatory action. However, the 4 different TAS2R agonists used in the study of Ekoff and colleagues displayed varying capacities of inhibition, and the mechanism behind this pattern is still unknown.
As amarogentin can activate TAS2R1, TAS2R4, TAS2R39, TAS2R43, TAS2R46, TAS2R47, and TAS2R50, at least TAS2R4 and TAS2R46 overlap with the study from Ekoff and colleagues and could be activated by amarogentin. In addition, we could demonstrate that the mast cell line LAD-2 also expresses TAS2R1, which can be activated by amarogentin (Figure 5). Furthermore, only TNF-α, but not IL-6 or IFN-γ, enhances the production of NGF in human keratinocytes via the Raf-1/MEK/ERK pathway, and ERK inhibitors can inhibit this signaling pathway [11]. Such positive feedback loops of TNF-α/NGF may amplify skin inflammation in vivo, as NGF induces neurons to synthesize substance P [23]. Although amarogentin does not inhibit the degranulation of LAD-2 cells and does not act as a human mast cell stabilizer, it can inhibit the new synthesis of TNF-α. This effect could be reversed by the PLC inhibitor U73122, which was already described as a substance that inhibits the bitter taste receptor pathway in rat neuronal cells [16]. The data suggest that the inhibitory effect of amarogentin on IL-8 and MMP-1 release is at least partly mediated via bitter taste receptor signaling.
These results supplement our previous observations that the treatment of HaCaT cells with amarogentin influences the differentiation process [14].
A simplified working hypothesis for the action of amarogentin on mast cells, keratinocytes, and T cells is shown in Figure 6. First, amarogentin reduces the expression of newly synthesized TNF-α in SP-stimulated mast cells. Second, histamine and TNF-α induce IL-8 as well as MMP-1 release; these secretions could be blocked by amarogentin. Furthermore, amarogentin indirectly influences NGF and IL-6 expression by inhibiting the new synthesis of TNF-α in mast cells after stimulation with SP.
Until now the involvement of TAS2Rs has not been described in chronic inflammatory diseases such as atopic dermatitis and psoriasis. However, we could demonstrate that the expression of TAS2Rs is downregulated in psoriasis (data not shown), so that a stimulation of the remaining receptors may influence the skin condition. The endogenous ligands for TAS2Rs in keratinocytes are still unknown, but it can be speculated that bitter-tasting amino acids of the natural moisturizing factors of the skin (such as tyrosine or histidine) may act in this way.
Conclusion
The bitter compound amarogentin may modulate the milieu of inflamed skin. Mast cells, T cells, and keratinocytes are in close relationship with one another in the uppermost dermis during inflammation, and mast cells and keratinocytes express TAS2Rs and react to bitter compounds. Therefore, TAS2R signaling might modulate skin inflammation in addition to the already described functions of bitter taste receptors such as bronchodilation, hormone secretion, and bacterial killing. However, it must still be clarified whether this effect actually takes place in vivo in pathologic processes such as psoriasis and eczema. | 2018-04-03T02:24:51.213Z | 2015-10-27T00:00:00.000 | {
"year": 2015,
"sha1": "6c32febb8071ec61613f099b3ce97430969dd8be",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mi/2015/630128.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea5d39474e95661296689076ff1a6fd517b2fd53",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
198179441 | pes2o/s2orc | v3-fos-license | The pointwise H\"older spectrum of general self-affine functions on an interval
This paper gives the pointwise H\"older (or multifractal) spectrum of continuous functions on the interval $[0,1]$ whose graph is the attractor of an iterated function system consisting of $r\geq 2$ affine maps on $\mathbb{R}^2$. These functions satisfy a functional equation of the form $\phi(a_k x+b_k)=c_k x+d_k\phi(x)+e_k$, for $k=1,2,\dots,r$ and $x\in[0,1]$. They include the Takagi function, the Riesz-Nagy singular functions, Okamoto's functions, and many other well-known examples. It is shown that the multifractal spectrum of $\phi$ is given by the multifractal formalism when $|d_k|\geq |a_k|$ for at least one $k$, but the multifractal formalism may fail otherwise, depending on the relationship between the shear parameters $c_k$ and the other parameters. In the special case when $a_k>0$ for every $k$, an exact expression is derived for the pointwise H\"older exponent at any point. These results extend recent work by the author [Adv. Math. 328 (2018), 1-39] and S. Dubuc [Expo. Math. 36 (2018), 119-142].
Observe that for any given polygonal line {(x_k, y_k) : k = 0, 1, ..., r} there are many self-affine functions of the form (1.3). For instance, when a_k > 0 for every k, (1.2) is equivalent to the system

e_k = y_{k-1} - d_k y_0, c_k = y_k - y_{k-1} - d_k (y_r - y_0), k = 1, 2, ..., r, (1.4)

so there is one degree of freedom for each k. One may, for example, freely choose all the vertical (signed) contraction ratios d_1, ..., d_r, and then the other parameters are fixed.

Example 1.1 (Generalized Takagi functions). For w > 0, let

φ_w(x) := Σ_{n=0}^∞ 2^{-wn} ϕ(2^n x),

where ϕ(x) denotes the distance from x to the nearest integer. Then φ_w satisfies (1.3) with r = 2, a_1 = a_2 = c_1 = -c_2 = 1/2, and d_1 = d_2 = 2^{-w}. The case w = 1 gives the classical Takagi function (see [20]), shown in the leftmost panel of Figure 1. When 0 < w < 1, φ_w is nowhere differentiable and its graph has Hausdorff dimension greater than 1 [14]; and when w > 1, φ_w is differentiable everywhere except at the dyadic rationals in [0,1]. Note that when we set w = 2 we obtain the parabola φ_2(x) = 2x(1 - x), which is uninteresting from the point of view of multifractal analysis. In most of the results of this paper, we will have to explicitly rule out the possibility that φ is a polynomial.
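The following is a minimal numerical sketch of this parametrization, assuming the reconstruction of system (1.4) given above; the helper names affine_params and phi_at are ours, not the paper's. Given the vertices and freely chosen d_k, it recovers c_k and e_k, and evaluates φ at finitely coded points by iterating the functional equation φ(a_k x + b_k) = c_k x + d_k φ(x) + e_k:

```python
# Sketch (our reconstruction): parameters of a self-affine function from its
# polygonal vertices (x_k, y_k) and chosen vertical ratios d_k, via (1.4).
def affine_params(xs, ys, ds):
    r = len(ds)
    params = []
    for k in range(r):  # Python index k corresponds to k+1 in the text
        a = xs[k + 1] - xs[k]
        b = xs[k]
        e = ys[k] - ds[k] * ys[0]
        c = ys[k + 1] - ys[k] - ds[k] * (ys[r] - ys[0])
        params.append((a, b, c, ds[k], e))
    return params

def phi_at(coding, t, params, y0, yr):
    """Evaluate phi(S_{k_1} o ... o S_{k_n}(t)) for t in {0, 1}."""
    x = float(t)
    val = y0 if t == 0 else yr
    for k in reversed(coding):   # innermost map S_{k_n} is applied first
        a, b, c, d, e = params[k]
        val = c * x + d * val + e   # functional equation at the point x
        x = a * x + b               # x becomes S_k(x) for the next level
    return val

# Classical Takagi function: vertices (0,0), (1/2,1/2), (1,0), d_1 = d_2 = 1/2.
params = affine_params([0, 0.5, 1], [0, 0.5, 0], [0.5, 0.5])
print(phi_at([0, 1], 0, params, 0, 0))  # phi(1/4) = 0.5 for the Takagi function
```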
Many more examples, as well as a useful Matlab program for drawing graphs of self-affine functions (which the author gratefully used to produce Figure 1), can be found in Dubuc's expository paper [5]. Dubuc shows that φ defined by (1.3) is nowhere differentiable if and only if |d_k| ≥ |a_k| for each k. He furthermore determines the global Hölder exponent of φ, and shows that

α_φ(ξ) = (Σ_{k=1}^r |a_k| log|d_k|) / (Σ_{k=1}^r |a_k| log|a_k|)

almost everywhere in (0, 1), unless φ happens to be a polynomial. This generalizes an earlier result of Bedford [3]. In the present article we extend the results of [5] by determining the complete pointwise Hölder spectrum of φ; that is, the function

D(α) := dim_H E_φ(α), where E_φ(α) := {ξ ∈ [0, 1] : α_φ(ξ) = α}, α > 0.
When the maps T_k do not include shears (that is, when c_k = 0 for all k), it follows from the main result of [1] that D(α) is given by the multifractal formalism (see below). When the maps T_k do include shears, however, a drop in Hölder exponent can occur for Hölder exponents greater than 1. This happens at points which are exceptionally well approximated by the images of 0 and 1 under the composite maps S_{k_1} ∘ ... ∘ S_{k_n}, for k_1, ..., k_n ∈ {1, 2, ..., r}, where S_k(x) := a_k x + b_k. When |d_k| ≥ |a_k| for at least one k, this does not affect the function D(α) (it is still given by the multifractal formalism). But when |d_k| < |a_k| for all k, the multifractal formalism no longer holds: the graph of D(α) then includes a straight line from the point (1, 0) that is tangent to the graph of the function predicted by the multifractal formalism.
In order to make this precise, we first introduce some notation. Let

I_+ := {k : d_k ≠ 0} and I_0 := {k : d_k = 0}.

To avoid degenerate cases, we assume #I_+ ≥ 2. Define

ρ_k := log|d_k| / log|a_k|, k ∈ I_+,

and let α_min := min_{k∈I_+} ρ_k, α_max := max_{k∈I_+} ρ_k. Furthermore, let s_min, s_max and ŝ be the nonnegative numbers satisfying

Σ_{k∈I_+: ρ_k=α_min} |a_k|^{s_min} = 1, Σ_{k∈I_+: ρ_k=α_max} |a_k|^{s_max} = 1, Σ_{k∈I_+} |a_k|^{ŝ} = 1.

Note that ŝ > 0 since #I_+ ≥ 2. On the other hand, s_min and s_max are typically zero unless there is a tie for the minimum (resp. maximum) of log|d_k| / log|a_k|. Put

α̂ := (Σ_{k∈I_+} |a_k|^{ŝ} log|d_k|) / (Σ_{k∈I_+} |a_k|^{ŝ} log|a_k|).

Note that α_min ≤ α̂ ≤ α_max. For each q ∈ R, let β(q) be the unique real number such that

Σ_{k∈I_+} |d_k|^q |a_k|^{β(q)} = 1.

It is well known from multifractal theory (e.g. [7, Chapter 17]) that the function β(q) is strictly decreasing and convex, and its Legendre transform

β*(α) := inf_{q∈R} {αq + β(q)}

is strictly concave on the interval [α_min, α_max], and takes the value -∞ outside this interval. We say the multifractal formalism holds if D(α) = β*(α) for all α. In [1], the multifractal formalism is shown to hold when c_k = 0 for every k, provided α_φ(ξ) is replaced with a closely related variant of the pointwise Hölder exponent (see also Section 8 below, where we show that the results of [1] hold in fact for α_φ). However, as shown below, the multifractal formalism may fail in the more general setting. In what follows, we shall assume that a_k > 0 for each k. It is straightforward, but cumbersome, to state similar theorems for cases where some of the a_k are negative.
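Numerically, β(q) and its Legendre transform are easy to approximate. The sketch below is ours, and it assumes the sign convention Σ_{k∈I_+} |d_k|^q |a_k|^{β(q)} = 1 reconstructed above; β(q) is found by bisection (the left side is monotone in β since 0 < a_k < 1), and β*(α) is minimized over a grid of q values:

```python
# Sketch: numerical beta(q) and its Legendre transform beta*(alpha).
def beta(q, a, d, lo=-50.0, hi=50.0, tol=1e-12):
    def F(b):  # decreasing in b because each a_k lies in (0, 1)
        return sum(abs(dk) ** q * ak ** b for ak, dk in zip(a, d) if dk != 0) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def beta_star(alpha, a, d, qgrid):
    # Legendre transform; meaningful for alpha in [alpha_min, alpha_max]
    return min(alpha * q + beta(q, a, d) for q in qgrid)

a, d = (1/3, 2/3), (0.25, 0.25)            # illustrative parameters
qs = [i / 10 for i in range(-100, 101)]    # q in [-10, 10]
print(beta_star(2.0, a, d, qs))            # spectrum value predicted at alpha = 2
```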
Define the index set

Λ := { k ∈ {1, ..., r-1} : (1/a_k)(c_k + d_k c_r/(a_r - d_r)) ≠ (1/a_{k+1})(c_{k+1} + d_{k+1} c_1/(a_1 - d_1)) and max{|d_k|, |d_{k+1}|} > 0 }. (1.7)

Here we interpret 0/0 as zero, and we declare that k ∈ Λ if a_1 = d_1 and c_1 d_{k+1} ≠ 0, or similarly, if a_r = d_r and c_r d_k ≠ 0. A special case of our main theorem below (with r = 2 and a_1 = a_2 = 1/2) was proved by Ben Slimane [4].
Theorem 1.4. Assume that φ is not a polynomial.

(a) Suppose |d_k| ≥ a_k for at least one k, or Λ = ∅. Then the multifractal formalism holds: D(α) = β*(α) for all α.

(b) Suppose |d_k| < a_k for every k and Λ ≠ ∅. Let σ > 0 be such that

Σ_{k∈I_+} (|d_k|/a_k)^σ = 1,

and put p*_k := (|d_k|/a_k)^σ for k ∈ I_+. Let

α_0 := (Σ_{k∈I_+} p*_k log|d_k|) / (Σ_{k∈I_+} p*_k log a_k).

Then: (i) D(α) = σ(α - 1) for 1 ≤ α ≤ α_0, and (ii) D(α) = β*(α) for α_0 ≤ α ≤ α_max; (1.8) (iii) the set {ξ ∈ [0, 1] : α_φ(ξ) = ∞} is empty if I_0 = ∅, and has Lebesgue measure one otherwise; (iv) the function D(α) is differentiable at α_0.
The multifractal spectrum of Theorem 1.4 (b) is illustrated in Figure 2.

Figure 2. Typical graph of D(α) when |d_k| < a_k for all k and Λ ≠ ∅.

Remarks. (i) Note that the c_k's are used to determine whether Λ = ∅, but play no further role in the determination of the multifractal spectrum of φ.
(ii) When r = 2, the set Λ contains at most the number 1. It is easy to check that in this case (assuming d_1 < a_1 and d_2 < a_2),

1 ∈ Λ if and only if (1/a_1)(c_1 + d_1 c_2/(a_2 - d_2)) ≠ (1/a_2)(c_2 + d_2 c_1/(a_1 - d_1)). (1.9)

(iii) Theorem 1.4 (b) includes the following special case: If log|d_k| / log a_k is constant (for all k ∈ I_+), then α_max = α_min = α̂ and p*_k = a_k^{ŝ} for each k ∈ I_+, so that α_0 = α̂ as well. In this case, the graph of D(α) is just the straight line segment connecting the points (1, 0) and (α̂, ŝ).
(iv) When a_k < 0 for some k, the set Λ has to be modified. Other than that, however, the statement of the theorem remains essentially the same.
We briefly mention some related literature here. Jaffard [11] determines the multifractal spectrum of "self-similar" functions R^d → R, but imposes a certain smoothness condition that rules out our functions. However, Jaffard and Mandelbrot [12] adapt Jaffard's method to compute the multifractal spectrum of Pólya's space-filling curve, which is of the form (1.3) except that it maps [0, 1] into R^2. Ben Slimane [4] gives a complete description of the Hölder spectrum of φ in the case r = 2 and a_1 = a_2 = 1/2. (This includes the Riesz-Nagy function and the generalized Takagi functions from Example 1.1.) Self-affine functions mapping [0, 1] into R^d for any d ≥ 1 were studied by Allaart [1] and Bárány et al. [2]. These functions satisfy a functional equation f(a_k x + b_k) = Ψ_k(f(x)) for k = 1, 2, ..., r and x ∈ [0, 1], where the Ψ_k : R^d → R^d are similarities in [1], and general affine maps in [2]. (Not surprisingly, the price for greater generality in [2] is that the results are less explicit, and severe restrictions must be imposed to obtain the full pointwise Hölder spectrum.) Recently, Jaerisch and Sumi [9] have extended some of the results of [1] to distribution functions of general Gibbs measures.
The rest of this article is organized as follows. In Section 2 we define a kind of generalized entropy function and state two useful duality principles. Section 3 gives two concrete examples illustrating the main theorem. Section 4, the longest and most technical section of the paper, develops an exact expression for the right pointwise Hölder exponent of φ at any point ξ. A crucial role in its proof is played by the method of divided differences that was introduced in this context by Dubuc [5]. This expression is then used in the next two sections, which prove the lower and upper bounds in Theorem 1.4, respectively. The differentiability of D(α) at α 0 is proved in Section 7. Finally, in Section 8, we show how the techniques of this paper can be used to strengthen the main result of [1].
Generalized entropy and duality principles
Before illustrating the main theorem, we present β*(α) and the function in (1.8) as constrained maxima over the r-simplex of certain entropy-like functions. These dual representations will be important for the proofs later and can also be convenient for concrete computations. We denote by ∆_r the standard simplex in R^r:

∆_r := { p = (p_1, ..., p_r) ∈ R^r : p_k ≥ 0 for each k and Σ_{k=1}^r p_k = 1 },

and we write ∆⁰_r := { p ∈ ∆_r : p_k = 0 for all k ∈ I_0 }. Define

H(p_1, ..., p_r) := (Σ_{k∈I_+} p_k log p_k) / (Σ_{k=1}^r p_k log a_k),

where as usual, we set 0 log 0 ≡ 0. A proof of the following useful duality principle can be found in [1].
Proposition 2.1 represents β * (α) as the maximum value of H over the intersection of a simplex with a hyperplane. The characterization is especially useful when r = 2, in which case the intersection consists of a single point, which is the solution of two linear equations in the two unkwowns p 1 and p 2 .
In order to prove Theorem 1.4 (b), we need a second, somewhat more involved duality principle.

Lemma 2.2. Assume |d_k| < a_k for each k. The maximum value of

G(p_1, ..., p_r) := (Σ_{k∈I_+} p_k log p_k) / (Σ_{k∈I_+} p_k (log|d_k| - log a_k))

over ∆⁰_r is σ, and is attained at (p*_1, ..., p*_r) (where we define p*_k := 0 for k ∈ I_0).

Proof. We could use the method of Lagrange multipliers, but the following argument is more direct. Let q_k := (|d_k|/a_k)^σ, and observe that, since the denominator of G is negative, the inequality G(p_1, ..., p_r) ≤ σ is equivalent to Σ p_k log p_k ≥ Σ p_k log q_k (the summations being over I_+). But since Σ_{k∈I_+} q_k = 1 and

Σ_{k∈I_+} p_k log p_k - Σ_{k∈I_+} p_k log q_k = Σ_{k∈I_+} p_k log(p_k/q_k),

we recognize the last expression as the relative entropy (or Kullback-Leibler divergence) of the probability vector (p_1, ..., p_r) relative to the probability vector (q_1, ..., q_r), which is always nonnegative (an easy consequence of Jensen's inequality). Moreover, equality obtains when p_k = q_k for all k ∈ I_+.
Proposition 2.3. Assume |d_k| < a_k for every k, and let α_0 be as in Theorem 1.4 (b); that is, α_0 = (Σ p*_k log|d_k|) / (Σ p*_k log a_k), where the summations are over k ∈ I_+. Then, for α > 1,

β*(α) ≤ σ(α - 1),

with equality if and only if α = α_0.

Proof. This follows from Proposition 2.1 and Lemma 2.2, by noting that Σ_{k∈I_+} p*_k (log|d_k| - α log a_k) < 0 if and only if α < α_0.
Examples
The first example, a generalization of Example 1.1, illustrates how to use Theorem 1.4 in combination with Propositions 2.1 and 2.3. For more examples along these lines, see [1].
Example 3.1 (Skew Takagi function). Fix numbers a ∈ (0, 1/2), h > 0 and 0 < d < 1, and consider the self-affine function φ from (1.3) with parameters r = 2, a_1 = 1 - a_2 = a, c_1 = h = -c_2, and d_1 = d_2 = d. We can view φ as a skewed version of the Takagi function from Example 1.1, with the tent map ϕ replaced by the triangle with vertices (0, 0), (1, 0) and (a, h); see Figure 3. If d ≥ a, we are in the situation of Theorem 1.4 (a) and the Hölder spectrum of φ is given by the multifractal formalism. Here Proposition 2.1 allows us to calculate β*(α) very explicitly. Solving simultaneously the equations p_1 + p_2 = 1 and Σ_{k=1}^2 p_k (log d_k - α log a_k) = 0, we obtain

p_1 = (α log(1 - a) - log d) / (α log((1 - a)/a)), p_2 = 1 - p_1,

and then

β*(α) = α (p_1 log p_1 + p_2 log p_2) / log d. (3.1)

Now assume that d < a instead. We must then determine the set Λ. Using the characterization (1.9), we see that Λ = ∅ if and only if d = a(1 - a). Assuming this is not the case, we are in the situation of Theorem 1.4 (b). With σ the unique solution of (d/a)^σ + (d/(1 - a))^σ = 1, we can calculate

α_0 = log d / (p*_1 log a + p*_2 log(1 - a)), where p*_1 = (d/a)^σ and p*_2 = (d/(1 - a))^σ,

and then D(α) is as given in (1.8), with β*(α) again given by (3.1).
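The constants of Example 3.1 in the regime d < a are straightforward to compute numerically. The following sketch is ours (the parameter values are illustrative, and the α_0 formula is the one reconstructed above): σ is found by bisection, after which p* and α_0 follow directly:

```python
# Sketch: sigma, p*, and the crossover exponent alpha_0 for the skew Takagi
# function of Example 3.1 with illustrative parameters d < a < 1 - a.
import math

a, d = 0.3, 0.15

def sigma(a, d, lo=0.0, hi=50.0, tol=1e-12):
    f = lambda s: (d / a) ** s + (d / (1 - a)) ** s - 1.0  # decreasing in s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

s = sigma(a, d)
p1, p2 = (d / a) ** s, (d / (1 - a)) ** s          # p*_1 and p*_2
alpha0 = math.log(d) / (p1 * math.log(a) + p2 * math.log(1 - a))
print(f"sigma = {s:.4f}, p* = ({p1:.4f}, {p2:.4f}), alpha_0 = {alpha0:.4f}")
```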
The next example shows that certain self-affine functions can be obtained as indefinite integrals of other self-affine functions.
Example 3.2. Consider the case when c_k = 0 for every k, so φ(a_k x + b_k) = d_k φ(x) + e_k for k = 1, 2, ..., r. Let ψ(x) := ∫_0^x φ(t) dt. A straightforward calculation shows that ψ is again self-affine, satisfying

ψ(a_k x + b_k) = a_k e_k x + a_k d_k ψ(x) + ψ(b_k), x ∈ [0, 1], k = 1, 2, ..., r.

It should be clear from the definition of ψ(x) that ψ is differentiable everywhere on (0, 1), and its multifractal spectrum is just that of φ shifted one unit to the right. Indeed, although the vertical contraction ratios a_k d_k of ψ satisfy |a_k d_k| < a_k for all k, it is not difficult to verify using (1.4) that Λ = ∅, so Theorem 1.4 (a) applies equally to φ and ψ.
Vice versa, if φ satisfies (1.3) with |d_k| < a_k for each k and Λ = ∅, then φ' exists everywhere and is again self-affine, with parameters â_k = a_k, b̂_k = b_k, ĉ_k = 0, d̂_k = d_k/a_k, and ê_k = c_k/a_k.
The exact Hölder exponent of φ
For the remainder of this paper we shall assume without further mention that a_k > 0 for every k.
We first introduce some additional notation.
Given α > 0 and ξ ∈ [0, 1], we write φ ∈ C^α(ξ) if there is a constant C and a polynomial P of degree less than α such that

|φ(x) - P(x)| ≤ C|x - ξ|^α for all x in a neighborhood of ξ.

Define the right and left pointwise Hölder exponents of φ at ξ, respectively, by α_φ^+(ξ) and α_φ^-(ξ), obtained by restricting x in the definition above to x ≥ ξ and x ≤ ξ, respectively. We aim to give an exact expression for α_φ^+(ξ) for all but countably many points; a formula for α_φ^-(ξ) can be obtained similarly. Precisely, we will omit the points from the countable set

T := { S_{k_1} ∘ ... ∘ S_{k_n}(t) : n ∈ N, k_1, ..., k_n ∈ {1, ..., r}, t ∈ {0, 1} }

of endpoints of basic intervals. (These are the r-adic rational points in case a_k = 1/r for each k.) A precise statement requires the following notation. Let ξ ∈ (0, 1)\T, and let (k_1, k_2, ...) be the coding of ξ, that is, the unique sequence with

ξ ∈ S_{k_1} ∘ ... ∘ S_{k_n}([0, 1]) for every n, (4.1)

and let L_n(ξ) := max{j ∈ N : k_{n-j+1} = k_{n-j+2} = ... = k_n = r}, or L_n(ξ) = 0 in case k_n < r. We now define two indicators: χ_n(ξ) := 1 if k_{n-L_n(ξ)} + 1 ∈ I_+ and χ_n(ξ) := 0 otherwise; and ζ_n(ξ) := 1 if k_{n-L_n(ξ)} ∈ Λ and ζ_n(ξ) := 0 otherwise. Assuming k_i ∈ I_+ for all i, we define

γ_0(ξ) := liminf_{n→∞} (Σ_{i=1}^n log|d_{k_i}|) / (Σ_{i=1}^n log a_{k_i}),

and two further exponents γ_1(ξ) and γ_2(ξ), which, together with the quantities K_1(ξ) and K_2(ξ), register the limiting behavior of L_n(ξ)/n.

Theorem 4.1. Assume a_k > 0 for all k, and let ξ ∈ (0, 1)\T with coding (k_1, k_2, ...).
Assume also that φ is not a polynomial. Then α_φ^+(ξ) = γ(ξ) := min{γ_0(ξ), γ_1(ξ), γ_2(ξ)}. Moreover, if γ(ξ) > 1, then the right derivative of φ at ξ is given by (4.3).

Note that the denominator in the expressions for γ_1 and γ_2 is negative. Hence, exceptionally large values of L_n(ξ) can reduce the pointwise Hölder exponent of φ at ξ from the "default" value γ_0 when K_1 or K_2 is positive. On the other hand, if L_n(ξ) = o(n), then we simply have α_φ^+(ξ) = γ_0(ξ).

Remark 4.2. We can similarly give an expression for α_φ^-(ξ). Rather than stating a formal theorem, we briefly indicate how to modify the one above. First, in the definitions of K_1, K_2 and L_n, switch the roles of the digits 1 and r. The definition of χ_n should be changed to χ_n(ξ) = 1 if k_{n-L_n(ξ)} - 1 ∈ I_+ and 0 otherwise; and the definition of ζ_n should be changed to ζ_n(ξ) = 1 if k_{n-L_n(ξ)} - 1 ∈ Λ and 0 otherwise. Then the expression in the theorem gives α_φ^-(ξ), and if this number is greater than 1, then the left derivative φ'_-(ξ) is given by the right-hand side of (4.3).

Remark 4.3. It is possible, in principle, to give exact expressions for α_φ^+(ξ) also in cases where a_k < 0 for some k. However, such statements would be even more complicated than the ones given, for instance, in [1, Theorem 6.1]. We therefore do not pursue this here.
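The "default" exponent γ_0 is easy to approximate numerically from the coding of ξ. The following sketch is ours; it assumes the coding definition (4.1), and the function names coding and gamma0_estimate are not from the paper:

```python
# Sketch: compute the first n coding digits of xi (locating xi in nested
# basic intervals) and estimate gamma_0(xi) as the quotient of log-products.
import math

def coding(xi, a, n):
    digits, x = [], xi
    for _ in range(n):
        acc = 0.0
        for k, ak in enumerate(a):       # find the subinterval containing x
            if x < acc + ak or k == len(a) - 1:
                digits.append(k)
                x = (x - acc) / ak       # renormalize to [0, 1]
                break
            acc += ak
    return digits

def gamma0_estimate(digits, a, d):
    num = sum(math.log(abs(d[k])) for k in digits)
    den = sum(math.log(a[k]) for k in digits)
    return num / den

a, d = (0.5, 0.5), (0.5, 0.5)            # classical Takagi parameters
digs = coding(math.pi - 3, a, 40)
print(gamma0_estimate(digs, a, d))        # equals 1 here, since rho_k = 1
```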
To prove the second statement, suppose γ_0 > 1 and limsup L_n/n < 1. Then there is a constant M such that L_n/(n - L_n) < M for every n. There also is a δ > 0 such that (4.4) holds for all large enough n. Starting from (4.4), we then obtain, for all sufficiently large n, a lower bound independent of n. Hence, γ_2 > 1.
The proof of Theorem 4.1 is quite long and technical. We split it into two parts: first we prove the lower bound (Steps 1 and 2), and then, after introducing some useful facts about divided differences, we prove the upper bound (Steps 3 and 4).
We shall use the following terminology and notation. By a basic interval of order n we shall mean an interval of the form I = S_{k_1} ∘ ... ∘ S_{k_n}([0, 1]), where k_1, ..., k_n ∈ {1, 2, ..., r}. For any interval I, |I| will denote its length. Let I_{n,j}, j = 1, 2, ..., r^n, denote the basic intervals of order n, enumerated in order from left to right. For ξ ∉ T, there is for each n a unique j such that ξ ∈ I_{n,j}; denote this interval I_{n,j} by I_n(ξ). If I_{n,j} = S_{k_1} ∘ ... ∘ S_{k_n}([0, 1]) and x, ξ ∈ I_{n,j}, there are unique points t_n, τ_n ∈ [0, 1] such that S_{k_1} ∘ ... ∘ S_{k_n}(t_n) = x and S_{k_1} ∘ ... ∘ S_{k_n}(τ_n) = ξ. In that case, we have the explicit expression (4.5) (see [5, p. 135]). Finally, we use the shorthand notation s_{n,k} := #{1 ≤ i ≤ n : k_i = k}, so that a_{k_1} ... a_{k_n} = Π_{k=1}^r a_k^{s_{n,k}}.

Proof of Theorem 4.1 (lower bound). Here we show that α_φ^+(ξ) ≥ γ(ξ). We assume throughout the proof that K_1 > 0 and K_2 > 0; the proof in the other cases is simpler and shorter. Note that our assumption implies that γ = min{γ_1, γ_2}. Fix ξ with coding (k_1, k_2, ...). The case when k_i ∈ I_0 for some i is trivial, so we assume that k_i ∈ I_+ for all i. Let γ = γ(ξ).
... a_{k_n} a_min^2, so there is an absolute constant A > 0 such that A^{-1} < (x - ξ)/(a_{k_1} ... a_{k_n}) < A. We write this as x - ξ ≍ a_{k_1} ... a_{k_n}. Case 1. If x ∈ I_n(ξ), we use (4.5): there is a number ε > 0 and an integer N such that α + ε < 1 and Σ_k s_{n,k} log|d_k| < (α + ε) Σ_k s_{n,k} log a_k for all n > N. This shows that the last product in square brackets in (4.8) tends to zero. The summation in (4.8) we split into two parts; for the remaining terms we can apply (4.9). As a result, the upper estimate for |φ(x) - φ(ξ)|/(x - ξ)^α in (4.8), which holds when x ∈ I_n(ξ), tends to zero as n → ∞.
Case 2. Suppose now that x ∉ I_n(ξ). Let l := L_n(ξ). Note that |I_n(ξ)| = a_{k_1} ... a_{k_n} = a_{k_1} ... a_{k_{n-l}} a_r^l. Let p be the largest integer such that

a_{k_1} ... a_{k_{n-l-1}} a_{k_{n-l}+1} a_1^{l+p} ≥ x - ξ. (4.10)

Observe that l + p ≥ 0 in view of (4.7). Hence, there is an index j' such that |I_{n+p,j'}| = a_{k_1} ... a_{k_{n-l-1}} a_{k_{n-l}+1} a_1^{l+p}, and it follows that the interval I_{n+p,j'} is adjacent to I_n(ξ). By (4.10), x ∈ I_{n+p,j'} and so I_{n+p,j'} = I_{n+p}(x). Observe that our choice of p implies

a_1^{l+p} ≍ a_r^l, (4.11)

and then also

|d_1|^{l+p} ≍ |d_1|^{l log a_r / log a_1}. (4.12)
(c) If a_1 > |d_1|, then the largest term in S_2 is the one with m = n - l, but this term differs at most by a multiplicative constant from the last term in S_1. So in this case, too, (x - ξ)^{-α} S_2 → 0. This completes Step 1.
For x > ξ, let n again be the largest integer satisfying (4.7).
Case 1. Assume first that x ∈ I n (ξ). Then there are numbers t n , τ n in [0, 1] such that (4.21) There exist ε > 0 and N ∈ N such that (4.9) holds. So for n > N, Combining these last two results with (4.21) yields (4.20) for x ∈ I n (ξ).
Case 2. Suppose now that x ∉ I n (ξ). Let l := L n (ξ). We define the integer p, the adjoining interval I n+p,j ′ to the right of I n (ξ) and the connecting point ξ n ∈ T as in Case 2 of Step 1, and note that (4.11) and (4.12) hold. By (4.5), there are points τ n , t n ∈ [0, 1] such that where k ′ i and s ′ m,k are defined as in Step 1. As in (4.18), (x − ξ) −α r k=1 |d k | s ′ n+p,k → 0 and, more straightforwardly, (x − ξ) −α r k=1 |d k | s n,k → 0, so the remainder terms in (4.22) and (4.23) are of no concern. Putting ν := n − l for brevity, we can write the summation in (4.23) as where R 1 and R 2 denote the remainder terms in (4.22) and (4.23), respectively. This gives It has already been established that the first and last terms tend to zero, so we focus on the term involving B n . Recall from our assumptions at the beginning of the proof that |d r | < a r , so the second summation in (4.25) is bounded. We consider three subcases: (i) If |d 1 | > a 1 , then K 1 > K 2 . If d kν +1 ≠ 0, then χ n (ξ) = 1 and the dominant term in the first summation in B n is the one with m = n + p, which is of order (|d 1 |/a 1 ) n+p−ν = (|d 1 |/a 1 ) l+p . So in this case, Step 1, where we used (4.11) and (4.12) in the second step, and the convergence to zero follows from (4.17). If, on the other hand, d kν +1 = 0, then B n simplifies to At this point, there are two possibilities: (a) k ν ∉ Λ. Then B n simplifies further to B n = c r /(a r − d r ) · (d r /a r ) l by definition of Λ, so that (4.9) gives (b) k ν ∈ Λ. Then ζ n (ξ) = 1 and B n ≍ 1, so that since α < γ 2 , where we used the obvious analogy to (4.17).
(ii) Suppose next that |d 1 | = a 1 . If d kν +1 = 0, the situation is as in case (i). If d kν +1 ≠ 0, then we have |B n | ≍ l + p, and hence (iii) Suppose finally that |d 1 | < a 1 . Then K 2 > K 1 . Summing the finite geometric series in (4.25) gives (4.28) Note that B n is bounded. If k ν ∈ Λ, then ζ n (ξ) = 1 and we have (4.27). Suppose k ν ∉ Λ. Then B n simplifies to As for the first term, we see at once that The second term vanishes when d kν +1 = 0. If d kν +1 ≠ 0, then χ n (ξ) = 1, and we obtain as in case (i) above. This completes Step 2, and the proof of the lower bound.
The proof of the upper bound in Theorem 4.1 uses the technique of divided differences. We briefly review the definition and basic properties. For a function f and a finite list of distinct points x 0 , x 1 , . . . , x n , the divided difference f [x 0 , x 1 , . . . , x n ] is defined inductively as follows: f [x i ] := f (x i ) for i = 0, 1, . . . , n, and f [x i , . . . , x i+k ] := (f [x i+1 , . . . , x i+k ] − f [x i , . . . , x i+k−1 ])/(x i+k − x i ) for i = 0, 1, . . . , n − k, k = 1, 2, . . . , n.
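Since the recursion is completely elementary, it may help to see it spelled out; the following Python sketch (purely illustrative, not from the source) builds the full triangular table of divided differences level by level.

```python
def divided_differences(xs, ys):
    """Table of divided differences f[x_i, ..., x_{i+k}].

    xs: distinct points x_0, ..., x_n; ys: values f(x_0), ..., f(x_n).
    Level k of the result holds f[x_i, ..., x_{i+k}] for
    i = 0, ..., n-k, following the inductive definition above.
    """
    n = len(xs) - 1
    table = [list(ys)]  # level 0: f[x_i] = f(x_i)
    for k in range(1, n + 1):
        prev = table[-1]
        level = [(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                 for i in range(n - k + 1)]
        table.append(level)
    return table

# For f(x) = x^2 every second divided difference equals 1 and all
# higher ones vanish, mirroring the analogy with derivatives.
xs = [0.0, 0.5, 1.5, 2.0]
t = divided_differences(xs, [x ** 2 for x in xs])
assert all(abs(v - 1.0) < 1e-12 for v in t[2])
```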
Divided differences have some of the same properties as higher order derivatives.
They are linear in f and satisfy a mean value theorem. For the purposes of this article, the most important properties are the following: The next lemma, whose proof can be found in [5,Lemma 12], is crucial for proving the upper bound on α + φ (ξ). Lemma 4.8 (Dubuc). Let f : [a, b] → R and let x 0 < x 1 < · · · < x N be an increasing sequence of N + 1 distinct points in [a, b]. Then for any x ∈ [a, b] there is an index k ∈ {0, 1, . . . , N} such that where δ := min{x k − x k−1 : k = 1, . . . , N}.
so by the counterpart of (4.9), for any α > γ 0 . If χ n (ξ) = 0 for all but finitely many n, then γ = γ 0 and we are done. Assume therefore that χ n (ξ) = 1 for infinitely many n. There is then an increasing subsequence (n i ) such that χ n i (ξ) = 1 for each i, and Take n = n i and let l := L n (ξ). Let j be the integer such that I n (ξ) = I n,j . We may assume also that l ≥ 1 (otherwise we simply scratch this n from the subsequence (n i )). Then the next basic interval I n,j+1 has length a k 1 . . . a k n−l−1 a k n−l +1 a l 1 . Note that a k 1 . . . a k n−l−1 a k n−l +1 ≥ a min a k 1 . . . a k n−l ≥ a min a k 1 . . . a kn = a min |I n (ξ)|.
Let p be the largest integer such that a k 1 . . . a k n−l−1 a k n−l +1 a l+p 1 ≥ a min |I n (ξ)|. Then l + p ≥ 0, so there is an integer j ′ such that the basic interval I n+p,j ′ has length |I n+p,j ′ | = a k 1 · · · a k n−l−1 a k n−l +1 a l+p 1 and is adjacent to (and to the right of) I n (ξ). Write I n+p,j ′ = [ξ ′ n , ξ ′′ n ]. Let g ′ n be the affine map satisfying g ′ n (0) = ξ ′ n and g ′ n (1) = ξ ′′ n . Put t ′ n := g ′ n (t 0 ). Applying Lemma 4.8 to the interval I n+p,j ′ we see that there is a point w n ∈ {ξ ′ n , t ′ n , ξ ′′ n } such that |φ(w n ) − φ(ξ ′ n )| ≥ C 1 r k=1 |d k | s ′ n+p,k , with C 1 as above. Then for at least one x ∈ {w n , ξ ′ n }, Now, since α > γ 1 (ξ), there is an ε > 0 and an integer N such that if n ∈ (n i ) and n > N, then ∑ r k=1 s n,k log |d k | + K 1 l > (α − ε) ∑ r k=1 s n,k log a k . This gives the dual to (4.18), namely for some constant C 2 > 0. Thus, φ ∉ C α + (ξ). This completes Step 3.
Step 4. We assume that γ ≥ 1 and show that φ ∉ C α + (ξ) for any α > γ. Case 1. Suppose first that γ = γ 2 < γ 1 . Take α ∈ (γ, γ 1 ). From the work in Step 2 it follows that with D(ξ) defined as in (4.19). Hence, if there is a polynomial P (x) such that (1.1) holds, it must be the case that P (x) = φ(ξ) + D(ξ)(x − ξ). We show that this leads to a contradiction. Let (n i ) be an increasing sequence of positive integers such that Since γ 2 < γ 0 , we may assume that for each n ∈ (n i ), ζ n (ξ) = 1 and so k n−Ln(ξ) ∈ Λ. Moreover, lim sup i→∞ L n i (ξ)/n i > 0, so we may assume that L n i (ξ) > l 0 for all i, for some suitable number l 0 to be chosen later. Take n ∈ (n i ), and put l := L n (ξ) and ν := n − l. Choose p and j ′ as in Step 3, and write I n+p,j ′ = [ξ ′ n , ξ ′′ n ]. Recall that I n+p,j ′ is a basic interval adjacent to I n (ξ).
We claim that B n is bounded away from zero when k ν ∈ Λ. To show this, we distinguish two cases: (i) |d 1 | < a 1 . Here we can write B n as in (4.28). We first show that l + p is large when l is large. From the choice of p it follows that so that (4.31) l + p + 1 > log a max log a 1 (l + 1). Let Then C Λ > 0 by definition of Λ. Thus, since lim sup L n i (ξ)/n i > 0, we can assume in view of (4.28) and (4.31) that l is large enough so that |B n | ≥ 1 2 C Λ > 0. (ii) |d 1 | ≥ a 1 . Then K 1 ≥ K 2 , so our assumption that γ 2 < γ 1 implies that d k n−Ln(ξ) +1 = 0 for all sufficiently large n. So we may assume d kν +1 = 0. Then (see the note following (1.7)) k ν ∈ Λ means that and of course there are only finitely many possible values for this expression. Denote the smallest of these (in absolute value) by C ′ Λ . At the same time, B n now simplifies to (4.26). Thus, we can assume l is large enough so that |B n | ≥ 1 2 |C ′ Λ | > 0. This proves our claim.
Assume first that γ 1 = γ 0 . Let ξ (j) n := g n (t j ) for j = 0, 1, . . . , N. By Lemma 4.7(i), P [ξ Assume next that γ 1 < γ 0 . Then there is a subsequence (n i ) such that for each n ∈ (n i ), χ n (ξ) = 1. Take such an n. We now proceed as in the second case of Step 3, choosing the integer p and the interval I n+p,j ′ in the same way and letting g ′ n be the affine map which maps [0, 1] onto I n+p,j ′ without reflections. Now set ξ Precisely as in Step 3 (see (4.29) and beyond), it now follows that lim sup n→∞ |R(w n )| (w n − ξ) α = ∞.
Proof of the lower bound
Here we assume first that we are in the setting of Theorem 1.4 (b); that is, |d k | < a k for all k, and Λ ≠ ∅. Without loss of generality we may assume that there is an element k * of Λ such that d k * ≠ 0. (If this is not the case, then there is a k * ∈ Λ such that d k * +1 ≠ 0, and we reverse the roles of α + φ and α − φ in what follows; see Remark 4.2.) Note that k * < r.
Fix α > 1, and assume p satisfies Let E p,λ be the set of sequences (k i ) ∈ E λ satisfying s ′ n,k N n → p k , k = 1, 2, . . . , r, Then µ p,λ (E p,λ ) = 1 by the strong law of large numbers and the well known fact that almost surely, maximum run lengths of digits grow at most logarithmically. Observe also that for (k i ) ∈ E p,λ , (5.6) and (5.7) imply by the construction of E λ , since L − n ≤L − n + 1 for all n ∈ N, and L + n ≤L + n for all n ∈ J 1 .
. Proof. Since |d 1 | < a 1 and |d r | < a r , it follows that K 2 ≥ max{K 1 , 0}. Let (k i ) ∈ E p,λ and ξ := π((k i )). Clearly, ξ ∈ T . By Theorem 4.1, Since L + n /n → 0 along n ∈ J 1 , this lim inf is attained along n = n j . Suppose n = n j ; then L + n = l j and so k n−L + n = k n j −l j = k * ∈ Λ. Hence ζ n = 1. Observe also that r k=1 and similarly, r k=1 s n,k log a k = r k=1 s ′ n,k log a k + l j log a r + o(l j ), using (5.3), which also implies that j/l j → 0 as j → ∞. As a result, r k=1 s n,k log |d k | + K 2 ζ n L + n r k=1 s n,k log a k = r k=1 s ′ n,k log |d k | + l j log a r + o(l j ) r k=1 s ′ n,k log a k + l j log a r + o(l j ) where the first equality uses the definition of K 2 , the last equality follows from (5.4), and the convergence follows from (5.5) after dividing numerator and denominator by N n j , noting that N n j = n j − j i=1 l i − (2j − 1) ∼ n j − l j by (5.3), so that l j /N n j ∼ l j /(n j − l j ) → λ/(1 − λ) = τ . Thus, α + φ (ξ) = α. Likewise, by (5.7) and the analog of Theorem 4.1 for α − φ (see Remark 4.2) we have α − φ (ξ) = γ 0 (ξ) ≥ γ 2 (ξ). These two observations yield the first inclusion of the lemma; the second follows from Corollary 4.9.
We shall need the following lemma (see [7,Proposition 4.9]), in which B(x, ρ) denotes the open ball centered at x with radius ρ, and H s denotes s-dimensional Hausdorff measure.
Lemma 5.2. Let µ be a mass distribution on R n , let F ⊂ R n be a Borel set and let 0 < c < ∞ be a constant. If lim sup ρ→0 µ(B(x, ρ))/ρ s < c for all x ∈ F, then H s (F) ≥ µ(F)/c. Then for any ξ ∈ π(E p,λ ) and any ε > 0, Proof. Fix ξ ∈ π(E p,λ ) and 0 < ε < s(p). We first show that Note that subject to (5.4), s(p) can be written as (5.10) s(p) := ( ∑ r k=1 p k log p k )/( ∑ r k=1 p k log a k + τ log a r ).
It suffices to consider n ∈ J 1 ∪ J 3 . For all n we have μ̃ p,λ (I n (ξ)) = r k=1 p k s ′ n,k , and hence (5.11) lim n→∞ log μ̃ p,λ (I n (ξ))/N n = r k=1 p k log p k by (5.5). Suppose first that n j − l j < n ≤ n j . Then s ′ n,k log a k + l j log a r + o(l j ).
Proof of the upper bound
In this section we prove the upper bounds in Theorem 1.4 (a) and (b). For a short proof of the following elementary lemma, see [1]. Lemma 6.1. Let n ∈ N, and let m 1 , . . . , m r be nonnegative integers with r k=1 m k = n. Put p k := m k /n for k = 1, . . . , r. Then n!/(m 1 ! · · · m r !) ≤ ∏ r k=1 p k −m k , where we use the convention 0 0 ≡ 1.
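As a quick sanity check of this multinomial bound, one can compare both sides numerically for a few digit-count vectors; the inequality in the form stated above is the standard elementary one and is assumed here (Python used purely for illustration).

```python
from math import factorial, prod

def multinomial(ms):
    """Multinomial coefficient n! / (m_1! ... m_r!)."""
    n = sum(ms)
    return factorial(n) // prod(factorial(m) for m in ms)

def upper_bound(ms):
    """prod over k of p_k^{-m_k}, with the convention 0^0 = 1."""
    n = sum(ms)
    return prod((m / n) ** (-m) for m in ms if m > 0)

for ms in [(3, 4, 5), (10, 0, 2), (1, 1, 1, 1)]:
    assert multinomial(ms) <= upper_bound(ms) + 1e-9
```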
The next result is probably known. However, since the author could not find it in the literature, a proof is included for completeness. Choose ε > 0 small enough so that This is possible since a s r < 1. If ξ ∈ R 1 , then for every N ∈ N there is an n ≥ N such that L + n (ξ) > (1 − ε)n.
where the second inequality follows from Lemma 6.1, and the last line follows from (6.1). Hence, H s (R 1 ) = 0, and since s > 0 was arbitrary, the lemma follows.
so it remains to bound the dimension of E 2 (α). If α < 1, then E 2 (α) = ∅ by Lemma 4.4, so we need only consider the case α ≥ 1. First, for α = 1, E 2 (α) contains only points ξ with lim sup L + n (ξ)/n = 1 by the second statement of Lemma 4.4, so dim H E 2 (1) = 0 by Lemma 6.2. Assume then, for the remainder of the proof, that α > 1. Here, the third inequality follows from Lemma 6.1 and the fourth from (6.9). The final series converges; thus, letting η ց 0 (and hence N → ∞), we obtain H s (E ε 2 (α)) = 0. Therefore, dim H (E ε 2 (α)) ≤ s, as was to be shown. The upper bound in Theorem 1.4 (b) now follows, using Proposition 2.3. To see the upper bound in Theorem 1.4 (a), make the following two observations: First, if |d k | ≥ a k for some k, then one can check easily that the constrained maximum in the definition of h(α) is achieved on the boundary p k (log |d k | − α log a k ) = 0, from which it follows that h(α) ≤ β * (α) via Proposition 2.1. Second, if Λ = ∅, then there are no points ξ for which γ 2 (ξ) < γ 0 (ξ), so E 2 (α) = ∅. In both cases, we conclude that D(α) ≤ β * (α).
Let d k := |y k − y k−1 |, so d k is the Lipschitz constant (or contraction ratio) of Ψ k . Note that in this setup, d k ≥ 0 for every k. Moreover, since the maps T k factor we have necessarily that ∑ r k=1 d k ≥ 1 = ∑ r k=1 |a k |. Thus, a situation as in Theorem 1.4 (b) cannot occur for these functions.
In [1], two types of Hölder exponent are considered: One is the pointwise Hölder exponent α f (ξ) defined in the Introduction of the present article; the other is Clearly, α f (ξ) ≥ α̃ f (ξ), but a priori these exponents need not be equal. One of the main results of [1] is that (8.2) dim H {ξ ∈ (0, 1) : α̃ f (ξ) = α} = β * (α), α ∈ (α min , α max ), with α min , α max and β * (α) defined as in Section 1 (see [1,Theorem 2.7]). As in the present paper, the proof involves an exact expression for α̃ f (ξ), which can be reformulated in terms of our notation from Section 4 as under the simplifying assumption that a k > 0 for each k and the constant K 1 from Section 4 is positive. (When a k < 0 for one or more k's, the expression is more complicated; see [1, Theorem 6.1]. When K 1 < 0, one interchanges the roles of the digits 1 and r.) Using the method of divided differences that was employed in the proof of Theorem 4.1 (based on Dubuc's lemma), it is straightforward to "upgrade" the proof of [1, Theorem 6.1] and show that (8.3) (and hence (8.2)) holds with α f (ξ) in place of α̃ f (ξ). Even in the case when a k < 0 for some k, this still works.
In [1,Theorem 2.5], it is shown that α f (ξ) = α̃ f (ξ) for the special case when a 1 = · · · = a r = 1/r. We can now conclude that α f (ξ) = α̃ f (ξ) for any function f of the form (8.1). (This fails in general for the self-affine functions φ from (1.3), which can have a nonzero finite derivative as shown in Theorem 4.1.) | 2019-07-23T02:13:46.000Z | 2019-07-23T00:00:00.000 | {
"year": 2020,
"sha1": "725f41901e882656c482b77e343518ac96f53303",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1907.09660",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "39f8c7605e1ca90174f5f568ffb704bd2b3165c2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
261005270 | pes2o/s2orc | v3-fos-license | Collision-constrained deformable image registration framework for discontinuity management
Topological changes like sliding motion, sources and sinks are a significant challenge in image registration. This work proposes the alternating direction method of multipliers as a general framework for preventing separate objects, registered with individual deformation fields, from overlapping during image registration. This constraint is enforced by introducing a collision detection algorithm from the field of computer graphics, which results in a robust divide-and-conquer optimization strategy using Free-Form Deformations. A series of experiments, including synthetic examples containing complex displacement patterns, demonstrates that the proposed framework performs superiorly with regard to the combination of intersection prevention and image registration. The results show compliance with the non-intersection constraints while simultaneously preventing a decrease in registration accuracy. Furthermore, the application of the proposed algorithm to the DIR-Lab data set demonstrates that the framework generalizes to real data by validating it on a lung registration problem.
Introduction
Inadequate management of discontinuities in the displacement field of image registration causes problems for widely-used algorithms utilizing smoothness regularizations [1]. One such instance is sliding motion along domain boundaries, as observed in longitudinal registration of lungs. Algorithms producing a single smooth deformation field cannot account for discontinuities. A separation of the registration domain into its sliding segments is, thanks to increasingly well-performing deep learning methods for accurate individual organ segmentation [2][3][4], available, but subsequent, independent registration of the resulting segments suffers from the possible overlap of the computed deformation fields. In order to acquire separate and congruent deformation fields without intersection, an additional constraining mechanism is required. To this end, the alternating direction method of multipliers (ADMM) is presented to enforce constraints preventing overlap of the individual components.
Extending the early development of rigid [5] registration algorithms, the implementation of diffusion-inspired processes [6], called Demons, as well as non-rigid deformations introduced by Free-Form Deformations (FFD) [7] account for more complex image registrations. Analogies from fluid mechanics allowed large deformations to be considered [8,9]. Topology preservation in image registration is enforced by the use of diffeomorphisms. The combination with the large deformation algorithms gave rise to another approach, called Large Deformation Diffeomorphic Metric Mapping (LDDMM) [10,11]. One type of discontinuity, particularly present in lung and abdominal registrations, is sliding motion [12]. The sliding of the lung along the pleura and the expansion of the rib cage in the opposite direction leads to severe singularities in the deformation field and consequently invalidates the usual premise of smooth displacement fields [13].
Existing approaches aiming to deal with sliding motions can be categorized into methods computing a single deformation field for the whole image domain and those that split the domain into individual parts for each independently moving subdomain and subsequently compute as many deformation fields. Solutions computing one deformation field require additional constraints controlling smoothing across domain boundaries. One such approach, enriching the B-spline basis functions with additional information around discontinuities (in the lung case, domain boundaries), is proposed in [14] and prevents information exchange over domain boundaries. By construction, the approach does not require further regularization or a specific interface smoothness. However, since it is a completely intensity-based method that does not explicitly penalize overlaps, the image gradient may push voxels over the domain boundary regardless of the underlying enrichment. Further approaches that differentiate the regularization between sub-domains can be divided into two groups [13]: direction-dependent methods and locally adaptive methods. Direction-dependent approaches decompose the deformation field into tangential and normal components.
Smoothing the image globally only in the direction of the surface normals and tangentially only inside segmented domains offers a closed mathematical solution, with the drawback of stationary normal directions [15]. A piecewise diffeomorphic solution with a stationary velocity field implementation, which smooths the velocity field domain-independently in the tangential direction while matching the velocity direction and magnitude on both sides of the domain interface, is introduced in [16]. However, using different resolutions in different domains requires interpolation at the domain boundary, effectively smoothing intensities. Locally adaptive approaches implement regularization whose weight in the loss function differs according to the location in the image domain. The first locally adaptive version investigated in this section balances diffusion-based L 2 norm regularization for smooth domain interiors against L 1 norm total variation at domain boundaries, showing good results even with minor domain segmentation errors. Low-magnitude updates, e.g. small-scale sliding motion, can still be falsely smoothed [17]. [18] uses a classical bilateral filtering kernel to identify interface boundaries, at the cost of applying a computationally expensive kernel. An alternative to bilateral filtering is isotropic total variation [1]. Both methods, bilateral filtering and total variation, are challenged, however, by interfaces between similarly textured and low-contrast domains. Registration algorithms splitting the image domain into independently moving sub-domains suffer more than the methods above from possible overlaps or gaps in between distinct deformation fields. [19] minimizes the overlaps by assigning unique, penalizing intensities to voxels lying outside the respective sub-domain during registration, allowing for the use of different registration parameters for each sub-domain. The definition of the right intensities can be tedious and time consuming. [20][21][22] drop intensity-based penalization in favour of a distinct term added to the loss function penalizing overlaps and gaps. These terms can consist of the product of deformed signed distance fields [20], requiring a computationally expensive creation of motion masks and being restricted to two subdomains, or a local distance metric to the opposing interface via a sum over sampled surface points [22]. The second implementation offers a solution to sliding along curved surfaces and surfaces that are separated by a third domain, but relies on the resolution of the sampled surface points. Another approach penalizing a set of sampled surface points is the split Gauss-Newton approach [21,23]. Here any deviation of the point set from the deformed interface is weighted relative to the distance to the interface. If not parametrized properly, the penalty term can overrule the registration forces, keeping the deformation field in its initial state. Using not n deformation fields for as many sub-domains, but n + 1, [24] utilizes the additional field to calculate the global normal displacements only. The normal direction is derived from local bases. The remaining deformation fields produce the sub-domain-specific tangential displacements, ensuring matching normal displacements along the interface. This approach assumes smooth interfaces with small deformations, since the bases are created in the moving image only. All aforementioned methods except [1,18] require correct segmentations.
This work presents a registration framework, the Collision Constrained Deformable Image Registration (CC-DIR), to account for sliding motions by combining a collision constraining regularization with the registration term and solving the resulting minimization problem using the ADMM.
Image registration
Formally, in image registration a transformation Φ : Ω → Ω is sought that aims to align the source or moving image I ∈ R n via the transformation Φ to a target or fixed image J ∈ R n . The registration problem can be written as a minimization problem of the energy term F: The image similarity M : Ω → R measures the alignment between two images. Difference methods like Sum of Squared Differences (SSD) and Sum of Absolute Differences (SAD) are widely used as similarity measures for monomodal registrations that map the same anatomical structures, performing similarly well [25]. SSD methods penalize outliers in intensity more heavily [26] and are consequently a valid approach to properly register the relatively high intensity differences between lung tissue and the surrounding air. Image registration, being an ill-posed problem, has no single solution [27] and often utilizes a regularization S : Ω → R. Regularization may be applied for smoothing [28] or for constraining the deformation to reasonable [29] solutions. Cubic B-splines utilized in FFD act as a smoothing alternative themselves [28] and free the algorithm of additional regularization implementations.
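To make the energy concrete, here is a minimal NumPy sketch of the SSD similarity and the composed energy F = M + S; the function names and the generic warp callback are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ssd(warped, fixed):
    """Sum of squared intensity differences between the warped moving
    image I∘Φ and the fixed image J (monomodal similarity M)."""
    return np.sum((warped - fixed) ** 2)

def registration_energy(phi, moving, fixed, warp, regularizer=None, alpha=0.0):
    """F(Φ) = M(I∘Φ, J) + α·S(Φ). With cubic B-spline FFDs the explicit
    smoothness term S is often omitted, as noted above."""
    warped = warp(moving, phi)      # resample the moving image at Φ(p)
    energy = ssd(warped, fixed)
    if regularizer is not None:
        energy += alpha * regularizer(phi)
    return energy
```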
Parametrization
Parametrizations offer a reduction in dimensionality, improved robustness [1], and decreased risk of overfitting. One example is Free-Form Deformation. By deforming an overlying grid of control points, the deformation of the underlying image is calculated by interpolating the pixel- or voxel-wise displacements. Widely used approximation functions for individual voxel displacements are cubic B-splines, which have a very local influence and only affect points in their neighbourhood [7]. For an exemplary image in R 3 , z ∈ R n 1 ×n 2 ×n 3 is the mesh of control points spaced equidistantly at δ = (δ 1 , δ 2 , δ 3 ). The displacement of p ∈ R 3 is interpolated with the cubic B-spline basis functions B 0 , . . . , B 3 [30,31] as ∑ a=0 3 ∑ b=0 3 ∑ c=0 3 B a (u 1 ) B b (u 2 ) B c (u 3 ) z ξ 1 +a,ξ 2 +b,ξ 3 +c , where ξ = (⌊p 1 /δ 1 ⌋ − 1, ⌊p 2 /δ 2 ⌋ − 1, ⌊p 3 /δ 3 ⌋ − 1) gives the location of the respective control points, with ξ shifting the support along the axis. The localized coordinates derive from u = (p 1 /δ 1 − ⌊p 1 /δ 1 ⌋, p 2 /δ 2 − ⌊p 2 /δ 2 ⌋, p 3 /δ 3 − ⌊p 3 /δ 3 ⌋). By decreasing the grid resolution, cubic B-splines represent global deformations as well. Consequently, a multiresolution approach with sequentially increasing resolutions will capture both local and global deformations.
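A minimal one-dimensional sketch of this interpolation follows (the 3D case is the tensor product over the three axes); the uniform cubic B-spline basis below is standard, while the padding convention and all names are assumptions.

```python
import numpy as np

def bspline_basis(u):
    """Uniform cubic B-spline basis functions B_0..B_3 on u in [0, 1)."""
    return np.array([(1 - u) ** 3 / 6.0,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                     u ** 3 / 6.0])

def ffd_displacement_1d(p, z, delta):
    """Displacement at p from control-point displacements z, spaced at
    delta and padded with one control point at -delta (so z[j] sits at
    (j-1)*delta). Only the 4 neighbouring control points contribute."""
    i = int(np.floor(p / delta)) - 1        # first supporting control point
    u = p / delta - np.floor(p / delta)     # localized coordinate in [0, 1)
    B = bspline_basis(u)
    # shift by +1 so index 0 of z corresponds to grid position -delta
    return float(np.dot(B, z[i + 1:i + 5]))
```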
Collision detection
Collision detection algorithms can be classified according to the utilized object representation [32], such as purely explicit, implicit or hybrid approaches. Explicit representations may range from simple point clouds to polygon meshes, one of the most common tools utilized [33]. Implicit representations, such as signed distance fields SDF : R n → R, describe geometries as a mapping function. A hybrid approach of a point cloud, representing the deformable object, with a rigid object, depicted as an SDF, has been proven to perform fast and accurately [34]. On top, this algorithm leverages existing data structures such as segmentation masks, and the SDF can be directly pre-computed from segmented images by fast marching methods, allowing for fast intersection tests [33]. This sidesteps the need for object meshing, which proves hard to implement for anatomical structures [35]. Let p i ∈ R n be any point in the image and ω ⊂ R n be a closed and bounded [36] subset representing a collision object with ∂ω as its boundary. Detecting a collision of p i with ω requires evaluating the signed distance, equating to the shortest distance to its domain boundary. The intersections are formulated as a quadratic penalty function g : R n → R [37].
This function weights possible collisions by the depth of intersection of colliding particles with respect to the object surface, scaled by an adjustable free parameter μ. Neglecting all influences of non-intersecting points through its simple piecewise definition, g acts both as the collision detection and the correction, with a value only deviating from 0 once p i collides with ω.
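A minimal sketch of this hybrid scheme, assuming a sign convention of negative SDF values inside the object and using SciPy's Euclidean distance transform in place of fast marching; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def signed_distance_field(mask):
    """Signed distance to the object boundary from a boolean mask:
    negative inside the object, positive outside (assumed convention)."""
    outside = distance_transform_edt(~mask)   # distance to object, outside
    inside = distance_transform_edt(mask)     # distance to background, inside
    return outside - inside

def collision_penalty(points, sdf, mu=1.0):
    """Quadratic penalty g on penetration depth, zero for all
    non-intersecting points; points are (n, dim) voxel coordinates."""
    d = map_coordinates(sdf, points.T, order=1)  # interpolated SDF values
    depth = np.minimum(d, 0.0)                   # penetration depth (<= 0)
    return mu * np.sum(depth ** 2)
```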
Reformulating this collision constraint for the registration problem to account for sliding motion between the image domain ω 1 ⊂ I and its counterpart ω 2 ⊂ J, let X i ∈ ω 1 be the material points in the undeformed configuration. The energy formulation in the deformed configuration x i = X i ∘ Φ reads as C(Φ, I, J) = ∑ i g(x i ). Here, the SDF of ω 2 is calculated in spatial space. Adding the collision constraint energy formulation to the initial registration problem changes the term to F(Φ, I, J) + C(Φ, I, J).
Alternating direction method of multipliers
With the constrained registration problem now consisting of F(Φ, I, J) and C(Φ, I, J), we can take advantage of this formulation by splitting the variable Φ representing the transformation into two distinct deformations Φ R and Φ C for the registration and collision problems, respectively, and adding the coupling constraint Φ R = Φ C .
This formulation clearly illustrates the advantage of applying the ADMM. Utilizing a combination of dual ascent and decomposition, the ADMM splits the objective function into subproblems, favouring problems where the local optimization of these sub-problems can be carried out efficiently [38]. Registration problems in the form of Eq 9 can be solved by iterating through the sequential updates of Φ R , Φ C , and a so-called dual variable u, Φ R ← argmin Φ F(Φ, I, J) + (ρ/2)‖Φ − Φ C + u‖ 2 , Φ C ← argmin Φ C(Φ, I, J) + (ρ/2)‖Φ R − Φ + u‖ 2 , u ← u + Φ R − Φ C , until either convergence or a fixed number of iterations is achieved. This scaled version of the ADMM utilizes u as the accumulating sum of residuals for the coupling constraint [39], whereas the free parameter ρ allows control of its influence on the sequential updates. Convergence towards an optimum is sufficiently fast for moderate accuracy, but may be slow to produce highly accurate results.
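Schematically, the scaled ADMM iteration can be sketched as follows; argmin_F and argmin_C stand in for the subsolvers of the two subproblems, and all names are assumptions rather than the paper's code.

```python
def admm(phi0, argmin_F, argmin_C, rho=0.5, n_iters=100):
    """Scaled ADMM for min F(Φ_R) + C(Φ_C) subject to Φ_R = Φ_C.

    argmin_F(target) ≈ argmin_Φ F(Φ) + (rho/2)·||Φ - target||²,
    argmin_C(target) ≈ argmin_Φ C(Φ) + (rho/2)·||Φ - target||²;
    u accumulates the residuals of the coupling constraint."""
    phi_R = phi_C = phi0
    u = 0.0 * phi0                         # scaled dual variable
    for _ in range(n_iters):
        phi_R = argmin_F(phi_C - u)        # registration subproblem
        phi_C = argmin_C(phi_R + u)        # collision subproblem
        u = u + phi_R - phi_C              # dual (residual) update
    return phi_R, phi_C
```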
Experiments & results
2D synthetic examples under controlled conditions serve as a proof of concept, and an application to 3D medical data showcases the feasibility of the approach.
Implementation
For each domain ω r representing an independently moving object in I ∈ R n , in this paper r ∈ {1, 2} in both synthetic and medical experiments, a CC-DIR is executed. The object is represented as two point clouds of i and j evaluation points, p col and p reg , for the collision and registration problems, respectively. Cubic B-spline parametrization (Eq 2) is used as the deformation model, while linear interpolation is utilized for interpolating voxel intensities at non-integer positions. For updating Φ C N+1 and Φ R N+1 during the iterations, subsolvers are called to minimize the respective function and assign the minimizing variables Φ C , Φ R . The subsolvers are gradient descent implementations with backtracking according to [40], running for 2 iterations. ρ is kept at 0.5 across all experiments. In our implementation the stopping criterion for CC-DIR is a fixed number of iterations N fixed .
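A sketch of such a subsolver, gradient descent with Armijo backtracking in the spirit of [40]; the line-search constants below are assumptions, as the paper does not state them.

```python
def gradient_descent_backtracking(x, f, grad, lr=1.0, n_iters=2,
                                  beta=0.5, c=1e-4):
    """Minimize f starting at x (a NumPy array); each step halves the
    step size until the Armijo sufficient-decrease condition holds."""
    for _ in range(n_iters):
        g = grad(x)
        t, fx = lr, f(x)
        # backtrack: f(x - t g) must drop by at least c * t * ||g||^2
        while f(x - t * g) > fx - c * t * (g * g).sum():
            t *= beta
        x = x - t * g
    return x
```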
Synthetic experiments
In order to show the improvements in registration quality by the CC-DIR compared to methods without collision constraining, the synthetic experiments include a Baseline model, a modified Baseline (mod. Baseline) model for improved performance, and the CC-DIR. The Baseline model consists of a B-spline FFD registration model with one deformation field and a gradient descent with backtracking as the optimization method. The mod. Baseline uses the same setup, however running two FFD registrations consecutively, one for each image domain. The CC-DIR computes one deformation field for each image domain as well. However, CC-DIR uses Alg. 1 to constrain collisions between the two domains. Differences in algorithms between Baseline, mod. Baseline and CC-DIR are kept to a minimum by using the same similarity measure SSD. A small parameter study has been performed for every experiment to identify the best outcome for the synthetic setup with respect to control point grid scale, learning rate, and penalty parameter. Data. Three simplified cases in 2D are computed for a better understanding and visualization of the proposed method, while combining different discontinuities in their respective setups. The image domain is divided by the boundary ∂Ω into Ω L on the left side and Ω R on the right, each containing the distinct structure I L respectively I R . In the first case, Linear, the boundary ∂Ω moves as a whole to the right-hand side while I L and I R are sliding in opposite directions, up and down respectively. Additionally, I L deforms under conservation of its total area. The second case, Non-linear, introduces a more complex, non-linear deformation of the boundary ∂Ω. Both structures are subject to deformation and move along the boundary in contrary directions, again sliding along the boundary. The last case, Growth, combines deformation, translation and the sliding motion discontinuity from the second example with a growth Ω G in the boundary region (Fig 1).
Results.
Two main criteria are quantified and evaluated: registration success and collision constraint violation. The registration success is easily calculated with the Sørensen-Dice index (DICE) for matching the structures I L and I R . For the evaluation of the collision constraint, the intersection score (IS) is introduced. The percentage of intersecting pixels with respect to the image area constitutes the IS, aiming to give a qualitative overview of the intersection. For all three cases, the CC-DIR performs best in terms of registration as well as preventing intersection (Table 1). The DICE score indicates a high success in matching the structures, backed by a clean pullback registration visualization in Fig 1. No pixel intersects with the boundaries, so no violation of the collision constraint is detected. The mod. Baseline ranks second, with satisfying registration results in all but the Growth case. Regarding intersections, this approach performs as well as the CC-DIR for the Linear case, but shows more intersections with increasing non-linearities in the boundaries. Losing registration performance quickly, the Baseline approach struggles to match the structures already in the Linear case. Additionally, intersections are reported for every case, hence violating the collision constraints consistently.
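Both metrics reduce to simple operations on binary masks; a sketch following the verbal definitions above (names are illustrative):

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice index of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def intersection_score(warped_left, warped_right):
    """IS: pixels claimed by both deformed domains, as a percentage of
    the total image area."""
    overlap = np.logical_and(warped_left, warped_right).sum()
    return 100.0 * overlap / warped_left.size
```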
Medical experiments
The CC-DIR for evaluation on a medical data set is run as described in Alg. 1. Two deformation fields are computed consecutively and independently. The two independent domains of I ∈ R 3 are the lung domain Ω Lung ⊂ I defined by a segmentation and the rib cage Ω Rib = I \ Ω Lung . The multiresolution procedure is derived from the findings in the EMPIRE challenge [41]. For the best possible results, an optional initial affine registration, followed by a multiresolution CC-DIR registration, is run. The multiresolution setup consists of multiple successive CC-DIR registrations, with a coarse-to-fine control point grid adaption. The successively run CC-DIRs compute updated starting coordinates for p col , p reg , handing these down to the next finer resolution CC-DIR. Image similarity is measured by SSD. Mirroring the synthetic experiments, the registration success and collision constraint violation are evaluated. The results are compared to a set of algorithms also tackling the sliding motion along the lung-ribcage interface [14]. Data. For the medical test setup, the DIR-LAB 4DCT data set from [42] is used with previously segmented lung masks from the Continuous Registration Challenge [43]. The registration is computed between the extreme inhale-exhale image pairs in order to ensure high deformations and sliding motions. To exclude regions with little image information, the images are cropped to remove imaging artifacts at the boundary regions. All CC-DIRs are run for 100 iterations.
Target registration error. For the registration accuracy, anatomical landmarks are used. The data set provides 300 annotated landmarks in both extreme phases. Consequently, the summed position error over all landmarks after registration, called target registration error (TRE), indicates the registration outcome (Table 2). As introduced in the original data set publication [1,20,42], a snap-to-voxel evaluation for the TRE is used. The proposed CC-DIR algorithm's overall TRE accuracy ranks at fourth place, while matching the best benchmark in cases 1 and 2.
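The TRE computation itself is straightforward; a sketch with the snap-to-voxel rounding described above (the voxel-spacing handling and the use of the mean are assumptions):

```python
import numpy as np

def tre_mm(transformed_landmarks, target_landmarks, spacing):
    """Mean target registration error in mm over all landmark pairs,
    with snap-to-voxel evaluation: positions are rounded to the voxel
    grid before measuring the Euclidean distance."""
    p = np.round(transformed_landmarks)      # snap to voxel grid
    q = np.round(target_landmarks)
    d = (p - q) * np.asarray(spacing)        # voxel units -> mm
    return np.linalg.norm(d, axis=1).mean()
```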
A coronal slice of a 3D CT lung scan in Fig 2 shows an example of the registration process. The difference images between the moving and target configuration before and after registration depict a reduction of structural differences. The deformation grid overlaid over the registered image in Fig 3 is the result of the CC-DIR. This close-up shows a clear solution to the sliding interfaces along the pleura as well as a translation in the diaphragm caused by the inhalation.
Congruent interfaces. For the evaluation of congruent, and thus physiologically plausible, rib-cage-lung interfaces, the metric introduced in [24] is used. After transforming a 3D surface mesh of the rib-cage-lung boundary δO by both displacement fields Φ Lung and Φ Rib , the subsequent gap and overlap between the two domains is calculated by voxelizing the respective meshes. The results in cm 3 are recorded in Table 3.
Overall, and in half of the cases specifically, the novel CC-DIR registration shows the least amount of overlap. The average resulting gap between the two domains ranks second after the interface matching algorithm [20].
Shear. In order to estimate the shear along the domain interfaces, the maximum shear stretch introduced in [44] is calculated. Using the deformation gradient tensor F = dx/dX, which maps the transformation from the undeformed configuration X to the deformed state x, the maximum shear stretch constitutes to γ max = (λ 1 − λ 3 )/2, with λ i the square roots of the eigenvalues of F T F, sorted in descending order. The exemplary colour-coded representation in Fig 4 clearly shows an increase in shear stretches along the interface, whereas the displacement field in Fig 3 demonstrates sliding motion. Overall the mean shear stretches amount to 1.3 ± 1.8 between the lung and rib-cage, and 0.2 ± 0.2 in the rest of the image. The computation time averages at 822 s ± 519 s on a Xeon CPU @ 3.60 GHz x12 with a GPU implementation on a GeForce RTX 3090.
Table 2. Comparison of TREs for the DIR-LAB data set in mm. The mean TRE of the proposed CC-DIR method ranks at fourth place out of seven with an offset of 0.15 mm to the best performing algorithm. Case Wu (2008) Delmon (2013) Berendsen (2014) Hua (2017) Eiben (2018) Gong (2020) Proposed CC-DIR
Table 3. Comparison of gap/overlap for the DIR-LAB data set by case. The proposed CC-DIR algorithm shows the least overlap compared to the six other methods. It rates second in terms of interface gap with a mean offset of 38.1 cm³ to the first place. All measurements are in cm³.
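Assuming the standard definition of the maximum shear stretch as half the difference between the largest and smallest principal stretch (the display in the source is garbled), a per-voxel computation could look like this sketch:

```python
import numpy as np

def max_shear_stretch(F):
    """Maximum shear stretch for one deformation gradient F (3x3):
    gamma_max = (lambda_1 - lambda_3) / 2, where lambda_i are the
    principal stretches, i.e. square roots of the eigenvalues of F^T F."""
    C = F.T @ F                             # right Cauchy-Green tensor
    lam = np.sqrt(np.linalg.eigvalsh(C))    # eigvalsh: ascending order
    return 0.5 * (lam[-1] - lam[0])
```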
Discussion
Coupling deformable image registration with collision detection opens up new possibilities for supplying registration algorithms with simple physics. In order to incorporate these physics into the common registration problem formulation, an energy term (Eq 7) is needed. This setup simulates the deformation of the distinct image domains, such as organs, by calculating separate deformation fields. Discontinuities at the domain boundaries, such as sliding motion, don't need to be modelled explicitly but result automatically from the collision forces minimizing the non-intersection constraint. Setting up the collision detection requires a domain representation, either as segmentation masks, surface or volumetric meshes. Acquiring these representations can be a difficult task, especially for regions with no discernible or low-contrast domain boundaries. A problem encountered in the experiments is the lack of image information at the domain boundaries. If registration is strictly applied to the respective domain masks, the resulting gap might increase as image information outside of the sub-domain is missing. Dilating the masks to increase the tracking image forces provided an acceptable alternative. This way extensive domain overlap post-registration is prevented by the collision constraints, whereas possible gaps between the domains are closed by the image forces. Compared to single deformation field methods, CC-DIR allows to easily use different parametrizations and parameters for different sub-domains. Since no costly on-the-fly identification of the domain boundary, as in filter-based methods, has to be performed, the possibility of surface errors in low contrast regions is avoided. Not relying on normal and tangential decomposition also permits solving sliding motions along complex and non-smooth surfaces. Unlike the other mentioned methods registering subdomains independently using intensity-based constraints, collision detection is geometric and thus modality independent, allowing the collision detection and correction to be used in different settings. Since collision detection is a fundamental area in computer graphics, it can fall back on a considerable body of research and a variety of algorithms, permitting an exchange of the collision detection method, as long as a respective energy term is formulated. An advantage of using collision detection over interface matching methods is the circumvention of redundant constraints, letting the registration energy term drive the deformation. Additionally, collision detection can be applied to scenarios where subdomains are moving freely around the image domain, with no restriction to constant contact between the subdomains, such as in computer vision or robotics. The implementation of the collision constraint in the ADMM has two advantages: First, the modularity of this optimization scheme permits not only the exchange of the collision method but also of the registration method specifications such as similarity metric or deformation model at will. Additional and existing regularization models can be freely and effectively added to the registration energy term, subsequently offering sliding motion representation at no further cost. Second, since the penalty function (6) is not bound to be differentiable, the ADMM still converges [39] and avoids performance issues of gradient-based methods [1]. However, as with many optimization strategies, convexity is of critical importance.
First, the convergence of the ADMM is formally mathematically proven for convex functions only. Second, the ADMM solves the minimization for the dual problem instead of the initial primal problem. If strong duality holds, usually for convex functions [45], the dual solution is the primal solution [39]. The obtained solutions from nonlinear and non-convex problems, such as registration, might offer a lower bound and if converged, the ADMM still offers local solutions [39]. Furthermore, with additional regularization, the convexity of the objective functions might be increased and thus reduce the duality gap.
Looking at the results from the experiments in detail, more practical observations can be made. The synthetic experiment setups were chosen to analytically examine the possible improvements of the CC-DIR with collision detection compared to registration algorithms without collision detection. The results from the synthetic experiments show in stark contrast the benefits of the collision detection modeled via the CC-DIR compared to the Baseline approaches. The CC-DIR outperforms the Baseline approaches not only in terms of intersection constraints but also with respect to registration. In addition to enforcing the non-intersection constraint, the collision detection acts as a powerful regularization term. Since the registration loss functions are likely to be non-convex, this regularization might increase convexity at least locally if not globally. Regarding the intersection constraints, the CC-DIR delivers deformation fields with no intersection at all, while the Baseline approaches have difficulties adhering to these boundaries. Looking at the registration success for the medical data set measured by the TREs, this algorithm ranks fourth out of seven, with an average distance of 0.15 mm to the best ranking algorithm. However, with the finest voxel resolution of 0.97 mm in the DIR-Lab data set, these differences can be regarded as minimal and may be caused by the image discretization. Furthermore, due to its advantageous framework design, replacing the registration methods is not only simple, but any improvement to the registration methods will also likely increase the TRE accuracy. With the least overlap and the second lowest gap between the domains compared to other algorithms, the CC-DIR produces separate yet congruent deformation fields that minimize physiologically infeasible transformations. Even though the CC-DIR doesn't prevent overlapping completely, two adaptations may offer an improvement: individually tuned hyper-parameters and an alternative constraint formulation. The implementation as quadratic penalty functions has the possible disadvantage of producing inexact solutions, as the iterates may be drawn to points that violate the equality constraints but satisfy optimality conditions [37]. Exchanging the quadratic penalty function for an exact penalty function might yield the desired solution; however, the resulting non-smoothness should be taken into account.
Conclusion
The CC-DIR successfully introduces a collision constrained method into the field of image registration. With its simple framework build, it allows to combine state-of-the-art registration methods with collision detection for discontinuity preservation. No major drawback in terms of registration quality have been observed, thus providing an effective method to account for sliding motion while simultaneously ensuring congruent interfaces. Inspired by the results in this paper, a future development of this framework is the exchange of the registration method with diffeomorphic counterparts. Looking at image registrations of the liver or prostate, which frequently utilize the Finite-Element-Method to simulate deformations [46], an image driven pathway to simulations may open up new possible developments. The advantageous movement parametrization based on basis functions can be used to easily derive FE models. A reformulation with tetrahedral interpolation and movement parametrization coupled with collision detection between tetrahedral meshes and deformable signed distance fields can even boost this process to fuse image registration with bio-mechanical analysis. | 2023-08-20T06:17:30.402Z | 2023-08-18T00:00:00.000 | {
"year": 2023,
"sha1": "2a505af25c0232e287c599b56f53e084960e8a97",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "51909a4e83c0f319b7a2853777dc7b2db7046c27",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1782250 | pes2o/s2orc | v3-fos-license | Integrated mapping of lymphatic filariasis and podoconiosis: lessons learnt from Ethiopia
Background The World Health Organization (WHO), international donors and partners have emphasized the importance of integrated control of neglected tropical diseases (NTDs). Integrated mapping of NTDs is a first step for integrated planning of programmes, proper resource allocation and monitoring progress of control. Integrated mapping has several advantages over disease specific mapping by reducing costs and enabling co-endemic areas to be more precisely identified. We designed and conducted integrated mapping of lymphatic filariasis (LF) and podoconiosis in Ethiopia; here we present the methods, challenges and lessons learnt. Methods Integrated mapping of 1315 communities across Ethiopia was accomplished within three months. Within these communities, 129,959 individuals provided blood samples that were tested for circulating Wuchereria bancrofti antigen using immunochromatographic card tests (ICT). Wb123 antibody tests were used to further establish exposure to LF in areas where at least one ICT positive individual was detected. A clinical algorithm was used to reliably diagnose podoconiosis by excluding other potential causes of lymphoedema of the lower limb. Results A total of 8110 individuals with leg swelling were interviewed and underwent physical examination. Smartphones linked to a central database were used to collect data, which facilitated real-time data entry and reduced costs compared to traditional paper-based data collection approach; their inbuilt Geographic Positioning System (GPS) function enabled simultaneous capture of geographical coordinates. The integrated approach led to efficient use of resources and rapid mapping of an enormous geographical area and was well received by survey staff and collaborators. Mobile based technology can be used for such large scale studies in resource constrained settings such as Ethiopia, with minimal challenges. Conclusions This was the first integrated mapping of podoconiosis and LF globally. Integrated mapping of podoconiosis and LF is feasible and, if properly planned, can be quickly achieved at nationwide scale. Electronic supplementary material The online version of this article (doi:10.1186/1756-3305-7-397) contains supplementary material, which is available to authorized users.
Background
The neglected tropical diseases (NTDs) are a group of more than 17 mostly chronic infectious diseases and related conditions that represent the most common illnesses of the world's poorest people [1]. Through continued advocacy the significant burden and impact of NTDs has been gaining global attention in recent years [1][2][3].
International donors, development partners and endemic country governments are allocating resources for the prevention, treatment and elimination of NTDs. To benefit from these resources, endemic countries and implementing partners are required to develop evidence-based plans, which would allow to track progress and show the effectiveness of programme implementation [4,5].
WHO, international donors and partners have emphasized the importance of integrated, rather than vertical, control of NTDs [6,7]. Disease mapping is the systematic collection of georeferenced data to visualise the distribution and prevalence of a disease in space and time [8]. It provides clear information on the geographical distribution of diseases and the population at risk, both of which are important pre-requisites for determining the areas and population to be targeted for treatment and control of NTDs [8]. Integrated mapping of NTDs is a first step for integrated planning of programmes, efficient resource allocation and monitoring progress and impact of control [4,[9][10][11]. Previous integrated mapping efforts focused on a range of diseases, including trachoma, onchocerciasis, schistosomiasis, soil-transmitted helminthiases and lymphatic filariasis (LF) [9,10,12,13]. Integrated mapping has several advantages over standalone surveys: costs can be reduced by coordinated use of personnel and transport, and co-endemic areas can be more precisely identified than through disease-specific mapping [4,9,10,14,15]. However, integrated mapping may be logistically intensive and methodologically difficult, because of differences in the target groups to be mapped and sites to be selected according to the ecology of each disease.
There are two principal causes of elephantiasis, or lymphoedema, in the tropics [16]. The most common cause is LF due to the parasitic nematode Wuchereria bancrofti (and, in Asia, Brugia malayi and B. timori), which is transmitted by blood-feeding mosquitoes [17,18]. The second principal cause is podoconiosis, a form of elephantiasis arising in barefoot subsistence farmers who are in long term contact with irritant red clay soil of volcanic origins [19]. Podoconiosis has significant economic impact; a study in Ethiopia found that podoconiosis halves an individual's productivity [20]. In addition, the disease is known to be stigmatised with significant social exclusion [21]. It has been estimated that 1 million cases of podoconiosis exist in Ethiopia, but the nationwide distribution has not been clearly defined.
The overall distribution of LF in Ethiopia is not well established. According to a recent review, approximately 30 million people are thought to be at risk of LF and Ethiopia bears 6-9% of the LF burden in sub-Saharan Africa [22]. Despite these huge burden estimates for both LF and podoconiosis in Ethiopia, apart from isolated studies and historical market and school surveys, no nationwide podoconiosis mapping data existed, and only 112 of the country's 817 districts had been mapped for LF [23]. Ethiopia launched its integrated National NTD Master Plan in May 2013 [24], and LF and podoconiosis were identified as priority diseases alongside six other NTDs (i.e. trachoma, onchocerciasis, schistosomiasis, soil-transmitted helminthiases, leishmaniasis and dracunculiasis) [24]. Mapping was identified as a critical first step before implementing any operational programming for the control and elimination of these NTDs.
In recent years advances in technology, such as the availability of remote sensed data, application of geographic information systems and mobile technology has enabled rapid mapping of NTDs [4,[25][26][27]. We conducted the first ever integrated mapping of podoconiosis and LF in Ethiopia using mobile-based technology.
The aims of this paper are to describe the methodology used for integrated mapping of podoconiosis and LF, document the lessons learnt, and to provide recommendations for future, similar mappings efforts.
Methods
Two focus group discussions (FGDs) of six individuals each and four in-depth interviews (IDIs) were conducted to assess the challenges faced by mapping team members, including enumerators, team leaders and supervisors. All the data were collected by one of the investigators (KD). Flexible interview guides were used to conduct IDIs and FGDs. Audiotapes were transcribed anonymously, and interviews were conducted in Amharic and were translated into English. Data were analyzed manually. Interpretation of the data was informed by experience during implementation of the survey as well as from the analysis of the qualitative data. By providing these details, we hope that investigators in other endemic countries will benefit and adapt the approach to their local context. Beyond diseasespecific lessons learnt, we hope the broader issues that arose in our mapping efforts may be helpful to investigators planning similar large scale integrated projects.
Ethics approval and consent
Ethics approval was obtained from the Institutional Review Board of the Faculty of Medicine, Addis Ababa University (090/11/SPH) and the Research Governance & Ethics Committee of Brighton & Sussex Medical School (11/116/DAV) for podoconiosis mapping, and from the Ethiopian Public Health Institute (the then EHNRI) and Liverpool School of Tropical Medicine (LSTM) (12.22) for LF mapping. Once the decision to do integrated mapping was made, amendments were requested and approved by each of these committees. Clearance to conduct the surveys was obtained from the Ministry of Health's Regional Health Bureaus, followed by Zonal Health Offices. The study was explained to each village leader and written consent was obtained to conduct the study in each village. The purpose of the study was explained to all individuals gathered, and the inclusion criteria were explained. The study was then explained to each individual that met the inclusion criteria, and each was asked for consent. Those who provided consent were registered and requested to sign or fingerprint the consent form. Individual written informed consent was obtained from each participant (≥18 years of age). Additionally, for those less than 18 years old, consent was obtained from their parents/guardian and the participant themselves provided informed assent. Confirmed W. bancrofti infection was treated by co-administration of one tablet of albendazole and ivermectin, as indicated by a dose-pole according to WHO recommendations [28]. For those with lymphoedema, education was given about morbidity management. As part of the LF elimination and podoconiosis control programs, Ethiopia will put in place a morbidity management and disability prevention plan that will include provision of care for patients suffering from lymphoedema and hydrocele [24].
Integration of LF and podoconiosis mapping protocols
The mapping was conducted by a consortium of universities and institutions, including the Ethiopian Public Health Institute (EPHI, previously called EHNRI), Brighton and Sussex Medical School (BSMS), the Centre for Neglected Tropical Diseases (CNTD) at LSTM and the Global Atlas of Helminth Infections (GAHI) at the London School of Hygiene & Tropical Medicine (LSHTM). Initially, the mapping of these two diseases was planned separately. BSMS and GAHI-LSHTM were working on mapping podoconiosis and the EPHI and the CNTD/LSTM were working on mapping LF. However, through discussion with the Federal Ministry of Health of Ethiopia, the possibility of integrated mapping was raised. Experts from both mapping groups held a meeting and identified the advantages and disadvantages of integrated versus disease-specific mapping. Reasons in favour of integrated mapping included i) both LF and podoconiosis had been identified as priority NTDs in the National NTD Master Plan of Ethiopia (2013-2015) [24]; ii) the two conditions have similar clinical manifestations and the same target age group; iii) diagnosis of podoconiosis requires exclusion of LF; iv) analysis of the data from the 112 districts already mapped for LF indicated potential distribution overlaps in areas between 1225 and 1698 meters above sea level [29]; v) information from other countries indicated that integrated mapping would lead to cost savings compared to multiple disease-specific mapping exercises [4]; vi) the recently-launched WHO morbidity management guideline recommends the integration of management of these two diseases [30]; and vii) both conditions are known to exist in other countries including Cameroon, Kenya, Uganda and Tanzania, so development of a protocol for integrated mapping might have application beyond Ethiopia.
The challenges identified were i) that experts preferred targeted rather than nationwide mapping for LF, mainly due to resource constraints, ii) the lack of existence of a guideline for integrated mapping of LF and podoconiosis; iii) the absence of clear diagnostic criteria for podoconiosis; iv) the logistical challenges of large field teams; and v) that the institutions had no history of working together.
Through further discussion, two of these challenges were ameliorated by deciding to use the WHO guideline for LF mapping [11] as the basis for integrated mapping, and through the development of an algorithm for the diagnosis of podoconiosis (Figure 1), which was accepted by both mapping groups. Preparation for mapping then proceeded on an integrated basis.
Preparatory phase

Development of contracts
Contracts were developed between the EHNRI (now the EPHI) and both disease-specific partners (BSMS and CNTD). These set out roles and responsibilities on each side, timelines and budgets. In addition, a Data Sharing Agreement was developed between BSMS and CNTD to delineate the ownership, analysis and publication of data arising from the mapping.
Development of mapping protocol
Initially, disease-specific mapping protocols were developed. Experts in mapping and epidemiology were involved in harmonizing the two protocols and resolving differences between the two approaches. The planning drew on experience from previous large-scale studies in the country, such as the Malaria Indicators Survey [31,32], previous LF mapping [23] and the national tuberculosis survey [32]. After the final protocol was developed, a disease-specific training manual and standard operating procedures (Additional file 1) for conducting the mapping were developed.
Procurement and storage of supplies
Supplies and consumables required for the mapping were procured from local markets, while smartphones and immunochromatographic card tests (ICT) were procured internationally. The ICTs (BinaxNOW® Filariasis, Alere, Massachusetts, U.S.) were stored at a central warehouse in EPHI according to the manufacturer's guidelines, and then transported to the field following the appropriate cold chain procedure. Each team was provided with a cold box and ice packs. On arrival at each district, the team was able to exchange the ice packs for deep-frozen ones at the respective health facility. Subsequent batches of ICTs were distributed to each team during supervisory visits. Due to the limited availability of ICT cards at the manufacturer level and uncertainty over whether the project could be completed on time, shipment of the ICTs occurred in four batches; this required repeated clearance through customs, and resulted in additional costs and supply chain breakdowns.
"The custom clearance took a lot of time, the ICTs were sent in four batches and we have to pass through the clearance process four times, each taking 3-4 weeks. If all the ICT were shipped together, the cost of transport and storage would have been reduced. We were paying for customs on daily basis. Because of the ICT shortage there was a one week interruption of the mapping which incurred cost. I think sending the ICTs in one batch would have avoided all these problems". [Coordinator].
Transport
One local supplier was identified following a competitive bid process. In total, 34 vehicles were hired for the entire project period, with 4 additional vehicles for supervision. Each team was provided with one vehicle and travelled together during the data collection period. The supplier was responsible for covering vehicle maintenance and drivers' allowances. One focal person from the supplier was identified; similarly, one person responsible for coordination of the transport was identified on the mapping team. Any communications regarding vehicle and transport issues were dealt with by the focal persons on each side. As far as possible, the same vehicles were used throughout the mapping process, enabling each team to develop a relationship with the driver. Some of the drivers were not willing to drive in challenging areas, nor to include individuals selected from the community among their passengers, thinking that it was not their duty.
"Transportation is key for the success of large scale surveys such as this and the driver is a key person. To reduce disharmony, it is important to clearly indicate what is expected from the drivers and include this in the agreement with the suppliers." [Team leader].
"Clear agreement should be signed with car suppliers because some of the drivers were not willing to go to some difficult places. In my view, incentivizing drivers could be helpful to get their maximum support". [Enumerator].
"The payment for the vehicle rental was a flat rate; all were paid the same price. The payment should be context specific and should consider the distance from the capital and the topography". [Team leader].
Smartphone data collection
Motorola Atrix HD smartphones running an Android application were used for data collection, each costing $136 (unit price) [33]. Four forms - 'Community', 'ICT result', 'Podoconiosis questionnaire' and 'LF questionnaire' - were interlinked using a unique identifier. The questionnaire interface was developed by experts from the Task Force for Global Health. The smartphones had touch-screen displays and exchangeable batteries, which lasted 4-5 hours. They used local Wi-Fi and mobile internet services and were linked to a server in Atlanta. After the application and questionnaires were installed, pilot testing was conducted and changes were made before the start of the actual work. An internal GPS allowed the direct capture of geographic coordinates.
Data for this study were collected using the LINKS system [33], a mobile application (app) which allows data to be entered on mobile devices running Android and sent through a 256-bit encrypted connection to a centralized cloud-based database server. Eighty Motorola Atrix HD mobile phones were used for this project: two per team and 12 spares for emergency replacements. Hierarchical data were collected using separate surveys for community-level information and for individual-level information. These surveys could later be linked together to produce a full analytic dataset. The community survey focused on collecting information on the site (region, zone, woreda (district), and kebele (sub-district)), but also included population counts and information about community-wide treatment of LF and other deworming activities in the past year. The individual and examination survey included information regarding general demographics as well as an assessment of LF morbidity. The perceptions of the data collectors on smartphone-based data collection are presented in Table 1. A major concern was data ownership, as well as the lack of technical expertise at local level to deal with the technical and operational challenges of the Android smartphones and the LINKS system.
"Currently we have an agreement with the server owner about data use. But I would prefer if the server was under our control….In the future if we get capacity building training and if we administer the server in country it would be more secure and preferable. In the current situation you feel that you can't control your data." [Coordinator].
Data collector training
Recruitment of the data collectors was conducted through a formal procurement procedure, with clear and specific job advertisements placed in a national newspaper; job requirements included hands-on experience of data collection and previous use of the ICT or another rapid test. A training of trainers (TOT) workshop was organized by the BSMS and EPHI team for six trainers. A participatory approach was used to train the trainers on the smartphones, training manuals and testing procedures. Subsequently, a total of 136 health providers were given two days' classroom-based training (on the mapping protocol, how to operate the smartphones and how to collect data using the Android application) and one day's field practice in a nearby community. On the first day, all data collectors attended a common plenary session and then breakout sessions according to their specialty. These individuals were formed into 34 teams, each including a health officer, two nurses and a laboratory technician. The TOT training was instrumental in bringing all the trainers to the same level.
"The TOT training was important: we had thorough discussion among ourselves [trainers], and this helped us to clear some ambiguities. During the training every trainer was talking the same language." [Coordinator].
The mapping process
Each team was provided with supplies and assigned a vehicle for the entire mapping period. Four days were assigned for mapping two sub-districts. On the first day the district health offices were contacted, the mapping was explained, permission was obtained, and suspected high-risk communities were identified based on a review of the health records. An additional four community health workers were recruited in each district to serve as community mobilizers and translators. On the second and third days, data collection was carried out in each sub-district, and the fourth day was used to travel to the next district. In practice, three days per district was usually found to be sufficient, despite the mapping exercise being carried out during the rainy season, which often severely restricted travel.

Table 1 Perceptions of the data collectors on smartphone-based data collection

Time: Saves time during entry - paper-based data collection requires double data entry. Writing on a smartphone is easier than writing on paper.
Data quality: Some restrictive rules reduced error; for example, it was impossible to enter an age of less than 15 years. The skip pattern reduced errors from entry of irrelevant data.
Transport and logistics: Easy to carry compared with thousands of questionnaires. Reduces duplication, stamping and transportation; smartphones are handy and easily portable.
Data storage: Data can be sent instantly; however, where network access is lacking, data must be stored and could be lost. Paper-based data are difficult to keep clean.
Communication: Unless you explain to the respondents, they may think that you are playing a game or not fully attending when entering data onto a smartphone. People are more familiar with paper and would be more comfortable responding to questions.
Feedback mechanism: Feedback is received on a regular basis, since the data managers at central level have instant access to the completed data; in paper-based data collection you have to wait until a supervisor comes and collects the questionnaires.
Other concerns: Charging in areas where there is no electricity is difficult. Smartphones are costly and may attract robbery. Once data are sent there is no room for correction unless you contact people at the central level.
Sampling
Two-stage cluster convenience sampling was used. The primary sampling unit for the survey was the kebele (the lowest-level administrative structure, population approximately 5,000) (Figure 1). Two kebeles were selected from each woreda (district) based on reported history of lymphoedema cases, collected by interviewing the woreda health officials, health providers and village leaders one day prior to the survey. Villages within each kebele were also purposively sampled. The secondary sampling unit was individuals selected within each village using systematic sampling from a random start point. Mobilization was conducted one day prior to the survey using Health Extension Workers (HEWs; community health workers, with an average of two attached to each kebele). Every adult in the community was informed through house-to-house visits that a survey was to be conducted, and was invited to participate. On the day of the survey, all persons aged 15 years and above living in the selected communities were invited to gather at a convenient point. The study objectives were then explained in the local language, and those willing to participate were asked to form two lines, one of men and the other of women. Fifty individuals were selected from each line using systematic sampling from a random start point, resulting in an overall sample of 50 males and 50 females per kebele. Two hundred individuals were therefore tested in each woreda. In most villages, it was possible to mobilize all adults in the community and obtain appropriate samples. Individuals were excluded from the study if they had not lived in the woreda for at least 10 years, had left the woreda for more than 6 months in the year prior to the survey, or did not provide informed consent.
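For readers implementing a similar protocol, the within-line selection step can be written down in a few lines. The sketch below is a minimal reading of the procedure described above (systematic sampling of 50 individuals per line from a random start point); the handling of lines with 50 or fewer people is our assumption, since the protocol text does not specify it.

```python
import random

def systematic_sample(line_length: int, n_required: int = 50) -> list[int]:
    """Return 0-based positions of sampled individuals in one line."""
    if line_length <= n_required:
        return list(range(line_length))      # assumed: small lines are taken whole
    interval = line_length / n_required      # fractional sampling interval
    start = random.uniform(0, interval)      # random start within the first interval
    return [int(start + k * interval) for k in range(n_required)]

men = systematic_sample(137)     # e.g. a line of 137 men
women = systematic_sample(152)   # e.g. a line of 152 women
print(len(men), len(women))      # 50 and 50 -> 100 per kebele, 200 per woreda
```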
Data collection
Participants were requested to provide a finger-prick blood sample to be tested for circulating W. bancrofti antigen using ICTs. All participants were tested with the ICT, and results were recorded with the individual's ID number both on the card and on the smartphone pro forma after 10 minutes. In villages where there was at least one ICT-positive individual, all ICT-negative people with lymphoedema were asked to provide 5 ml of blood for antifilarial antibody (Wb123 assay) testing at the central laboratory in Addis Ababa. The clinical algorithm used in the mapping process was found to be easily understandable by the data collectors. For individuals with lymphoedema, an algorithm was used to identify possible differential diagnoses of podoconiosis (Figure 2). In this study, a confirmed podoconiosis case was defined as a person residing in the study woreda for at least 10 years, with lymphoedema of the lower limb present for more than 3 months, for which other causes (i.e. LF, onchocerciasis, leprosy, Milroy syndrome, heart or liver failure) had been excluded. A diagram showing the order of stations used during data collection is shown as Figure 3. In those individuals clinically confirmed to have podoconiosis, duration of illness, family history of similar illness among blood relatives, and disease stage according to the validated podoconiosis staging system [34] were recorded. "The algorithm was very clear and there was nothing difficult about it. We were given intensive training before we departed to the field. Because of these reasons it was not difficult to use the algorithm. In case there were some uncertainties, the team had a discussion to arrive at a diagnosis." [Team leader].
"The algorithm was supplemented by pictorial presentation of different stages of podoconiosis. Any health workers who had never heard of podoconiosis could easily use the algorithm after the training." [Enumerator].
Field supervision
A team from EPHI, BSMS, CNTD and other partners supervised the data collection. The supervision was intensive during the first two weeks of the project. Experts experienced in both diseases who had participated in the TOT training took part in supervision. During supervision, adherence to the protocol and standard operating procedures was checked. Given the extensive experience of the data collectors, adherence to the protocol was found to be very high.
"The supervision was very important. First the data was collected by smartphone; although we have demonstrated the use of smartphone to everyone there were some practical challenges in the field which needed immediate solutions. There were also some practical challenges regarding standard operating procedures, which were given solutions at the field level. Particularly during the first three days there were some problems, but in subsequent days the mapping continued very smoothly. Personally, I was checking adherence on the standard operating procedures, I was giving them practical solutions in the field. For example, some of the sites were not accessible since the data collection coincided with the rainy season. So we facilitated the use of motorbikes, boats and horses". [Supervisor].
Data flow and monitoring
Data were uploaded in real time, using mobile internet services, to an Amazon Elastic Compute Cloud (Amazon EC2) hosted central database managed by the Task Force for Global Health. Data summarized by district and village could be monitored by two experts given authorized access. This enabled rapid review and correction of data in consultation with the field teams.
Additionally, the national NTD control team leader at the Federal Ministry of Health had access to the interface to observe progress, though this access did not permit data editing. The data collection was conducted between June and October 2013. In total, 129,959 individuals in 1,315 communities in 659 districts were mapped over a period of just a few months.
Costs of the survey
According to the planning budgets, the cost of LF-only mapping covering 659 districts was $1,212,209, while the budget for podoconiosis-only mapping covering 659 districts was estimated at $1,211,664. The actual financial cost of the integrated mapping of LF and podoconiosis was $1,291,400 - a significant cost reduction through savings in the areas of team training, ICT and supplies, and travel, as described in Table 2. Overall, the two disease-specific surveys combined ($1,212,209 + $1,211,664 = $2,423,873) would have cost 1.9 times as much as the integrated survey approach ($2,423,873 / $1,291,400 ≈ 1.9).
Summary of the results
Individual-level data were available for 129,959 individuals from 1,315 communities in 659 districts. A total of 8,110 individuals with lymphoedema of the lower limb were identified. A total of 139 individuals were found to be positive for W. bancrofti antigen with ICT. At least one ICT-positive case was found in a total of 89 sub-districts in 75 districts.
Discussion
We present here detailed practical information on integrated mapping of LF and podoconiosis in Ethiopia. By presenting our approach, we hope to provide important guidance for future integrated mapping of these and other NTDs. Notable features of the work were that we were able to implement a first integrated mapping of LF and podoconiosis as planned without any major challenges. The project received support from the community, district, regional and federal officials. The approach enabled extensive geographical coverage at relatively low cost, which would have been difficult to attain using disease-specific mapping. We hope that our approach may be implemented in other countries where both diseases are endemic. In countries where LF mapping has already been conducted, applying our approach is likely to be beneficial in monitoring LF intervention progress. In countries where there is no LF, but podoconiosis is suspected to be endemic, the approach could be applied to identify areas requiring intervention.

Figure 2 Clinical algorithm for podoconiosis diagnosis. There is no point-of-care diagnostic tool for podoconiosis. Currently, podoconiosis is a diagnosis of clinical exclusion based on history, physical examination and certain disease-specific tests to exclude common differential diagnoses. All individuals included in the survey were tested for circulating W. bancrofti antigen using an ICT. Those found to be positive, regardless of the presence or absence of lymphoedema, were excluded from further clinical examination for podoconiosis. The common differential diagnoses of podoconiosis are lymphoedema due to LF, systemic disease and leprosy. The differentiation of podoconiosis from LF used a panel approach, including clinical history, physical examination, and antigen and antibody tests. The swelling of podoconiosis starts in the foot and progresses upwards, whereas the swelling in LF starts elsewhere in the leg. Podoconiosis lymphoedema is asymmetric, usually confined to below the knees, and unlikely to involve the groin. In contrast, lymphoedema due to LF is commonly unilateral and extends above the knee, usually with groin involvement. In addition to the clinical history and physical examination, an antigen-based ICT was used to distinguish between the two causes of lymphoedema, although the majority of LF patients are also negative for the antigen-based test. To distinguish between podoconiosis and leprosy, clinical history and physical examination were used. Patients were asked if they had been diagnosed with leprosy, and physical examination was conducted to exclude signs of leprosy, including sensory loss. Onchocerciasis has clear clinical features which can easily be distinguished from podoconiosis; all lymphoedema cases were examined for signs of onchocerciasis. Systemic causes of lymphoedema were ruled out by examination of other organ systems. Hereditary causes of lymphoedema were excluded since they occur at or immediately after birth, whereas podoconiosis requires extended exposure to red clay soil.
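To make the exclusion logic of Figure 2 easier to follow, the sketch below restates it as a small decision function. It is a reading aid only, not the validated field instrument: in practice the diagnosis relied on trained health workers, pictorial aids and team discussion, and the attribute names here are our own.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    years_in_woreda: int
    lymphoedema_months: int        # duration of lower-limb swelling
    ict_positive: bool             # circulating W. bancrofti antigen
    signs_of_leprosy: bool         # e.g. sensory loss or prior diagnosis
    signs_of_onchocerciasis: bool
    systemic_cause: bool           # e.g. heart or liver failure on examination
    congenital_onset: bool         # hereditary lymphoedema such as Milroy syndrome

def classify(p: Participant) -> str:
    if p.ict_positive:
        return "LF antigen-positive: excluded from podoconiosis examination"
    if p.lymphoedema_months <= 3 or p.years_in_woreda < 10:
        return "does not meet the case definition"
    if p.signs_of_leprosy or p.signs_of_onchocerciasis:
        return "excluded: other infectious cause of lymphoedema"
    if p.systemic_cause or p.congenital_onset:
        return "excluded: systemic or hereditary lymphoedema"
    return "confirmed podoconiosis"
```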
The mapping data were formally presented to the Federal Ministry of Health within one year of the start of the project. The results were disseminated at national level in the presence of stakeholders. The maps generated will be used to inform LF elimination and podoconiosis control programs in the country. The endemic districts identified will be the basis for scaling up interventions, while the population-at-risk estimates serve as the basis for expanding preventive interventions.
The integrated mapping approach clearly indicated how integrated mapping, international collaboration and mobile technology together enable the conduct of large surveys for NTD control and elimination. There were several factors that contributed to the success of the project: firstly, the mapping was needs-based - the national program, international donors and partners all needed the data urgently; secondly, the Federal Ministry of Health was an integral part of the planning and implementation of the mapping, which greatly facilitated collaboration from district health offices and other stakeholders; and thirdly, other recent national surveys [26,35] meant that officials on the ground were familiar with the importance of large-scale surveys.
The integrated mapping proved to be an appropriate, well-received alternative to individual disease surveys. It is probable that resources were saved because adding podoconiosis to the LF mapping required only one more smartphone per team and one more person for data collection and associated tests, such as collection of serum for the antibody test. Parallel efforts and duplication of training, transport, community mobilization, ICT testing, and a database server for two independent disease-specific surveys were avoided. The resources saved from such duplication allowed more districts to be covered, which enabled mapping of all the districts of Ethiopia. In the initial plan, both disease-specific mapping projects were targeting smaller areas where the diseases were suspected. There are several other advantages of the integrated mapping approach beyond those of cost. Many logistic matters were addressed together, such as signing agreements, procurement of supplies, vehicle rental and data management. All these activities would have been duplicated if the individual surveys had been conducted separately. Importantly, the NTD experts were taken away from their daily activities for shorter periods of time.

Figure 3 Mapping survey setup. Each individual participating in the survey was registered and gave informed written consent. They were then assigned an individual ID and given a card, which they retained throughout the survey. The ICT test was then conducted, followed by the podoconiosis and LF questionnaires. Finally, ICT test results were provided to each individual.
The application of mobile technology for mapping has shown that it is a highly effective tool in a resource-constrained setting such as Ethiopia. Previous studies have documented that the time spent and the cost of mobile-based data collection were much lower than for paper-based data collection [26]. Our planning budget suggested that the cost of data collection and entry was reduced by half when moving from paper-based to mobile-based data collection. In addition, the data were available in real time, allowing experts to give prompt feedback to field teams. To overcome the challenges of power shortages, a car adapter was procured and each team was able to charge their smartphones from the assigned vehicle. In addition, the phones were kept in flight mode during data collection to save power. In areas where there was no mobile network coverage, the smartphones were able to store data from up to 10,000 individuals, making data collection extremely flexible. Throughout the process, only one mobile phone became non-functional, and this was because the enumerators mistakenly uninstalled the program. Each team was provided with 500 paper-based questionnaires in case of any problems with the smartphones.
Strong community mobilization was another important component of the mapping. The support obtained from federal, regional and district level officials was instrumental in implementing the project without significant challenge. The involvement of village-level support teams, including HEWs and kebele leaders, was vital for community mobilization. Given the strong link and trust built between the HEWs and the community, mobilization was achieved in a relatively short period of time. Identifying community-level personnel such as HEWs is important to build trust between data collectors and the community, and to achieve adequate mobilization and consent for participation. HEWs are uniquely positioned to estimate the number of adults in the community and conduct house-to-house mobilization [36,37]. The consent process is also worthy of mention: written consent was thought to be appropriate by community leaders before the start of the community mobilization. In some communities, community leaders were consulted in the decision to participate or not. Community leaders were found to be catalysts for participation in the study.
Despite initial concerns surrounding the podoconiosis algorithm used in the current study, it was easily understood by the enumerators. Supervisors witnessed a high level of accuracy in the use of the algorithm by the data collectors. In cases where there were doubts, discussion was held among the team. Previous studies have indicated that health workers can easily distinguish podoconiosis from other causes of lymphoedema in endemic districts [38]. However, in the current study, the clinical algorithm was used in areas where podoconiosis is not endemic or is potentially co-endemic with LF. A combination of clinical history, physical examination and blood tests was used to reach a diagnosis. Although easily understood by the enumerators, at times the procedure was found to be lengthy and tedious: further refinement of the algorithm is important. This could be achieved by evaluating the predictive diagnostic performance of individual signs and symptoms of podoconiosis.
The skills acquired through the integrated mapping of these two diseases are highly transferable to other disease mapping exercises. The smartphones used for the mapping were provided to EPHI, and are currently being used for mapping STH and schistosomiasis. We have trained some 136 health workers to use smartphones in mapping, and these skilled enumerators are available for future mapping activities. Integrated mapping has led to further integration of work on these two diseases, for example the development of an integrated morbidity management guideline for LF and podoconiosis, and the inclusion of an indicator (the number of lymphoedema cases segregated by cause and age) in the routine national health management information system (HMIS).
Challenges
Although the mapping project was successfully completed in a very short period of time, it was not without obstacles. First, bringing two independently planned projects together resulted in several challenges, since the two projects were initially intended to cover different geographical areas, and the planned sample sizes and methodologies were different. This was partly due to the absence of an integrated mapping guideline for these two diseases, and partly to this being the first nationwide mapping conducted for podoconiosis. Challenges also arose in relation to contracts and agreements: organizational cultures differ, as do expectations and formats for such agreements, so harmonizing these multi-organizational agreements took considerable time (a minimum of 3 to a maximum of 8 months). In addition, funding did not become available at the same time for the different partners. Second, although the use of smartphones was a key strength of the mapping, technical support was provided remotely by a team from the Task Force for Global Health. During the pilot phase, some inconsistencies in the flow of the questionnaires were identified which needed immediate solutions; solving simple technical problems took time due to the limitations of virtual communication. Third, the server for the data was hosted outside the country, initially giving rise to concerns over data ownership (Table 3). Fourth, the ICTs were shipped in four batches, which led to unnecessary cost, waste of time and interruption of data collection for a week.
Conclusion
To achieve the London Declaration targets for 2020 and those of the WHO road map for NTDs [7], rapid mapping is very important. Integrated mapping of podoconiosis and LF in Ethiopia was conducted at a large scale in a short period of time. The approach is the first of its kind and provides important lessons for co-endemic or podoconiosis-endemic countries. Strong in-country leadership, international collaboration and use of mobile technology contributed to the success of the exercise. The approach reduced costs, expanded geographical coverage and sped up the availability of data for decision makers. Data were formally presented to the Ministry of Health within one year of the start of the project, and will be used to inform national control and elimination programs.

Table 3 Challenges and solutions taken during the mapping

Challenge: Batteries running out of charge. Solution: Charge all the chargers after work and prepare them for the next day; use the car charger in areas where there is no electricity. In a few cases where charging was not possible, paper-based data collection was conducted for one day while the other team members charged the smartphones in nearby towns.
Challenge: Inability to edit data once submitted from the smartphone. Solution: Communicate with the central team to discuss any errors and edit the data promptly.
Challenge: Lack of network. Solution: Store the data on the smartphone and transfer it when there is internet access.
Challenge: Community mobilization. Solution: Discuss with the community the best time for a mass gathering, such as early in the morning or late in the afternoon; whenever appropriate, use holidays.
Challenge: Inaccessibility (some districts during the rainy season). Solution: Use alternative transport such as motorbikes, boats or horses. In areas where no other possibility existed, walking was the last resort.
"year": 2014,
"sha1": "0621d323137f43c6c87e4cdf10538fecc34b0315",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-7-397",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b7956d056e1e38989a0ab0d8d20c2853fea3afd0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of Proficiency and Age of Language Acquisition on Working Memory Performance in Bilinguals
This study examined the influences of language proficiency and age of language acquisition on working memory performance in bilinguals. Bilingual subjects were administered the reading span task in parallel versions for their first and second language. In Experiment 1, the language proficiency effect was tested by examining less and highly proficient second language speakers. In Experiment 2, age of language acquisition was examined by comparing the performance of proficient second language speakers who acquired the second language either early or later in their lives. Both proficiency and age of language acquisition were found to affect bilingual working memory performance, and the proficiency effect was observed even at very high levels of language competence. The results support the notion of working memory as a domain that is influenced both by a general pool of resources and by certain domain-specific factors.
Bilingualism today is more of a rule than an exception, since at least half of the world's population is bilingual (Grosjean, 1989). Consequently, research on bilingual cognitive functioning has received increasing attention in the past twenty years. The majority of studies in the field have tackled questions of long-term memory organization, lexical storage, access and retrieval, bilingual language production, etc. (cf. Costa, La Heij, & Navarrete, 2006; French & Jacquet, 2004; Schwartz & Kroll, 2007). Although the idea of connections between immediate memory capacities and bilingualism is not a new one (cf. Kolers, 1963; Weinreich, 1953), the number of studies addressing this subject is considerably smaller. Working memory - more specifically its verbal component, the phonological loop - is seen as a system of critical importance in the process of language acquisition (Baddeley, Gathercole, & Papagno, 1998; Gathercole & Pickering, 2000). In this view, the role of the phonological loop in recollection of familiar phonological material (known words or numbers) is regarded as a byproduct of the development of a system whose primary function is the acquisition of new phonological material (i.e. new language). Though the previous statement makes the significance of working memory for people acquiring a second language (future bilinguals) evident, connections between bilingualism and working memory go beyond the language acquisition processes. There is a vast number of studies showing that long-term knowledge affects immediate memory performance (for an overview, see Thorn, Frankish & Gathercole, 2008). These influences are argued to be both phonological (e.g. Conrad & Hull, 1964; Gathercole, Frankish, Pickering, & Peaker, 1999) and lexical/semantic (e.g. Hulme, Maughan, & Brown, 1991; Hulme et al., 1997; Poirier & Saint Aubin, 1995) in nature. Since bilinguals typically differ in linguistic knowledge of the first (L1) and second language (L2), it is important to investigate the effect of these differences on working memory performance in the two languages. The question of concern here is whether a person who acquired first one language and then a second (using the phonological loop system) exhibits different working memory performance in the two languages, and what factors might drive those differences. Another potentially interesting question, both for current models of working memory (e.g. Baddeley, 2003) and for bilingual educational practice, is whether the acquisition of L2 might have a backward effect on working memory functioning in L1. In other words, could the functioning of verbal working memory, or perhaps even working memory in general, be improved by L2 acquisition? Current notions of working memory do not predict such an outcome, and they would need to be revised in case of a positive effect of L2 acquisition on working memory.
Working memory -more specifically its verbal component, phonological loop -is seen as a system of critical importance in the process of language Corresponding author: dvejnovi@uns.ac.rs acquisition (Baddeley, Gathercole, & Papagno, 1998;Gathercole & Pickering, 2000).In this view, the role of phonological loop in recollection of familiar phonological material (known words or numbers) is regarded as a byproduct of the development of a system whose primary function is the acquisition of new phonological material (i.e.new language).Though the previous statement renders the significance of working memory for people acquiring the second language (future bilinguals) evident, connections between bilingualism and working memory go beyond the language acquisition processes.There are a vast number of studies showing that long-term knowledge affects immediate memory performance (for an overview, see Thorn, Frankish & Gathercole, 2008).These influences are argued to be both phonological (e.g.Conrad & Hull, 1964;Gathercole, Frankish, Pickering, & Peaker, 1999) and lexical/semantic (e.g.Hulme, Maughan, & Brown, 1991;Hulme et al., 1997;Poirier & Saint Aubin, 1995) in nature.Since bilinguals typically differ in linguistic knowledge of the first (L1) and second language (L2) it is important to investigate the effect these differences on the working memory performance in two languages.The question of concern here is whether a person acquired the first and then the second language (using the phonological loop system), exhibits different working memory performance in the two languages, and what factors might possibly drive those differences.Another potentially interesting question, both for actual models of working memory (e.g.Baddeley, 2003) and bilingual educational practice, is whether the acquisition of L2 might have backward effect on working memory functioning in L1.In other words, could the functioning of verbal working memory, or perhaps even working memory in general, be improved by the L2 acquisition?Current notions of working memory do not predict such an outcome, and they would need to be revised in case of positive effect of L2 acquisition on working memory.
Present study had two specific goals.The first one was to examine the influence of language proficiency on verbal working memory performance in bilinguals' L2.Language proficiency in L2 is known to affect L2 processing from the levels as low as the individual word recognition.For example, Favreau and Segalowitz (1983) showed that highly proficient bilinguals exhibited greater semantic priming than less proficient bilinguals, especially at short stimulus onset asynchrony intervals.Whether, and to which extent, the proficiency effect is present in the verbal working memory operation is still unclear, as the previous findings have not been unanimous.In an early study of Harrington and Sawyer (1992) native (L1) Japanese speakers with upper-intermediate to advanced proficiency in L2 English were tested on a version of reading span task of Daneman and Carpenter (1980).The subjects exhibited no differences in working memory span in L1 and L2.However, significant correlation between L2 proficiency and working memory span in L2 was observed in the same study, suggesting that the issue required further inquiry.Moreover, the same study reported of moderate correlations between L1 and L2 working memory spans (higher correlations were found in Osaka andOsaka, 1992 andMiyake andFriedman, 1998).These findings were compatible with the capacity theory of comprehension (Just & Carpenter, 1992).More recently, Service, Simola, Metsaenheimo and Maury (2002) examined two groups of Finnish-English bilinguals.The task in this study was to memorize the last words of the auditively presented sentences while judging their correspondence with the pictures that were shown.No difference between L1 and L2 working memory spans was found in a group of highly proficient L2 speakers.However, significantly lower working memory spans in L2, as compared to L1 spans, were registered in a group of less proficient L2 speakers.These results suggested that proficiency effect does exist, but that it can only be observed at lower levels of L2 proficiency.Investigating the same issue, Van den Noort, Bosch and Hugdahl (2006) conducted a study that examined working memory functioning of trilinguals.Their subjects were native (L1) Dutch speakers who spoke fluent German (L2) 1 and less-fluent Norwegian language (L3).These subjects performed better on the L1 reading span task, as compared to the L2 task, and their performance in L2 was better than in L3.Accordingly, the study confirmed the language proficiency affects working memory performance.However, the study of Van den Noort et al. showed that the effect might not be exclusive feature of insufficiently proficient language processing.Contrary to the findings of Service et al. 
(2002), the study suggested that the effect of language proficiency might be a more comprehensive one, and that it could be registered in fluent speakers of foreign language, too.In the same vein, an fMRI study by Chee, Soon, Lee and Pallier (2004) reported proficiency effect in patterns of brain activation during auditory n-back task.Recent studies, thus, generally support the view of some kind of dependence of verbal working memory performance on language proficiency.Whether this dependence is only present in less proficient L2 speakers, meaning that there is some proficiency threshold beyond which the effect is not observed, is still unclear.Alternatively, it might be the case that the proficiency is influencing verbal working memory performance even in highly competent L2 speakers.This dilemma is addressed in our Experiment 1.
The second goal of our study was to examine the age of language acquisition effects on verbal working memory performance.Relevance of age of acquisition factor was extensively explored and documented in different areas of psycholinguistics, especially after the influential study of Morrison and Ellis (1995) which showed that early-acquired words are processed faster than the later acquired ones, even after controlling for the frequency effect.Effects of age of acquisition on the word level language processing were shown in different experimental paradigms: naming (e.g.Brysbaert & Ghyselinck, 2006), lexical decision (Morrison & Ellis, 2000), semantic categorization task 1 Proficiency level in L2 German of these subjects was comparable to the proficiency level in L2 English of the highly proficient group from the study of Service et al (2002).(Brysbaert, Van Wijendaele & De Deyne, 2000), etc.Several reported studies explored the effects of age of acquisition in the second language, as well.
Mainly motivated by the pursuit of the critical period for language acquisition, these studies showed clear differences in the processing of L2 words as a function of their age of acquisition. For example, Silverberg and Samuel (2004) found effects of L2 semantic priming in early, but not in late bilinguals. Their results suggested that early bilinguals might have a unitary conceptual system, whereas late bilinguals use separate conceptual systems for each of their languages. Abundant research on age of acquisition in the past two decades has demonstrated that this factor affects various aspects of language processing, yet thus far no study has investigated whether the age of acquisition of a particular language might be a factor relevant for working memory processing in that language. On the other hand, bilingual working memory research, as noted, has focused principally on the examination of the language proficiency effect. In our Experiment 2, we made a first exploratory step in examining the possibility of an independent influence of the age of language acquisition on verbal working memory performance.
EXPERIMENT 1

The goal of Experiment 1 was an elaborate examination of the effect of second language proficiency on working memory performance in bilinguals.
Method
Participants: Thirty-one first-year students of psychology at the University of Novi Sad took part in this experiment. Out of all first-year psychology students, those with the lowest and the highest scores on the placement test of English language (Quick Paper and Pen Test, 2001) were chosen for the experiment. (The placement test is regularly administered to all first-year students in order to assign them to study groups with a similar English competence level for their English language course. The test comprises 60 multiple-choice questions that examine knowledge of English grammar, vocabulary and comprehension, and is a valid criterion for English language competence selection; Radić-Bojanić, 2008.) Selected students formed the less proficient (LP) and the highly proficient (HP) experimental groups. All subjects spoke Serbian as their native language (L1) and English as their second language (L2). The groups were of similar age (M = 20 for LP and 20.5 for HP; t(29) = 0.877, p > 0.05) and did not differ in the age at which they began L2 acquisition (M = 10.6 for LP and 9.6 for HP; t(27.394) = 1.466, p > 0.05), nor in the duration of L2 learning (M = 9.87 for LP and 10.69 for HP; t(21.731) = -0.852, p > 0.05). (Language background information was obtained through subjects' self-reports.) All 15 subjects from the LP group attended the lowest offered level of the English course (pre-intermediate), while the HP subjects attended the most advanced level of the English course offered (upper-intermediate) or were exempt from the course due to very high English competence. Mean English test scores were 27.2 and 45.94 (out of 60) for the LP and the HP group, respectively. The difference between them was significant (t(29) = -18.224, p < 0.01).
Tasks: Subjects were administered the reading span task of working memory (Daneman & Carpenter, 1980) in two parallel language variants: Serbian and English. The reading span task was chosen as it is the most commonly used procedure for the assessment of verbal working memory capacity. The English task procedure resembled the procedures used in Waters and Caplan (1996) and Engle, Tuholski, Laughlin and Conway (1999), and was based on the findings of a study of methodological and technical aspects of the reading span task (Lalović & Vejnović, 2008). Stimuli (task elements) were presented on the computer screen one at a time. A task element consisted of a sentence, followed by a question mark, followed by an uppercase target-word (for example: "Nigel can't swim as fast as his younger window and his friends can.? FOWER"). Subjects were asked to read each sentence out loud the moment it appeared on the screen, say "yes" if the sentence made sense or "no" if it did not, and then read and memorize the following target-word. (Half of the sentences were made semantically implausible by substituting an inanimate noun for the animate subject; semantic verification of the sentences was introduced in order to make sure the sentences were read for comprehension.) When this was done, presentation of the next element was activated by the experimenter. After the presentation of several elements, three question marks would appear on the screen, notifying the subject to pass to the reproduction phase, in which he or she was instructed to write all the target-words from the previous trial on the response sheet. The reproduction phase was then succeeded by the presentation of the next trial, and the presentation and reproduction phases alternated until the end of the experiment.
The experiment consisted of twelve trials (sequences of elements between two reproduction phases), and trial size (2-5) was defined as the number of elements in a given trial. The total number of elements in the experiment was 42, so that trials of each size were administered three times within the experiment. The order of presentation of the trials was randomized, so the subject could not know the timing of the next recollection phase. The Serbian version of the task was exactly the same as the described English version in all aspects save for the language employed.
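That trial structure can be sketched as follows (our illustration, not the authors' presentation code): three trials of each size from 2 to 5, shuffled so that the recall point is unpredictable.

```python
import random

trial_sizes = [size for size in (2, 3, 4, 5) for _ in range(3)]
random.shuffle(trial_sizes)                    # randomized order of the 12 trials
assert len(trial_sizes) == 12 and sum(trial_sizes) == 42
print(trial_sizes)                             # e.g. [3, 5, 2, 4, 2, ...]
```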
Stimuli: Eleven- to fifteen-word-long sentences, followed by target-words (nouns), were used as stimuli. They were presented on a 17" computer screen, in 20 pt white Arial font on a dark background. In order to make the two language versions of the task parallel, the stimuli were matched on several relevant characteristics: sentence length, syntactic and vocabulary complexity, target-word frequency and target-word length (as measured by the number of phonemes). (In a pilot examination it was ensured that the syntactic complexity and vocabulary of the selected sentences were appropriate for the low-proficiency level of L2; half of the sentences were then kept for the English version of the task, and the other half was translated for use in the L1 (Serbian) version.) Mean sentence length was 12.36 words for the Serbian and 12.43 for the English task, with the two language variants not differing significantly (t(82) = -0.365, p > 0.05). Frequencies of the Serbian target-words were extracted from the Serbian Language Corpus (Kostić, 1999) and the English from the CELEX database (Baayen, Piepenbrock, & Gulikers, 1995). Mean target-word frequency was 266.29 occurrences per million words for Serbian, and 194.43 occurrences per million words for English target-words; this difference was not significant either (t(82) = 1.139, p > 0.05). The average number of phonemes was 4.69 in Serbian target-words and 4.38 in English target-words, and the difference was statistically insignificant (t(82) = 1.319, p > 0.05).
Design: The main analysis of Experiment 1 included L2 working memory span as the dependent variable, L2 proficiency as a categorical predictor, and L1 working memory span as a continuous covariate predictor. Additionally, ANOVA and hierarchical regression models were performed (see the Results section), and within-group comparisons of the L1 and L2 working memory spans were made for each of the groups.
Working memory spans were operationalized as the average proportion of correctly reproduced elements across all trials. For each trial, an index representing the number of correctly reproduced elements divided by trial size was calculated. The final score was obtained as the sum of all the indices divided by the total number of trials in the experiment (12), so the reading span index took values between 0 and 1.
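The scoring rule translates directly into a short function. The sketch below is our illustration of the described computation, with made-up response data.

```python
def reading_span_index(correct_per_trial: list[int], trial_sizes: list[int]) -> float:
    """Mean proportion of correctly reproduced target-words across trials."""
    indices = [c / size for c, size in zip(correct_per_trial, trial_sizes)]
    return sum(indices) / len(indices)

sizes    = [2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]   # three trials of each size
recalled = [2, 2, 2, 3, 2, 3, 3, 2, 4, 3, 2, 4]   # hypothetical subject
print(round(reading_span_index(recalled, sizes), 3))   # a value between 0 and 1
```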
Procedure: The two language versions of the task were administered individually in one session. The order of administration was balanced and had no effect on the results. The DMDX software v.3.2.5.4 (Forster & Forster, 2003) was used for the presentation of the stimuli. The average duration of a session was around 25 minutes.
Results
Correlations between the English test scores and the reading span measures in the two languages were calculated across subjects from both groups. The span scores in the two language versions of the task were highly and positively correlated (r(29) = 0.647, p < 0.01). The English test scores were significantly correlated with working memory span in the English task (r(29) = 0.432, p < 0.05), but not with the span in the Serbian version of the task (r(29) = 0.199, p > 0.05).
Mean working memory spans and standard deviations for the two groups of subjects are shown in Table 1. The proficiency effect on L2 span was examined by linear modeling (an ANCOVA design). A significant main effect of proficiency level on L2 reading span task performance was registered (F(1, 28) = 5.022, p < 0.05), even after controlling for the covariate L1 span effect (F(1, 28) = 18.760, p < 0.01). An additional test showed that the unique contribution of proficiency level was significant (R² = 0.12, F(1, 29) = 4.955, p < 0.05) after the L1 span effect was accounted for. At the same time, the effect of proficiency on L1 span was not significant (F(1, 29) = 1.21, p > 0.05).
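For readers wishing to reproduce this type of analysis, the ANCOVA can be sketched with statsmodels as below. The data frame and its column names are illustrative stand-ins, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: one row per subject, with group membership and spans in both languages
df = pd.DataFrame({
    "group":   ["LP"] * 3 + ["HP"] * 3,
    "span_l1": [0.70, 0.75, 0.68, 0.72, 0.80, 0.77],
    "span_l2": [0.55, 0.60, 0.52, 0.68, 0.74, 0.71],
})

# L2 span modelled from proficiency group, with L1 span as a continuous covariate
model = smf.ols("span_l2 ~ C(group) + span_l1", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F tests for the group and covariate effects
```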
In order to further inspect the proficiency effect, reading span task performance in L1 and L2 was compared within each group of subjects. Both groups had significantly higher spans when they performed the task in their native language than in their second language (t(14) = 5.809, p < 0.01, for the LP group; and t(15) = 4.633, p < 0.01, for the HP group).
Discussion
The results showed that the reading span scores in L1 and L2 were highly correlated. Subjects' reading span in L2 (English) was moderately correlated with their English knowledge test scores, while, as expected, English knowledge test scores did not correlate with L1 working memory spans. The pattern of these correlation coefficient magnitudes is similar to the one reported in Osamu (2006). It suggests that performance on the reading span task in L2 depends both on 1) a common pool of resources that is involved in verbal working memory processing independently of task characteristics and, at least to some extent, on 2) the mastery of L2.
Subjects from the highly proficient L2 group performed substantially better on the L2 task than their less proficient counterparts, while the two groups' performance in L1 was comparable. Importantly, the effect of proficiency level on L2 performance remained significant even after the L1 performance influences were statistically partialled out, and its unique contribution improved the model significantly. Thus, Experiment 1 showed that language proficiency influences working memory performance in L2.
Additionally, Experiment 1 showed that both experimental groups were more successful when faced with the reading span task in L1 (Serbian) than when performing the same task in L2 (English). The better L1 performance of the less proficient group can easily be attributed to their higher proficiency in L1, as compared to their proficiency in L2. More importantly, a similar claim can be made for the highly proficient L2 subjects too, for their L1 proficiency is arguably superior to their L2 proficiency as well. Consequently, it can be concluded that the proficiency effect is not an exclusive feature of non-proficient language processing: our results showed that it can be spotted even in highly proficient L2 speakers.
However, there is an alternative explanation for the subjects' superior L1 performance, one that is particularly relevant for the highly proficient group. As described, our subjects started acquiring L2 (English) at the age of nine or ten - at a time when they were already reasonably competent L1 (Serbian) speakers. Therefore, it might be the case that the difference in performance between the two languages observed in highly proficient subjects was not (only) due to their superior proficiency in L1, but (at least partly) due to the difference in the age at which they had acquired those languages. This issue was addressed in our Experiment 2.
EXPERIMENT 2
The main goal of Experiment 2 was to examine the effect of age of language acquisition on verbal working memory performance in bilinguals.
Method
Participants: A group of 15 subjects, students of psychology or of Serbian as a language for ethnic minorities, participated in Experiment 2. Their native language was Hungarian, and they all spoke fluent Serbian, the language of the community they live in. The results of this group (the early-acquired (EA) group) were contrasted with the data obtained from the highly proficient L2 speakers of Experiment 1 (the later-acquired (LA) group). Both groups, thus, comprised proficient L2 speakers. Importantly, the groups matched in L2 proficiency self-assessments on a ten-point Likert scale (M = 7.31 for LA and 8 for EA; t(29) = 1.92, p > 0.05) and crucially differed in the age at which they had started L2 acquisition (M = 9 for LA and 4 for EA; t(29) = 6.680, p < 0.01).
Tasks: Two language versions of the reading span task were administered: Hungarian and Serbian. The Serbian version was the same as in Experiment 1. The Hungarian version was constructed for the purpose of this experiment and matched the Serbian version in all relevant aspects (see below).
Stimuli: The two language versions of the reading span task were matched for sentence length, syntactic and vocabulary complexity, and frequency and length of the target-words. Average sentence length was 12.36 words in both versions of the task. Hungarian target-word frequencies were extracted from the Hungarian National Corpus (Magyar Nemzeti Szövegtár, 2003); their mean was 205.43 occurrences per million, which matched the mean Serbian target-word frequency (t(82) = 1.026, p > 0.05). The mean number of phonemes in target-words was 4.45 and 4.69 for the Hungarian and Serbian versions of the task, respectively, with the two versions not differing in this respect either (t(82) = 0.962, p > 0.05).
Design:
The main analysis of Experiment 2 included L2 working memory span as the dependent variable, with age of L2 acquisition as a between-group categorical predictor and L1 working memory span as a continuous covariate predictor. A within-group comparison of the L1 and L2 spans of the EA group was performed for further examination of the proficiency effect. Working memory spans were calculated in the same fashion as in Experiment 1.
Procedure: Administration of the task was individual. Participants were administered the two parallel language variants of the reading span task (Hungarian and Serbian) in one session. The order of task administration was balanced and had no effect on the results. Technical aspects of the presentation of the stimuli were identical to those in Experiment 1.
Results
Descriptive statistics for the subjects tested in Experiment 2 (the EA group), together with those for the highly proficient subjects from Experiment 1 to whom they were compared, are displayed in Table 2. Due to the lower variability in language proficiency of the EA subjects, their L1 and L2 spans were even more highly correlated (r(29) = 0.84, p < 0.01) than those of the groups tested in Experiment 1. A linear model (ANCOVA) was performed in order to examine the effect of group membership on L2 span. A significant main effect of group was registered (F(1, 28) = 5.064, p < 0.05), even after controlling for the covariate L1 span effect (F(1, 28) = 41.405, p < 0.01). Conversely, the group effect on L1 span was not significant (F(1, 29) < 1). While the compared groups were equal in their L1 performance, the EA group performed significantly better in L2. Furthermore, subjects from the EA group had larger spans in L1 than in L2 (t(14) = 2.342, p < 0.05).
Discussion
As in Experiment 1, a high correlation between working memory spans in L1 and L2 was found. This correlation was even higher than the one registered in the HP group of Experiment 1.
Unlike in Experiment 1, subjects from both groups considered in the Experiment 2 analyses were proficient speakers of both L1 and L2. They matched in self-assessed L2 proficiency, with the critical difference between the groups being the age of L2 acquisition. Subjects from the EA group started L2 acquisition at the age of four, while those of the LA group did so at the age of nine. This difference is both statistically significant and substantial. Furthermore, the EA group subjects acquired L2 within the critical period that is often claimed to be the maturational constraint for fully successful language acquisition (cf. Johnson & Newport, 1989; Newport, 1990), whereas the age of first exposure to L2 for the LA group goes well beyond the critical age of seven, with their L2 acquisition continuing into the puberty period.
Experiment 2 showed that the two groups had similar working memory spans in their L1. The EA group, however, performed significantly better on the L2 task than the LA group. Accordingly, the experiment showed that verbal working memory performance is affected by the age of language acquisition.
The results of Experiment 2 also showed that the L1 span was larger than the L2 span even in subjects who acquired L2 early in their lives and have a very good command of it. This finding gives additional support to the results of Experiment 1. More specifically, it shows that the language proficiency effect on verbal working memory is not only characteristic of lower levels of L2 mastery, but is present even in very highly skilled speakers.
GENERAL DISCUSSION
This study examined factors that influence bilingual verbal working memory performance. The nature of the previously reported proficiency effect was under particular scrutiny in our experiments, while the effect of age of language acquisition was examined for the first time. The answer to the first principal question of the study (is verbal working memory performance affected by language proficiency?) is clearly affirmative. Several findings support this claim. Most importantly, the critical comparison of the two experimental groups in Experiment 1 showed that proficient L2 speakers had larger L2 working memory spans than the group characterized by lower L2 proficiency, even after the influence of L1 performance was statistically controlled for. At the same time, the two groups did not show significant differences in their L1 performance. Moreover, subjects' scores on the L2 knowledge test were correlated with their L2 spans, but not with their L1 spans. The registered proficiency effect confirmed the previous findings of Service et al. (2002) and Van den Noort et al. (2006), in spite of notable methodological differences among the three studies. In particular, Service et al.'s experimental task was considerably different from ours: their subjects were shown pictures while the sentences were presented auditorily. The task was to verify the correspondence between the two while at the same time memorizing the last words of the presented sentences. The languages used were Finnish and English, and there were several procedural particularities in that study. In Van den Noort et al.'s study the standard reading span task was administered in three Germanic languages (Dutch, German and Norwegian), while our study applied the same task with a different scoring procedure. The languages we used (Serbian, Hungarian and English) originate from different and quite distant language groups. Thus, the concurrence of the results from the three studies shows that the proficiency effect is robust enough to be registered with different experimental procedures and in very different languages. Additional support for this finding comes from the neurological study of Chee et al. (2004), where different brain activation at different L2 proficiency levels in a working memory (n-back) task was reported. On the basis of the presented results, we concur with the view of Service and colleagues, and argue that language proficiency affects verbal working memory performance. In our view, this happens in the following way: higher proficiency in a given language leads to greater automatization of its processing, which lowers the processing costs of comprehending the verbal material. This in turn means that a larger portion of the available resources can be employed in retaining information in working memory, which is ultimately observed as superior working memory spans in proficient language processing.
Furthermore, the results of the study allow a more precise specification of the registered proficiency effect. In Experiment 1, the effect was shown at lower levels of language proficiency (L1 span > L2 span). More importantly, the same result was obtained in highly proficient L2 speakers as well, for even highly proficient L2 subjects performed better when they were administered the task in L1 (in which they were more proficient) than in L2 (which they mastered fairly well, but not as well as L1). Finally, a larger L1 span was found even in the group of early L2 acquirers (Experiment 2). Clearly, these results conflict with the threshold hypothesis, which predicts that the effect is observed at lower levels of language proficiency only. The presence of the effect at all examined proficiency levels contrasts with the findings reported in the study of Service et al. (2002) and concurs with the results of Van den Noort et al. (2006). We suspect that the divergent findings of Service et al. are likely to have emerged from the application of a considerably different procedure than the one used in the other two studies. Our results suggest that even highly proficient L2 speakers do not reach the level of automatization of L2 processing that is characteristic of their L1 processing. Consequently, when comprehending L2 material they need to engage more of the available resources than in L1 processing, and this in turn results in less information retained in working memory.
However, one objection can be made with respect to the previous discussion. It can correctly be noted that the superior L1 working memory performance of the Experiment 1 subjects may not necessarily have been caused by their superior L1 proficiency, since there was also a substantial difference in the ages at which these subjects had acquired the two languages. Having in mind the critical period hypothesis of language acquisition (cf. Johnson & Newport, 1989; Newport, 1990), one could wonder whether the L1 working memory superiority was (at least partly) caused by this other difference. The outcome of Experiment 2, however, does not support this skepticism, as a similar superiority of L1 performance was found in subjects who had started L2 acquisition very early (at the age of four), well before L1 acquisition was anywhere close to completion, and certainly well before the end of the critical period.
The relevance of the age of language acquisition to verbal working memory performance was examined in Experiment 2, where two highly proficient L2 groups were tested. The critical comparison showed that the group that acquired L2 at an early age had a larger L2 working memory span than the group that started L2 acquisition later (the groups being equal in L1 performance). This showed that, in addition to language proficiency, age of language acquisition is another domain-specific variable that affects bilingual verbal working memory performance. Given that we have shown that proficiency affects working memory performance, a potential objection to the previous conclusion concerns whether the subjects of the two highly proficient groups were exactly equal in their L2 competence, i.e., whether the L2 proficiency of the later-acquired group was as high as that of the early-acquired group. In response, we first note that an effort was made to select the best later-acquired L2 speakers available. Their L2 proficiency was judged as high both by experts and by objective L2 test scores. These L2 (English) test scores could not be compared with objective L2 (Serbian) test scores of the early-acquired group, since a matching standardized test of Serbian as a second language is not available. However, we asked our subjects to assess their L2 proficiency, and these self-assessments showed no between-group difference. Also, subjects from the two groups were similar in declaring a preference for the use of L1. Based on all this, we conclude that there was no evidence of a between-group difference in L2 proficiency, and suggest that the observed results were likely the consequence of differences in the age of L2 acquisition. This first demonstration of the age of language acquisition effect should be confirmed in subsequent experimentation.
Besides the two principal outcomes of the study, two other results are also worth noting. Firstly, a high positive correlation between performance in reading span tasks in different languages was found in both experiments. These correlations are comparable in size to those reported by Osaka & Osaka (1992) and Van den Noort et al. (2006). The differences among the three languages used in our study additionally strengthen this finding. Viewed in the light of the domain generality discussion, this result supports the notion of working memory as a cognitive capacity that is largely domain general, or at least language independent. However, the registered interlingual correlation was far from perfect, indicating that some language-specific factors (e.g., language proficiency and age of language acquisition) also contribute to working memory performance. Secondly, an effect of L2 proficiency or age of L2 acquisition on L1 working memory performance was not shown in either experiment. Learning an L2, even in an early period, does not seem to have a backward beneficial effect on general working memory functioning. As discussed, this comes as no surprise, since current notions of working memory do not predict such an effect.
In conclusion, the results of the study support the notion of a general pool of resources that is engaged in all working memory processing. This general ability by and large determines working memory performance in the verbal domain irrespective of the language employed. However, the research unambiguously showed that some specific characteristics of language also have an effect on verbal working memory performance, namely language proficiency and age of language acquisition. The age of acquisition effect was shown for the first time, and language proficiency was shown to affect working memory performance even at very high levels of language mastery.
Table 1 :
Means and standard deviations of L1 and L2 spans for two groups of subjects.
Table 2 :
Means and standard deviations of L1 and L2 spans for the groups of early and later acquired L2 subjects. | 2016-10-10T18:24:48.217Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "aac301552f501593393e287330ce92fb72af0783",
"oa_license": "CCBYSA",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0048-57051003219V",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "aac301552f501593393e287330ce92fb72af0783",
"s2fieldsofstudy": [
"Linguistics",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
218785471 | pes2o/s2orc | v3-fos-license | Advantages and disadvantages of teleworking in Brazilian public administration: analysis of SERPRO and Federal Revenue experiences
This study investigates the advantages and disadvantages of teleworking in public administration from the perception of 98 teleworkers and 28 managers at the Brazilian Federal Data Processing Service (SERPRO) and the Federal Revenue Service. The qualitative-quantitative research, consisting of questionnaires applied to teleworkers and interviews with managers, dealt with structural, physical, personal, professional, and psychological aspects. The results showed advantages such as better quality of life, work-family balance, greater productivity and flexibility, the possibility of creating standard metrics, better assessment of the workload, reductions in costs, stress, and commuting time, and less exposure to violence. As for disadvantages, the study identified elements such as non-adaptation, lack of communication and connection with the company, psychological problems, lack of infrastructure, and difficulty in controlling the teleworker. The research concludes that teleworking requires a management model that makes it more adherent to the public sphere.
INTRODUCTION
Public administration is the subject of increasing debate in Brazil regarding the state's size, legitimacy, efficiency, and performance. Society has demanded fast, high-quality public services, and the use of information and communication technologies (ICTs) has contributed to responding to this demand. In this context, new forms of service provision are demanded from the state, seeking greater efficiency to respond to the population's needs (FREITAS, 2008).
The Master Plan for the State Apparatus Reform (BRASIL, 1995) worked on the concept of efficiency in public administration, that is, the need to reduce costs and increase the quality of services, which was considered central to the reform. Bresser-Pereira (2008) mentioned that one of the classic objectives of public administration is to protect public assets, i.e., to defend the res publica against private interests. Over the last decades, public administration has been increasingly pressed to increase efficiency and agility, reduce costs and structure, and become more transparent and democratic (FARIA, 2009).
This study approaches teleworking in this context. Teleworking has recently developed within private companies as a consequence of the increasing use of information technology (IT) and has contributed to organizational flexibility and management processes (BOONEN, 2008), promoting greater efficiency in management. According to the Brazilian Teleworking and Telemarketing Society (SOBRATT, 2016), the advantages of teleworking include reduced use of natural resources because of lower consumption in the workplace, improved quality of life for workers, and better urban mobility as a consequence of fewer workers commuting. Data from the Brazilian Institute of Geography and Statistics (IBGE, 2010) show that about 20 million workers work from home.
However, there is little research on teleworking in the public sector. Thus, this study deepened the analysis of teleworking in public administration based on the perception of managers and teleworkers of public agencies in Brazil. The research approached the Federal Data Processing Service (Serpro), the Federal Court of Accounts (TCU), the Regional Labor Court of the State of Paraíba (TRT-PB), the Court of Justice of the State of São Paulo (TJ-SP), and the Federal Revenue Department. The only agencies that responded to the research and participated in the interviews were Serpro and the Federal Revenue Department.
The contribution of this research is to deepen the knowledge about the results of adopting teleworking in public administration, from the perspective of managers and teleworkers, with the intention of promoting the expansion of this form of work to other public agencies, suggesting adjustments in the agencies studied, and contributing to the design of public policies on telework.
LITERATURE REVIEW
The concept of teleworking
Information technology (IT) is currently the main tool supporting companies in their administrative and operational activities. It is one of the fundamental elements in improving the quality of products, services, and results of organizations, providing agility in well-defined bureaucratic processes, which has boosted teleworking in Brazil. The term 'telecommuting,' also known as 'home office' and 'teleworking,' was first presented by Nilles (1975).
For Rabelo (2000), telework means taking the work to the worker, instead of the worker to the workplace. Pérez, Sánchez, and Carnicer (2007) consider telework an alternative form of work organization, characterized by allowing workers to use information and telecommunications technologies totally or partially from their home or a remote place. Boonen (2008) defines telework as a decentralized form of work that was born as a response of the West to the global economic crisis. Finally, Sakuda and Vasconcelos (2005) proposed another definition, considering that teleworking is the use of computers and telecommunications to change the already consolidated work structure, involving several economic, social, organizational, environmental, and legal aspects.
For this study, telework is any work carried out at a distance, that is, outside the workplace, using ICTs that allow working from anywhere, receiving and transmitting information, images, or sound related to the work activity (SOBRATT, 2016).
Telework in the world
Since the study of Dunleavy, Margetts, Bastow et al. (2006), a series of discussions on the role of governance in the digital age has pointed to the flexibility of work in public organizations. Among the issues raised, one organizational innovation that has been adopted in public organizations is teleworking. The issue has been approached recently by authors such as Dahlstrom (2013), Caillier (2012, 2013), Eom, Choi and Sung (2016) and De Vries, Tummers and Bekkers (2017, 2018).
There are several studies worldwide related to teleworking. According to Nilles (1994), telework includes more than just working at home and communicating with the office via telecommunication tools. It also includes working in work centers (an office area for employees from several companies) in the teleworker's neighborhood or in satellite offices (offices of the company located in areas where a number of its teleworkers are concentrated).
In this context of widespread use of telecommunications, it is important to first understand the adaptation of users and the background of the implementation of this type of work. For example, Eom, Choi and Sung (2016) investigated the characteristics and behavior of South Korean government ICT users and analyzed the influence of these antecedents on the intention to use them. The study found that young people and employees in lower positions are more inclined to use government ICT, and that social isolation and lack of communication, leadership, and management have a negative influence. Pérez, Sánchez and Carnicer (2007) studied the benefits of and barriers to telework for employees and employers in industrial and service companies in Spain, identifying that companies that adopted telework training programs faced fewer barriers.
Regarding the benefits for the worker, Tremblay (2002) carried out research in Quebec (Canada) with public and private companies to observe the advantages and disadvantages of telework perceived by workers in the region. The author found significant gender differences in the workers' perception, although men and women agree that flexibility in working hours and not spending time in traffic are the main advantages of this form of work. Troup and Rose (2012) conducted comparative research between formal and informal telework in the public sector in Queensland (Australia) and found differences in job satisfaction and the distribution of tasks between men and women with children, suggesting that teleworking can affect work and family relationships. Some research, however, pointed to negative aspects of telework. Cooper and Kurland (2002), in a comparative study of the impact of telework on the professional isolation of employees of public and private organizations, concluded that isolation is closely linked to the activities carried out. Also, Hislop, Axtell, Collins et al. (2015) studied how the use of ICTs (cell phones) by autonomous workers influences the work experience. The authors focused on the workers' location, how they are managed, and their experiences of social and professional isolation. The results showed that cell phone use provided greater space-time flexibility and helped people cope with social isolation, but it made them feel that they were always available for work.
Telework in Brazil
In Brazil, recent studies on teleworking in the private sector show a tendency for companies to incorporate this new form of work. Mello, Santos, Shoiti et al. (2014) demonstrated that teleworking aims to reduce costs and improve productivity and teleworkers' quality of life by eliminating commuting time, as well as being used, from the point of view of social responsibility, for the social and digital inclusion of people with disabilities.
According to Gaspar, Bellini, Donaire et al. (2014), in a study with knowledge teleworkers, some of the elements that increase the chances of success in adopting this form of work are: the incentive for spontaneous telework, prior analysis of the environment in which it will be developed, the teleworker's lifestyle, their training, the social activities promoted, the stimulation of creativity, proactivity, and innovation, as well as gradual implementation.
Regarding the advantages and disadvantages of teleworking, the case study by Barros and Silva (2010) explored individuals' perceptions of the elements of telework at Shell Brazil, contributing to identifying potential difficulties experienced by employees (Box 1). Nohara, Acevedo, Ribeiro et al. (2010) addressed the perceptions of teleworkers regarding the quality of their professional life. The authors found that the main positive aspects of telework are autonomy, working time flexibility, family life, and stress reduction.
More recently, Aderaldo, Aderaldo and Lima (2017) reinforced the pleasure-suffering dichotomy in telework: on the one hand, it can bring professional maturity to young people; on the other hand, it can lead to precariousness and lack of control over the workload.
Telework in public administration
Considering the high level of use of information and communication technologies in its activities, the Brazilian state is mature enough to discuss and approve norms to promote the introduction of teleworking in public administration (SILVA, 2015). In the Brazilian public sector, Serpro was a pioneer in adopting teleworking in a comprehensive and structured way, with a pilot project in 2005 (VILLARINHO and PASCHOAL, 2016).
As for the legislation, despite the pilot project started in 2005, it was only in 2011 that an amendment to the Consolidation of Labor Laws (CLT) by Law 12551 guaranteed the same rights for teleworkers and traditional employees. The law states that teleworking requires practice, a physical structure, and a different attitude from the people involved. However, it does not specify, for example, how to evaluate the worker's attendance, which is an indicator required in the employees' annual performance evaluation. Thus, telework legislation in Brazil remains embryonic, and the legal framework must be developed to provide more security for all parties involved.
METHODOLOGY
The objective of this study is to deepen the analysis of telework in Brazilian public administration. Among the motivations for choosing this topic are the expansion of telework in Brazil in the last decades and the research gap represented by the small number of studies on telework in public administration.
The research adopted a mixed qualitative and quantitative methodology, adequate for dealing with complex social phenomena.
The quantitative approach brings data and indicators representative of the available data. The qualitative approach complements the findings, deepening the information to explain social reality (CRESWELL and CLARK, 2017).
SELECTION OF CASES
The selection of the public agencies participating in the research was based on a list published by Sobratt (2016) of agencies that had used telework for at least two years. To expand the representativeness of the research, public agencies from all regions of the country were selected and contacted: the Data Processing Service (Serpro), headquartered in Brasília with offices in 11 Brazilian state capitals; the Federal Court of Accounts (TCU), in the Federal District; the Regional Labor Court of the State of Paraíba (TRT-PB); the Court of Justice of the State of São Paulo (TJ-SP); and the Federal Revenue Department, with offices in Brasília and Rio de Janeiro. All the selected agencies required a formal request to participate in the research. However, only Serpro and the Federal Revenue Department effectively joined, responding to the questionnaires and participating in the interviews.
Serpro is a public company created by Law 4516/1964 that offers technology solutions. It has offices in all five regions of the country and currently about 10,000 employees, according to its management report (SERPRO, 2017). Its experience with teleworking started with a pilot project in 2005, with 18 people, and then expanded to 87 teleworkers.
The Federal Revenue Department is an agency created by Decree 63659/1968 that carries out inspection procedures, i.e., activities that are conceptually external to the institution. In 2012, the pilot project for teleworking was regulated through RFB Ordinance 947/2012, expanding this form of work to the areas of corporate systems development and the analysis and judgment of fiscal administrative processes, in which there are teleworkers in Brasília and Rio de Janeiro. It has about 23,000 employees and about 120 teleworkers (RECEITA FEDERAL, 2017).
Universe and sample
From a universe of about 210 teleworkers, 98 questionnaires were answered: 70 from Serpro and 28 from the Federal Revenue Department. Managers answered 28 questionnaires: 25 from Serpro and three from the Federal Revenue Department. Four interviews were conducted with managers who provided e-mail and telephone contact information in the third section of the questionnaire: three with managers working at Serpro and one at the Federal Revenue Department.
Box 3 presents an overview of the profile of the 28 managers who responded to the questionnaire. Among the respondents, twenty-two were male and six female. Most managers (39%) were between 50 and 59 years old, followed by those between 35 and 39 years old (28%). As for education, 93% had at least a higher education degree, which suggests that the managers were experienced and well educated.
Data collection instruments
From the advantages and disadvantages presented in Box 2, questionnaires were elaborated for the two groups (teleworkers and managers) to obtain a broader perception of the aspects related to telework in the agencies. A pilot test was carried out to validate the questionnaire with two professionals working at Serpro, who suggested changes that were incorporated.
After adjustments, the questionnaires were made available in a digital format and organized in three sections. The first section was designed to identify the profile of the respondent: agency, gender, age, education, length of work in the agency as a manager or teleworker, and length of work in the agency as a traditional employee. The second section included statements about costs for the teleworker and manager, infrastructure and teleworking environment, exposure to violence and pollution, working relationships with the manager and colleagues, productivity, flexibility, professional development in the agency, social and professional isolation, control and management of the teleworker, and adaptation to telework. Finally, the third section left space where respondents could offer other information about their daily work related to teleworking, and managers were requested to provide their e-mail and telephone to schedule interviews. Of the 24 advantages and 24 disadvantages identified (Box 2), 40 statements were presented to teleworkers. The other eight were evaluated only by the managers: 'costs with equipment,' 'organizational difficulties,' 'employment opportunities for people with disabilities,' 'difficulty of control,' 'errors in selecting the task,' 'lack of supervision,' 'professional isolation,' and 'psychological problems.' The advantage 'balance between work and personal life' was assessed in a complementary way to the disadvantage 'conflict between work and family life.' Managers evaluated the eight statements mentioned and another twelve that were also evaluated by teleworkers, totaling 20 statements, with emphasis on the structural and professional indicators.
For the teleworkers, there were eight statements related to structural indicators, three to physical/well-being indicators, nine to personal indicators, fifteen to professional indicators, and five to psychological indicators, totaling 40 statements. For the managers, there were six statements related to structural indicators, one to physical/well-being indicators, twelve to professional indicators, and one to psychological indicators, totaling 20 statements evaluated. As the data collection stage obtained 98 responses from teleworkers (enough for a more advanced analysis), we conducted a factor analysis (principal component analysis) in addition to descriptive statistics.
The second part of the research consisted of interviews with the managers, conducted based on a semi-structured script that explored their perceptions according to the following categories: a) implementation of telework in the public agency; b) telework selection process; c) advantages and disadvantages of teleworking for the manager and the teleworker; d) advantages and disadvantages of teleworking for the public agency; e) supervision of teleworkers; and f) workers' performance evaluation system.
Descriptive statistics
The descriptive statistics enabled quantitative analysis of the data. The profile of the participants who completed the questionnaire was verified to provide an overview of the 98 teleworkers, as well as to observe the responses regarding the advantages and disadvantages of teleworking in public administration.
Of the 98 participating teleworkers, 58 were male and 40 female; 41% were aged between 50 and 59 years old, 40% between 45 and 49, 13% between 40 and 44, 13% between 35 and 39, 12% over 60, and 6% between 30 and 34. The data suggest that maturity marks the teleworkers' profile, as well as experience, considering that 68% of the teleworkers had worked for 10 years or more at the agency. Therefore, it is fair to say that there were fewer opportunities for younger employees. As for education, 17 teleworkers (17.4%) had completed high school and 81 (82.6%) had completed at least a degree, showing that higher education seems to be a requirement to be a teleworker in the public agencies studied.
Table 1 shows the percentages of disagreement/agreement regarding the teleworkers' statements on the indicators in Box 2. For analysis, in some cases the results of 'partially agree' and 'completely agree,' as well as 'partially disagree' and 'completely disagree,' were summed. On the structural indicators, the analysis reveals that teleworkers were divided on cost reduction: 41.8% agreed that there was a reduction in the cost of water and electricity, but the same percentage disagreed. The most notable gain in the respondents' perception was the reduction of transportation costs, with 91.9% agreement.
Table 1 Evaluation of teleworkers regarding the advantages and disadvantages of teleworking
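As a minimal sketch of how the agreement percentages reported in Table 1 can be derived from raw five-point Likert responses, the following Python fragment collapses the scale exactly as described above; the response matrix and item labels are simulated placeholders, not the study's data:

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
items = ["water_energy_costs", "transport_costs"]      # hypothetical item labels
responses = pd.DataFrame(rng.integers(1, 6, size=(98, 2)), columns=items)

# 1-2 = completely/partially disagree, 4-5 = partially/completely agree
agree = responses.isin([4, 5]).mean() * 100
disagree = responses.isin([1, 2]).mean() * 100
print(pd.DataFrame({"% agree": agree, "% disagree": disagree}).round(1))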
Regarding infrastructure, 81.6% disagreed that there is a lack of structure for teleworking. However, on the item related to specific training, even though 50% agreed that there was training for teleworking, 33.6% of the respondents disagreed, which reveals a point of attention, since training can help to reduce the insecurity generated by newly adopted forms of work. Regarding the available technology, 85.7% considered it adequate.
Concerning the physical and welfare indicators, teleworkers largely agreed that they feel safer working at home (85.7%) and less exposed to violence and pollution (both 87.8%). The analysis of the personal indicators revealed that, although 62.2% disagreed, 21.4% agreed that domestic activities were a source of distraction in teleworking, which may contribute to a decline in professional performance. The topic of distraction is reinforced by the fact that 25.5% of teleworkers agreed that they could do other jobs at the same time, independently, with 24.5% preferring not to respond, which indicates that the number may be even higher, revealing a distortion of the objectives of teleworking.
Other personal indicators, such as flexible hours, less commuting, fewer interruptions, more privacy, meals at home, better quality of life, and silence in the work environment, had high agreement rates, between 80 and 95%, appearing as the main block of benefits perceived by teleworkers.
Regarding the professional indicators, freedom, motivation, flexibility, productivity, and quality of work appeared with high levels of agreement, between 74 and 94%, showing that teleworkers considered these elements benefits. However, for promotion and professional development in the agency, it is important to note that 23.4% and 14.3% of the respondents, respectively, agreed that promotion and professional development became more challenging after teleworking. Also, 12.3% agreed that they do not obtain recognition from coworkers, 26.6% considered that there is more oversight of results for teleworkers, 15.3% perceived that they had lost status after teleworking, and 14.3% feared a poor assessment. Although these are not high numbers, they are the type of finding that indicates the care needed to create mechanisms of isonomy and support for teleworkers.
Finally, the psychological indicators showed that the teleworkers in the public agencies studied did not consider that they have difficulties concentrating, prejudice to their social life, or a conflict between work and family life, since the results for these aspects were above 80% disagreement with the statements. They also affirmed having greater interaction with the family and less stress, with agreement above 85%. These results diverge in part from the findings of Soares (1995), who pointed out a possible re-adaptation of the professional to the labor context of the new millennium. They also diverge from the results of Rocha and Amador (2018), who emphasized risks of intensification of work, difficulty in separating the space and time of work, family, and personal life, and the risk of working hours extended indefinitely due to the use of digital devices.
In a complementary way, Table 2 presents the percentages of agreement/disagreement with the managers' statements, analyzed by the categories established in the section 'Data collection instruments.' Regarding the implementation of telework, although more than 70% of the managers mentioned a reduction in costs with space, and 46.1% stated that there was a reduction in costs, 28.6% reported difficulties in implementing telework.
For 64.3% of the managers there was no change in the organizational structure, and for 82.1% the necessary infrastructure was made available. However, 25% believed that there was a lack of specific training for migrating to telework, which may have made the transition harder. Also, 28.6% pointed out that there were no people with disabilities in the teleworking program, compromising the aspect of inclusion in the selection of teleworkers.
Regarding the advantages and disadvantages, 85.7% of managers believed that teleworkers are free to organize their tasks, 82.2% said that teleworkers feel motivated, and 64.3% that they have flexibility in working relationships. On the other hand, 21.5% believed that professional development within the agency was hindered, and 14.1% believed that teleworkers were not recognized by co-workers.
Finally, regarding supervision and evaluation, 71.4% considered that teleworkers are managed by goals and 53.6% considered them more productive. However, 17.9% affirmed having difficulties controlling and supervising the work, showing that some adjustments still need to be made so that this form of work can be fully used in public agencies.
Table 2 Evaluation of managers about the advantages and disadvantages of teleworking
In addition to these results, 29.3% of the managers mentioned that some people did not adapt to telework, and 7.1% mentioned cases in which psychological problems were identified, this being the aspect with the highest non-response rate (28.6%). This situation revealed a concern about the teleworkers' physical and mental health, pointing to the need for careful selection of those who are fit and, above all, willing to work under this regime.
Results of the factor analysis
The research used the software SPSS Statistics to carry out a factor analysis of the answers to the 40 statements presented to teleworkers on a Likert scale, aiming to verify whether the advantages and disadvantages identified in the literature review could be grouped as in Box 2. The scale adopted had the following values: completely disagree (1), partially disagree (2), neither agree nor disagree (3), partially agree (4), completely agree (5).
The quantitative analysis of this study consisted of evaluating the questionnaire applied to the 98 teleworkers, grouping the statements regarding the advantages and disadvantages of telework into factors created by the factor analysis. The objective of this procedure was to reduce the number of variables explaining the main advantages and disadvantages of teleworking for the teleworkers of Serpro and the Federal Revenue Department, with the lowest possible loss of information.
The factor analysis package of the SPSS Statistics software was used, observing the following criteria: the anti-image correlation matrix, which brings the values related to the observed measure of sampling adequacy (MSA); the factors' communalities; and the retention of factors with eigenvalues higher than 1.0.
The factor analysis by the principal component method evaluated the values of MSA and the factor loadings. For the sample of 40 variables, a Kaiser-Meyer-Olkin (KMO) adequacy measure of 0.751 was obtained, indicating the adequacy of the factor analysis. The Bartlett sphericity test, used to examine the hypothesis that the variables are not correlated in the population, was significant at the 1% level. The analysis generated twelve factors that corresponded to 73.84% of the data variance.
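The SPSS run itself is not reproducible here, but the reported procedure (KMO, Bartlett's test, principal-component extraction with the eigenvalue > 1 criterion) can be sketched in Python with the factor_analyzer package; the 98 x 40 response matrix below is simulated, so the statistics will not match the reported values:

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.integers(1, 6, size=(98, 40)))    # 98 teleworkers x 40 Likert items

chi2, p = calculate_bartlett_sphericity(X)             # H0: variables uncorrelated
_, kmo_total = calculate_kmo(X)                        # overall KMO (0.751 in the study)

fa = FactorAnalyzer(n_factors=12, rotation=None, method="principal")
fa.fit(X)
eigenvalues, _ = fa.get_eigenvalues()
n_retained = int((eigenvalues > 1.0).sum())            # Kaiser criterion
_, _, cum_var = fa.get_factor_variance()               # cumulative explained variance
print(f"KMO = {kmo_total:.3f}, Bartlett p = {p:.3g}")
print(f"factors retained = {n_retained}, variance explained by 12 factors = {cum_var[-1]:.2%}")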
The first factor is represented by 17 variables, corresponding to 25.87% of the explained variance, and the second factor by five variables, corresponding to 9.93%. When comparing the results obtained by the factor analysis with the grouping shown in Box 2, we verified that five of the listed indicators could be detailed further in the factors found, in order to accurately capture the variation in the respondents' opinions. From the descriptive statistics presented, the factors can be analyzed as follows: • Factor 1 - Quality of life at work: representative of all the indicators in Box 2 except the psychological ones. The disadvantage indicator "technology still does not perform as expected" was not confirmed as a disadvantage in this study, i.e., for participants the available technology was adequate.
• Factor 2 - Professional indicators of teleworking: this factor focused on the statements about the teleworker and their professional development in the agency. The disadvantage indicators evidenced in the factor were not confirmed in this study, as the majority of respondents did not consider their professional development a problem.
• Factor 3 - Personal indicators of teleworking: it brought together the facilities/difficulties between the personal and professional life of the teleworker. The disadvantage indicator "social isolation" was not confirmed as a disadvantage in this study.
• Factor 4 - Structural indicators of teleworking: represented the material and technical infrastructure of teleworking. The disadvantage indicators evidenced in the factor were not confirmed, since most respondents said that there was no lack of infrastructure nor errors in the selection of tasks.
• Factor 5 - Balance between work and family: this factor reflected the balance between better work attendance and more interaction with the family. The results validated these items as advantages of teleworking.
• Factor 6 - Relationship with the manager: this factor brought together two professional indicators, oversight and delivery of work. The disadvantage indicator "more oversight" was not confirmed as a disadvantage in this study, as most respondents (43%) affirmed that there was no more oversight for teleworkers than for other workers, though the greater number of respondents from Serpro influenced this result. In the Federal Revenue Department, teleworkers have performance targets 15% higher than those of traditional workers; if a teleworker does not meet the performance goal, they can be required to return to work in the office, as provided by RFB Ordinance 947/2012.
• Factor 7 - Internal factors influencing the work: segregated into a single factor the statement on flexibility in working relations, a confirmed advantage of teleworking.
• Factor 8 - External factors influencing the work: these are personal indicators that can positively or negatively influence the option for telework. The disadvantage indicator "distraction with domestic chores" was not confirmed as a disadvantage in this study, and the advantage indicator "self-employment" was not confirmed as an advantage; that is, most teleworkers are not able or do not want to work independently.
• Factor 9 - Home structure: these are structural indicators of the condition of working at home. The results validated these items as disadvantages of teleworking.
• Factor 10 - Adequacy to teleworking: segregated into a single factor the statement about the worker's adaptation to telework, an important item to be evaluated separately. In this study, 86.7% disagreed with the statement of non-adaptation to teleworking, but three respondents did not adapt.
• Factor 11 - Agency structure: the change in organizational structure was originally seen as a disadvantage, but this analysis showed that there were no major changes for the agencies.
• Factor 12 - Training: lack of training was originally seen as a disadvantage. The analysis showed that 50% had specific training for telework, but 33.6% did not, which shows that there is still a gap to be addressed in order to ensure greater security for the teleworker.
Results of the content analysis
The content analysis was based on an initial organization of the information collected from the four interviews with managers (which lasted an average of 40 minutes each), in order to identify the main advantages and disadvantages of telework as reported by them. Subsequently, the open questions were categorized. These categories of analysis helped in the triangulation of results and inferences.
Content analysis was used to treat the collected data (BARDIN, 2011). The data were prepared, coded, and categorized for analysis. The interview script applied to the four managers was divided into the following categories of analysis: a) implementation of telework in the public agency; b) telework selection process; c) advantages and disadvantages of teleworking for the manager and the teleworker; d) advantages and disadvantages of teleworking for the public agency; e) supervision of teleworkers; and f) workers' performance evaluation system.
For Serpro and Federal Revenue managers, the deployment of teleworking at the public agency was initially viewed with some mistrust, as described in the following statements: "There was a first movement of adherence; some employees came in, others were observing how it would be because they were afraid of losing contact with the company." (M1) "They had a very difficult time trying to measure and set goals for employees, but now that they are seeing the potential benefit of teleworking, they are accepting it." (M2) The lexical selections "fear" and "difficulty" suggest an organizational change that people were not prepared to accept. This corroborates the 2016 Management Report of Serpro (2017), in which the company places teleworking as part of its benefits plan, as an incentive for employees to join the program.
Regarding the teleworking selection process, it was possible to verify that internal openings are announced, listing several selection criteria that focus on interest and the desire to improve performance. There is also a concern with working conditions, as described in the following statements: "They consulted colleagues in the office who had an interest [in telework], they ranked those who were interested, in a way that the ones with low performance would have the chance to improve by teleworking, and then they created an initial group." (M2) "They received a registration form, [...] asked if the person has a physical disability, asked about socio-environmental conditions, whether they live alone, if they have a room in their house that can be turned into an office, if there are people living in the 'workplace', if you have pets [...]." (M3) At Serpro (2017), three openings for telework were announced: the first, for 18 positions, in 2005; the second, for 50 positions, in 2007; and finally, 110 positions in 2012. The agency currently has 87 teleworkers. As for the Federal Revenue Department, in 2014 the agency implemented a pilot project for workers in the activity of analysis and judgment of administrative processes, according to its annual activity report (RECEITA FEDERAL, 2014).
In the perception of the interviewed managers of the two agencies, the advantages identified for the manager and the teleworker were: quality of life and work, productivity, flexible hours, and the creation of standardized measurement, as described below: "You gain in the quality of life of the person, you gain in the quality of work because then they can dedicate themselves and give their best work possible." (M1) "For the worker, it is a considerable advantage: it is the adjustment of their health and quality of life; they can make their own schedule. The biggest benefit is when I can create a measurement, and with this measurement I can assess both the colleagues who are teleworking and the others in the office, because the gain in performance is comparable." (M2) However, in the perception of the managers interviewed, the disadvantages evidenced are non-adaptation, loss of connection with the company, psychological problems, social isolation, and lack of immediate communication, as observed in the discourse: "The disadvantage is that they may not be immediately available, right; there is this aspect of the person being by your side or not." (M4) "Some people were approved and did not adapt [...] they started to show signs of depression because they were isolated in a home environment. [...] They lose some of this emotional bond with the company. This factor should be observed." (M3) These discourses indicate signs of professional isolation, suggesting that the communication and engagement mechanisms used in teleworking must be intensified to bring workers closer to the organization. The main advantages identified for the agency, in the perception of the managers interviewed, are the reduction of costs with the employee, higher productivity, knowing the real demand for work, and less exposure to risks such as stress, violence, and disease, as demonstrated in the following responses: "A point of advantage would be the cost of maintaining the employee at the company's premises [...] There is a tendency for them to produce better at home, and to be less exposed to the risk of burglaries or stress when commuting." (M3) "So, this benefit of knowing the actual demand for work is even more important than the productivity gain pointed out by the colleagues." (M2) RFB Ordinance 947/2012, of the Federal Revenue Department (RECEITA FEDERAL, 2012), provides that teleworkers' performance goals should be at least 15% higher than those of traditional workers. Therefore, for this agency, knowing the real work demand is the greatest benefit, confirming the interviewee's discourse.
In the perception of the Serpro and Federal Revenue managers interviewed, the main disadvantages for the agency were: problems of technological infrastructure; control of the teleworker; differences in the relationship between traditional workers and teleworkers; and the absence of a strategy for bringing the teleworker back to traditional work, as described in the following answers: "The big question is to create a quick mechanism so that you can get in touch with each other as if the person were in the company's office. They have problems accessing some management systems and internal services." (M1) "Teleworking is a lot of work for the management... for a coworker who went out on telework, I had to use the work of almost two employees only to control the teleworker's activity." (M2) "Sometimes you do not have many mechanisms to... 'undo' [the option to become] a teleworker." (M3) "Several managers saw this as a privilege and could not cope with the 'jealousy' of people who could not enter this form of work." (M4) RFB Ordinance 947/2012 provides for project managers to control the activities of the teleworker and for the forms of disconnection from this form of work, while at Serpro there is no such instruction. Regarding supervision, the managers interviewed affirmed that there is little difference in these activities specifically for teleworkers. The main supervision mechanisms were the appointment of a telework project manager and the obligation to come to Serpro's office every fortnight, according to the following responses: "You can request [the teleworker to come], but this is rare. Their presence is required once a fortnight." (M3) "Serpro has a system of performance management, so it demands that you have a formal plan of work to evaluate during a specific period. Thus, we use this plan to carry out these [evaluation] activities as if the employee were in the company." (M1) "Productivity [...] every area that has an employee in teleworking has someone we call the telework project manager, who is responsible for passing the tasks to the employee, supervising the results and assessing productivity." Finally, for the performance assessment, the agencies seek to treat all workers the same. It is clear to the interviewees that there is no difference in the evaluation system applied to teleworkers, as stated in the following narratives: "[They] are treated equally [under] the norms on this matter. There is no difference between an employee that is inside the company and one that is doing telework." (M1) "The measurement adopted is the same. They do the same thing. The difference is the target: in the case of teleworking, the expected performance is 15% higher [than the performance of the traditional workers], but the work process is identical." (M2)
DISCUSSION
The results show that teleworkers consider the main advantages of the approach to be the reduction of costs with commuting and food, safety, less exposure to violence and pollution, privacy, greater interaction with the family, and quality of life, focusing strongly on the workers' individuality. Among the aspects related to professional activity, autonomy, motivation, productivity, flexible hours, fewer interruptions, and quality of work were considered gains. In this sense, our findings corroborate the studies by Costa (2013) and Mello, Santos, Shoiti et al. (2014), who observed better quality of life, autonomy, and motivation as advantages of teleworking.
On the other hand, the findings of this research point to disadvantages such as technological infrastructure problems (especially lack of specific training), non-adaptation to telework, loss of connection with the company, professional isolation, lack of immediate communication, loss of status, fear of poor evaluation, and lack of recognition, compromising promotion and professional development in the agency. In this sense, our study corroborates the work of Caillier (2012) and De Vries, Tummers and Bekkers (2017, 2018), who identified negative effects of teleworking, including demotivation, professional isolation, and less organizational commitment on the days when employees worked entirely at home. In addition, it was observed that domestic chores may hinder work performance, that there is a need to raise the family's awareness, and that there is a temptation to do other jobs independently. These negative aspects constitute points of attention that must be considered in the teleworking process, observing the characteristics of the public agency.
In the research with the managers, the study revealed as advantages of teleworking: saving time, cost reduction, the creation of standardized measurements, and knowledge of the real demand of work. The main disadvantages identified were: difficulty in communication and control of the teleworker, differences in the relationship between the traditional worker and the teleworker, workers who do not adapt, psychological issues, and the teleworker's return to traditional work.
After analyzing the different aspects involved in this research, the results point to a way of developing teleworking practices through two lines of action. First, creating mechanisms that help balance the teleworkers' professional activities and personal life, giving greater attention to infrastructure, technology, and psychological support. Second, introducing management and control tools that aim to minimize the managers' lack of practice in managing teleworkers, seeking isonomy in recognizing and evaluating them.
FINAL CONSIDERATIONS
Public administration has been seeking ways to be more efficient in its various activities. In this context, teleworking has been adopted to reduce costs, make better use of time, and increase productivity. The evidence obtained in this study, however, is that, despite notable advances, there are still challenges to be overcome in order to reach the full potential of teleworking.
Freitas (2008) points out that there is a state bureaucracy that must be overcome to give rise to new management processes, a situation that cannot be considered only in moments of crisis or institutional restructuring. Several institutions are adopting teleworking, but the practice is still not widespread in state and municipal public agencies.
A limitation of this study lies in the fact that, of the five public agencies contacted, only two submitted to the research. Nevertheless, this article points out aspects of teleworking that had not been discussed in the literature. A suggestion for future work is to expand the study to other public agencies and other states, to gain a better understanding of the impacts of teleworking on Brazilian public administration.
"year": 2020,
"sha1": "94155dbc366ac2856ecb84be30c5ee268b02c3dd",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/cebape/v18n1/en_1679-3951-cebape-18-01-28.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e90d0e2761ae7c0e18bdca2bf7f6c8bb1cb246fd",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Business"
]
} |
256204325 | pes2o/s2orc | v3-fos-license | Generation of a recombinant chickenized monoclonal antibody against the neuraminidase of H9N2 avian influenza virus
We previously reported a monoclonal antibody (mAb), 1G8, against the neuraminidase (NA) of H9N2 avian influenza virus (AIV) with significant NA inhibitory activity. To generate a recombinant chickenized mAb (RCmAb) against the NA of H9N2 AIV for passive immunization in poultry, the gene of the fragment of antigen binding (Fab) of mAb 1G8 was cloned and fused with the fragment crystallizable (Fc) gene of chicken IgY. The RCmAb 1G8 was expressed in COS-1 cells and could be detected in the cell culture supernatant. The results of NA inhibition tests of the RCmAb 1G8 in an enzyme-linked lectin assay (ELLA) and a microneutralization (MN) assay showed that the RCmAb 1G8 maintained significant NA inhibitory activity and neutralizing ability. This is the first chickenized antibody against AIV, and it would be a good candidate for passive immunization in poultry.
H9N2 avian influenza virus (AIV) has been prevalent in China since the first outbreak in Guangdong Province in 1992. H9N2, a low-pathogenicity virus in domestic poultry, causes mainly respiratory symptoms, immunosuppression, and a decline in egg production. Great economic losses can be caused when poultry are co-infected with other pathogenic organisms (Horwood et al. 2018; Pan et al. 2012). H9N2 is also a gene donor to other influenza viruses, such as H7N9, which is highly pathogenic to humans. Control of H9N2 in avian flocks is very important not only for animal health but also for public health.
AIV carries two surface proteins on the virus particle: haemagglutinin (HA) and neuraminidase (NA). Both can induce neutralizing antibodies in chickens. Specific antibodies induced by NA contribute mainly to immunity by limiting viral replication and disease severity. In addition, NA inhibitors, such as oseltamivir, zanamivir, peramivir, and laninamivir, and a recently approved polymerase acidic (PA) inhibitor, baloxavir marboxil, are currently used to treat influenza virus infections. However, the emergence of antiviral drug resistance is a major concern for these inhibitors (Hussain et al. 2017).
Antibodies against NA can inhibit NA enzyme activity and neutralize viruses through antibody-dependent cell-mediated cytotoxicity (ADCC) and complement-dependent cytotoxicity (CDC) (Chen et al. 2018; Eichelberger and Wan 2014). Immunization with inactivated influenza vaccine induces far fewer NA-specific antibodies than natural infection (Chen et al. 2018). How to induce high concentrations of antibodies to NA in chickens needs to be investigated. Recently, breakthroughs in antibody phage display and monoclonal antibody (mAb) development have contributed to the development of humanized antiviral antibodies for passive immunization (Chen et al. 2017; Dong et al. 2013; Rudraraju and Subbarao 2018). However, few antibodies have been chickenized for passive immunization of chickens (Roh et al. 2015). We previously reported a mouse-derived mAb, 1G8, which has a significant inhibitory effect on the NA enzyme activity of H9N2 AIV (Wan et al. 2016). Additionally, mAb 1G8 could react with all H9N2 AIV strains isolated in eastern China. In the present study, we generated a recombinant chickenized monoclonal antibody (RCmAb) by fusing the gene of the fragment of antigen binding (Fab) of mAb 1G8 with the fragment crystallizable (Fc) gene of chicken IgY. This study is the first report of an RCmAb against AIV, which would contribute to developing anti-AIV antibodies for application in chicken passive immunization.
RT-PCR and variable region sequence analysis
Total RNA of 1G8 hybridoma cells was extracted with an AxyPrep multisource total RNA miniprep kit (Corning, Jiangsu, China). Using the cDNA from reverse transcription as the template, the variable region genes of the light chain and heavy chain were amplified with primers (Table 1). PCR products were purified with a gel extraction kit (QIAGEN, Dusseldorf, Germany) and then ligated into the pGEM-T Easy vector (Promega, Wisconsin, USA) for sequencing.
The nucleotide sequences of the light chain (Accession number: MT490634) and heavy chain (Accession number: MT490635) were analyzed online by IgBLAST (NCBI). Then, the 3-D structure of the 1G8 Fab was constructed with ABodyBuilder (http://opig.stats.ox.ac.uk/webapps/newsabdab/sabpred/abodybuilder/, OPIG) by submitting the amino acid sequences of both the light chain and heavy chain.
Plasmid construction
According to the types of top V genes in IgBLAST, we searched all V genes of the same type in the international immunogenetics information system (IMGT, www.imgt.org). To amplify the Fab gene of mAb 1G8, primers were designed based on the genes of Accession numbers AJ279029.1, BC049234.1 and LC110289.1 in GenBank for the light chain, and Accession numbers BC085312.1, M19899.1 and X05878.1 in GenBank for the heavy chain (Table 2). First, both the light chain gene and the Fd gene with signal peptide sequences were amplified from cDNA derived from hybridoma cells. And the Fc gene of chicken IgY
Recombinant chickenized mAb 1G8 (RCmAb 1G8) expression
To generate the RCmAb, two plasmids, pCAGGS-LC and pCAGGS-FdchFc, were co-transfected into COS-1 cells at a ratio of 2:3 as reported (Pham et al. 2006; Smith et al. 2009). The culture medium of transfected cells was replaced with Opti-MEM (Thermo Fisher Scientific, Massachusetts, USA) after washing 3 times with phosphate-buffered saline (PBS) at 6 h post-transfection. The plasmid pCAGGS alone was also transfected into COS-1 cells as a negative control. The supernatants of transfected cells were collected at 48 h post-transfection for further assays.
Western blot analysis
The supernatants of co-transfected cells were collected and treated with loading buffer with or without DL-dithiothreitol (DTT). Treated supernatants were used for SDS-PAGE and then transferred to nitrocellulose membranes (GE, Massachusetts, USA) for Western blot analysis as reported (Rüdiger Ridder 1995). The membrane was directly incubated with an HRP-conjugated goat anti-chicken IgY (H+L) antibody (Jackson Immunoresearch, Pennsylvania, USA) for 1 h at 37 °C. After washing with PBST, chemiluminescent signals were detected in a FluorChemE imaging system (Protein Simple, California, USA) using Clarity Max Western ECL Substrate (Bio-Rad, California, USA).
Immunofluorescence assay (IFA)
To determine the reactivity of the recombinant antibody to H9N2 AIV, MDCK cells infected with X1 virus at 0.1 MOI were fixed with cold acetone-alcohol.
Enzyme linked lectin assay (ELLA)
The inhibition of the NA enzyme by the recombinant antibody was measured by ELLA (Couzens et al. 2014). First, fetuin was coated onto the surface of a 96-well plate. The supernatants of co-transfected cells were serially diluted from 2⁻¹ to 2⁻¹⁰ and mixed with a predetermined amount of X1 virus. Purified mAb 1G8 was diluted and mixed with the virus in the same way, from an initial concentration of 20 µg/mL, as a positive control. The mixtures were individually incubated in fetuin-coated wells for 16-18 h. After washing six times with PBST, horseradish peroxidase-conjugated peanut agglutinin (PNA-HRP) (Sigma-Aldrich, Shanghai, China) was added and incubated at room temperature for 2 h. The plate was washed another six times with PBST, followed by the addition of tetramethylbenzidine (TMB) substrate and incubation for 15 min. Colour development was stopped with 1% SDS and read with an ELISA reader (BioTek, Vermont, USA) at OD650.
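The ELLA readout is an OD650 value per well; the short sketch below is a generic, hedged illustration of how percent NA inhibition is commonly derived from such readings by normalizing to virus-only and background wells. The function name and the normalization are assumptions for illustration, not taken from the paper.

```python
# Hypothetical helper: percent inhibition of NA activity from ELLA OD650
# readings, normalized to virus-only (maximum activity) and background
# (no-virus) wells. A common convention, not the authors' exact calculation.
def percent_na_inhibition(od_sample, od_virus_only, od_background):
    activity = (od_sample - od_background) / (od_virus_only - od_background)
    return 100.0 * (1.0 - activity)

print(percent_na_inhibition(od_sample=0.4, od_virus_only=1.6, od_background=0.1))  # 80.0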
Microneutralization (MN) assay
The supernatant of co-transfected cells was diluted fivefold with Opti-MEM containing 2 µg/mL trypsin treated with N-p-tosyl-L-phenylalanine chloromethyl ketone (TPCK) (Sigma-Aldrich, Shanghai, China). The supernatant of cells transfected with pCAGGS was diluted in the same way as the negative control. Purified mAb 1G8 at a concentration of 5 µg/mL was used as a positive control. All samples were separately incubated with 10 TCID50 of the X1 virus at 37 °C for 30 min and finally added to MDCK cells in 6-well plates. Three days later, the supernatants were collected and used for detection of the HA titer and TCID50. The HA titer was measured with 0.5% chicken red blood cells as previously described (Jin et al. 2019). The TCID50 of the virus in the supernatant was tested by a previously described method (Jin et al. 2019) and calculated by the Reed-Muench method.
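The Reed-Muench calculation cited here is a standard endpoint interpolation. The sketch below is not the authors' code; it is a generic Python illustration under assumed inputs (per-dilution counts of infected wells out of a fixed number of replicates, at ten-fold dilutions ordered from most to least concentrated, with the series assumed to bracket 50%).

```python
import numpy as np

def reed_muench_log10_tcid50(infected, n_wells, log10_dilutions):
    """log10 of the 50% endpoint dilution (Reed-Muench), ten-fold steps assumed."""
    infected = np.asarray(infected, dtype=float)
    cum_inf = np.cumsum(infected[::-1])[::-1]      # infected accumulate toward concentrated
    cum_uninf = np.cumsum(n_wells - infected)      # uninfected accumulate toward dilute
    pct = 100 * cum_inf / (cum_inf + cum_uninf)    # cumulative % infected per dilution
    i = int(np.nonzero(pct >= 50)[0][-1])          # last dilution with >= 50% infected
    pd = (pct[i] - 50) / (pct[i] - pct[i + 1])     # proportionate distance to the 50% point
    return log10_dilutions[i] - pd

# e.g., 8 wells per dilution at 10^-1 ... 10^-6 (illustrative counts only):
print(reed_muench_log10_tcid50([8, 8, 6, 3, 1, 0], 8, np.arange(-1, -7, -1)))  # ~ -3.71
```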
Characterization of variable region genes of mAb 1G8
Sequences of the variable region of mAb 1G8 were analyzed online with IgBLAST (NCBI). The types of top V genes were IGKV6-32*01 for the light chain and IGHV3-1*01 for the heavy chain. Three complementarity-determining regions (CDRs) in the light chain and heavy chain were identified. For the light chain, CDR1 consists of "QSVNND", and CDR2 consists of only three amino acids, "YAS" (Fig. 1a). The CDR3 of the light chain is made up of "QQDYTSPFT". In the heavy chain, CDR1 consists of "GYYITSDFT", and CDR2 consists of "IHYNGNS" (Fig. 1b). The longest CDR3 of the heavy chain is made up of "AKYSFGNYEFFDV". The 3-D structure of the 1G8 Fab was generated with ABodyBuilder (OPIG) by submitting amino acid sequences of the Fab of both the light chain and heavy chain. The shortest CDR2 of the light chain is located in the middle of the variable region and is tightly connected with the other CDRs (Fig. 1c, d). Interestingly, the longest CDR3 of the heavy chain is also located in the middle of the variable region, which is complementary to the shortest CDR2.
RCmAb 1G8 was expressed in COS-1 cells
RCmAb 1G8 secreted into supernatant was analysed by Western blot. RCmAb 1G8 tetramers (over 180 kDa) and recombinant heavy chain dimers (130 kDa) were detected in the samples without DTT (Fig. 2a). This result indicated that the light chain and recombinant heavy chain could be expressed in the cells. The expressed protein could form tetramers and dimers and be excreted out of the cell. A weak band for the recombinant heavy chain monomer (65 kDa) was also detected in the samples with DTT (DTT+ in Fig. 2a).
RCmAb 1G8 reacted with NA of H9N2
To determine whether the RCmAb 1G8 could specifically bind NA of H9N2 AIV, MDCK cells infected with X1 virus and COS-1 cells transfected with the pCAGGS-NA (X1) plasmid were incubated with the expressed RCmAb 1G8. Fluorescence imaging showed that the RCmAb 1G8 could react with the X1 virus (Fig. 2b) and NA protein of X1 virus (Fig. 2c), while the supernatant of COS-1 cells transfected with the pCAGGS plasmid showed no signal. Interestingly, the RCmAb 1G8 could also react with all H9N2 AIV strains isolated from 1999 to 2019 in eastern China, as mAb 1G8 did.
NA activity inhibition by the RCmAb 1G8 was measured by ELLA. The results showed that, similar to mAb 1G8, the RCmAb 1G8 could inhibit NA activity (Fig. 3a). The inhibition of NA activity by the RCmAb 1G8 in supernatant from co-transfected cells was equivalent to that by 0.156 µg/mL mAb 1G8 in the ELLA.
Fig. 1 Sequences of the 1G8 variable region and 3-D structure model of the 1G8 Fab. The variable region sequences of the light chain (a) and heavy chain (b) are shown with underlined CDRs. The image of the Fab structure was constructed with DeepView software (Swiss PDB Viewer). The model is displayed in top view (c) and side view (d) with cartoon and surface rendering. The light chain is shown in white, with CDR1 marked in blue, CDR2 marked in cyan and CDR3 marked in green. The heavy chain is shown in grey, with CDR1 marked in red, CDR2 marked in purple and CDR3 marked in yellow.
RCmAb 1G8 neutralized H9N2 virus in MDCK cells
Strong neutralization of H9N2 AIV by the RCmAb 1G8 was also demonstrated in an MN assay. The HA titer of virus in the RCmAb 1G8 group (which is equivalent to 0.031 µg/mL mAb 1G8 in the ELLA) was only half of that in the negative control group (Fig. 3b). The mAb 1G8 group (5 µg/mL) was HA negative. Consistent with the HA results, no virus was detected in the supernatant medium of the mAb 1G8 group in the TCID 50 test (Fig. 3c). The RCmAb 1G8 could also significantly inhibit the growth of the X1 virus, which indicated that NA activity was inhibited by the RCmAb 1G8 and that most nascent viruses could not be released from infected cells.
Discussion
There are two surface glycoproteins on influenza A, HA and NA, which have essential functions in the influenza life cycle. HA mediates binding of the virus to the host cell, and NA cleaves off terminal sialic acids from glycans on the host cell and on the emerging virions, thereby enabling release of progeny viruses from the host cell. NA exhibits a slower antigenic drift that is generally discordant with that of HA (Kilbourne 1990). Therefore, antibody responses against NA typically show broader cross-reactivity than those against HA (Chen et al. 2018). NA is a validated drug target, and several small molecules that inhibit its activity are licensed as influenza therapeutics.
H9N2 AIV is a major epidemiological pathogen in domestic poultry in China and a great threat to domestic poultry and public health (Huang et al. 2013;Jiao and Liu 2016;Wang et al. 2015). Commercial H9N2 vaccines have been widely applied to elicit protective antibodies but have poor effects on preventing infection with current H9N2 AIV strains. Many broad-spectrum antibodies against AIV have been generated and applied in passive immunization in animal experiments as candidates for human-use antibody drugs (Doyle et al. 2013;Kallewaard et al. 2016). In this study, we generated the first chickenized mAb against the NA of H9N2 AIV.
Chickenized antibodies can be generated by fusion with the Fc of chicken IgY or by CDR grafting. Although chickenized antibodies produced by CDR grafting were proven to have reduced immunogenicity in chickens, it is difficult to generate similar variable region structures because of the conserved framework regions (FRs) in chicken IgY (Roh et al. 2015). The Fab gene of mouse-derived mAb 1G8 was fused with the chicken Fc gene and expressed in a secretory form in COS-1 cells.
"year": 2020,
"sha1": "a58ab9098a76d821df8345d91b4a11348da24e62",
"oa_license": "CCBY",
"oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/s13568-020-01086-4",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a58ab9098a76d821df8345d91b4a11348da24e62",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
The Impacts of Chrysanthemum indicum Extract on Oxidative Stress and Inflammatory Responses in Adjuvant-Induced Arthritic Rats
Chrysanthemum indicum has been used as a therapeutic agent against inflammation, hypertension, and respiratory conditions for many years. The aim of this research was to examine the antioxidant effects of Chrysanthemum indicum extract (CIE) on oxidative stress and inflammatory responses in adjuvant-induced arthritic (AA) rats. Forty rats were categorised into 4 groups according to a completely randomized approach: Group I involved normal control rats (CTRL) that received a basal diet; Group II involved arthritic control rats (CTRL-AA) that received the same diet; Group III involved rats that received a basal diet and 30 mg/kg CIE; and Group IV involved arthritic rats with the same diet as Group III rats (CIE-AA). After injection with complete Freund's adjuvant, body weight, arthritis score, and the serum levels of TNF-α, IL-1β, IL-6, myeloperoxidase (MPO), malondialdehyde (MDA), superoxide dismutase (SOD), and glutathione peroxidase (GSH-PX) were assessed. The results demonstrated that CIE delayed the onset of arthritis and decreased the clinical arthritis severity score (P < 0.05). Observations of CIE-AA and CTRL-AA rats demonstrated that CIE alleviated oxidative stress and inflammatory responses in the CIE-AA group. In conclusion, CIE alleviated oxidative stress and inflammatory responses, thereby highlighting its potential use as a candidate for clinical treatment of rheumatoid arthritis.
Introduction
The cardinal symptoms of rheumatoid arthritis (RA), an autoimmune disease, are chronic synovitis and the impairment of articular cartilage and the underlying bone in joints. It is classified as a systemic inflammatory disease which targets the joints by generating proliferative synovitis. Over time, RA has the potential to lead to the malformation or destruction of affected joints, which has been found to lead to working disability and higher mortality rates [1]. Research shows that approximately 1% of the global population suffers from the condition, and it has been identified in patients aged 35 to 50 years [1].
Disequilibrium between pro- and anti-inflammatory cytokines has been found to initiate autoimmunity and lasting inflammation, and this is the factor that contributes to RA's characteristic joint impairment [2]. After observing the way in which joint destruction and levels of proinflammatory cytokines in the serum or arthritic tissues of RA patients are positively related, researchers identified that a range of proinflammatory cytokines, including tumour necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and IL-6, perform a significant function in the condition's biological process [3].
For transgenic mice displaying the overexpression of TNF-α, acute inflammatory responses and the rapid onset of destructive arthritis are consistently observed [4]. In contrast, IL-1-deficient mice [5] or IL-6-deficient mice [6], in the context of experimental animal models of human RA, display reduced synovial infiltrate and tissue impairment. It is worthwhile to note the evidence suggesting that biologic medications targeting TNF-α, IL-1, and IL-6 slow the radiographic progression of joint disease at the same time as they hinder the condition's activity [7]. Also notable are the findings demonstrating that nuclear factor-κB (NF-κB), which is primarily constituted of the p65 and p50 complex, performs an important function in transcriptionally regulating proinflammatory gene expression in the context of RA [8]. Furthermore, it is necessary for the inhibitor of NF-κB (IκB) to degrade if NF-κB is to be activated, and this drives the nuclear transport of NF-κB p65 [9]. The regulation of cytokine gene expression takes place based on NF-κB activation; in turn, cytokines can act through their receptors to facilitate IκB degradation and NF-κB activation. The consequence of this is the enhancement of RA's inflammation development. An important finding to consider for the present study is the way in which significant oxidative stress, in a similar way to the rise of proinflammatory cytokines, functions as a central risk factor for joint damage in RA. The stimulation of inflammatory cells, including neutrophils and macrophages, to discharge reactive oxygen species (ROS) into the synovial fluid takes place in view of cytokine overproduction, the relevance of which is emphasised insofar as this serves as an intermediary of tissue damage [10].
Contemporary clinical practice primarily draws on disease-modifying antirheumatic drugs, among other means, for RA treatment. Other commonly employed agents range from nonsteroidal anti-inflammatory drugs and corticosteroids to biologic medications. In view of this, it is important to acknowledge that several damaging secondary effects result from the use of many of these approaches, the most notable of which include ruptured gastrointestinal blood vessels, cardiovascular complications, and liver conditions [11,12]. Given the commonality of these side effects, survey evidence has been published to suggest that 60-90% of individuals receiving these treatments look for supplementary or substitute therapies [13].
Innovative medications sourced from curative plants have historically offered significant treatment options for various conditions, including RA. Consequently, researchers have taken up the attempt to identify botanically derived drugs. The inflorescence or bud of Chrysanthemum indicum has found extensive usage throughout the historical practice of traditional Chinese medicine (TCM), and it has primarily been applied in treating inflammation, hypertension, and respiratory diseases. Phytochemical profiling of CIE has identified flavonoids, terpenoids, and phenolic compounds [14], and other studies have published findings highlighting its antiviral, antioxidant, anti-inflammatory, antibacterial, and immunomodulatory characteristics [15]. Given the organic nature of the therapeutic agent in combination with its widespread usage in traditional medicine and the culinary sphere, Chrysanthemum indicum constitutes a promising candidate for alternative medical practice, particularly regarding the alleviation of RA's symptoms and other organ manifestations.
Therefore, it is important to address the gap in the literature regarding the anti-inflammatory and immunomodulatory features of the plant's active components, and this constitutes the primary intention of this study. Specifically, the author will examine the impact that CIE has on paw swelling, joint impairment, the generation of inflammatory mediators, and NF-κB activation in adjuvant arthritis (AA) rats.
Chrysanthemum indicum Extract Preparation.
After gathering Chrysanthemum indicum Linné (Asteraceae) flowers at a nearby market, authentication was conducted by examining microscopic and macroscopic features. 70% ethanol (with a 2-hour reflux) was used to extract the dried flowers of Chrysanthemum indicum two times, and the extract was subsequently concentrated under reduced pressure. Prior to storage at 4 °C, the concentrated extract was filtered and lyophilized. The yield of dried extract from the initial resources equaled 12.35%. The lyophilized powder was then suspended in 10% dimethyl sulfoxide (DMSO) to lyse the cells, filtered with a 0.2 µm syringe filter, and subsequently lyophilized.
Laboratory Animals and Adjuvant Arthritis.
After obtaining 40 2-month-old adult male Wistar rats weighing between 180 and 200 g from the Tianjin Laboratory Animal Centre (Tianjin, China), conventional environmental conditions were used for maintenance: namely, a 12-hour light/dark cycle, 25 ± 2 °C, and 50% humidity. Food and drinking water were freely available to the animals. The research protocol received approval based on Tianjin Hospital's regulatory requirements for the care and use of experimental animals, and the experiment was conducted in accordance with the relevant provisions.
A completely randomized approach was used to allocate 10 rats to each of the 4 groups. The features of each group are listed as follows: the first group (Group I) involved normal control rats (CTRL) managed with a basal diet; the second group (Group II) involved arthritic control rats (CTRL-AA) managed with the same diet; the third group (Group III) involved rats managed with a basal diet and 30 mg/kg CIE; and the fourth group (Group IV) involved arthritic rats managed with the same diet as Group III rats (CIE-AA).
These distinct diets were maintained for each group for a period of 7 days and, following this, the arthritic rats in the CTRL-AA and CIE-AA groups were anaesthetised using isoflurane. Arthritis was induced with one intradermal injection of 4 mg heat-killed Mycobacterium butyricum in Freund's adjuvant with 0.1 ml of paraffin oil. At 7-day intervals, the body weight was logged three times from day 0 to day 14 following induction by injection. The rats were sacrificed after the treatment process had finished on day 14, and then the arthritis index for each specimen was evaluated by examining the paws. The evaluation scale ranged from 0 to 4, where 0 was equivalent to no erythema or swelling; 1 to moderate erythema or swelling of a single or multiple digits; 2 to a wholly swollen paw; 3 to erythema and ankle swelling; and 4 to ankylosis (namely, the inability to bend the ankle). A severity score was derived as the sum of each paw's score. Day 14 also involved the extraction of joint tissues for homogenate preparation; after extraction, the tissues were immediately frozen and stored at −80 °C for further analysis.
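As a tiny worked example of this scoring rubric (function name and sample scores are illustrative, not from the paper), each paw is rated 0-4 and the severity score is the sum across the four paws, so the maximum is 16:

```python
# Illustrative only: composite arthritis severity score as described above.
def severity_score(paw_scores):
    assert len(paw_scores) == 4 and all(0 <= s <= 4 for s in paw_scores)
    return sum(paw_scores)

print(severity_score([2, 3, 1, 0]))  # -> 6 out of a possible 16
```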
Measurement of Serum Indicators
ELISA determination kits were employed to identify the levels of TNF-α, IL-1β, and IL-6, and this was carried out based on the standard curve (Beyotime Institute of Biotechnology, China) [16]. A Bio-Rad microplate reader (Bio-Rad Laboratories, Inc., Hercules, CA, USA) was used to log the optical density at 405 nm, and the process articulated by Liu et al. [17] was conducted to analyse myeloperoxidase (MPO), malondialdehyde (MDA), superoxide dismutase (SOD), and glutathione peroxidase (GSH-PX) activity.
Preparation of Whole Cell Extract for NF-κB Determination
Following the experimental process, 10 mg joint tissue samples were extracted. Incubation then took place with 100 µL of tissue lysis buffer (Thomas Scientific, Swedesboro, NJ, USA) for a duration of 30 minutes on ice. A BCA kit (Bio-Rad Laboratories, Inc.) was used to assess the protein concentration, and a TransAM NF-κB p65 Transcription Factor Assay Kit facilitated the monitoring of NF-κB activation. Quantity One software, version 4.4.0 (Bio-Rad Laboratories, Inc.), was employed to measure absorbance at 450 nm. The recorded outcomes were expressed as absorbance per milligram of total protein.
Statistical Analysis
SPSS 21.0 for Windows was used to facilitate data analysis with a nonparametric Mann-Whitney test. For each of the four groups, the researcher carried out a one-way analysis of variance, and the results were expressed as mean ± standard error of the mean (SEM). Intergroup comparative analysis was facilitated by employing the post hoc least squared differences (LSD) test, with P < 0.05 being regarded as statistically significant.
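As a hedged sketch of this analysis pipeline, the block below substitutes open-source routines for SPSS. The group arrays are simulated placeholders, and unadjusted pairwise t-tests stand in for the LSD procedure (true Fisher LSD pools the ANOVA error term, so this is an approximation).

```python
import numpy as np
from itertools import combinations
from scipy.stats import f_oneway, mannwhitneyu, ttest_ind

rng = np.random.default_rng(0)
groups = {name: rng.normal(loc, 1.0, size=10)      # placeholder data, 10 rats per group
          for name, loc in [("CTRL", 5), ("CTRL-AA", 8), ("CIE", 5), ("CIE-AA", 6.5)]}

print(f_oneway(*groups.values()))                         # one-way ANOVA across 4 groups
print(mannwhitneyu(groups["CTRL-AA"], groups["CIE-AA"]))  # nonparametric comparison
for (na, a), (nb, b) in combinations(groups.items(), 2):  # LSD-style unadjusted post hocs
    print(na, "vs", nb, ttest_ind(a, b))
```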
Results
Figure 1(a) presents the observed increase in body weight over the course of the experiment, and it shows that the weight of the arthritic control rats decreased significantly while the normal control rats' body weight increased (P < 0.05). On day 7, the recorded body weight increase for the CTRL-AA group was considerably lower when compared to the CTRL and CIE groups (P < 0.05), and the body weight increase for the CIE-AA group was not significantly different from the other three. On day 14, the recorded body weight increase for the CTRL-AA and CIE-AA groups was considerably lower when compared to the other two groups (P < 0.05), while the body weight for the CIE-AA group was considerably lower than that of the CTRL-AA group (P < 0.05, see Figure 1(a)). Figure 1(b) displays the progression of the arthritis score index. Over the course of the initial phase of the condition (namely, until the eighth day following adjuvant injection), an examination of the arthritic rats revealed a minor inflammatory reaction in the injected paw; the arthritis scores for the injected paws ranged between 2 and 3 for the CTRL-AA and CIE-AA groups. Inflammation was seen to commence on day 9, and for the CTRL-AA group, arthritis scores rose to the highest end of the scale on day 14. For the CTRL-AA and CIE-AA groups, the arthritis scores were notably greater than those of the other two groups from day 7 (P < 0.05), and the CTRL-AA group's arthritis scores were notably greater than those of the CIE-AA group at day 10 (P < 0.05).
As demonstrated in Figures 2(a) and 2(b), when considering the CTRL and CIE groups in relation to the CTRL-AA and CIE-AA groups, the MPO and MDA levels in the serum were significantly higher for the latter (P < 0.05). Additionally, the CIE-AA group displayed notably lower MPO and MDA levels (P < 0.05) when considered in relation to the CTRL-AA group (Figures 2(a) and 2(b)). As seen in Figures 2(c) and 2(d), a significant inhibition in the levels of GSH-Px and SOD (P < 0.05) in the CTRL-AA and CIE-AA groups was observed by way of comparison with the CTRL and CIE groups. Furthermore, when comparing the CIE-AA group with the CTRL-AA group, the activity of GSH-Px and SOD for the former was notably higher than the latter (P < 0.05).
This study took measures to derive a quantitative measure of the levels of TNF-α, IL-1β, and IL-6, primarily because this information is key to an accurate understanding of the function that Chrysanthemum indicum performed in the experimental rat model. Figure 3 shows that, for the CTRL and CIE groups, TNF-α, IL-1β, and IL-6 levels were virtually identical. Dissimilarly, when comparing the CTRL-AA group to the CTRL and CIE groups, it was observed that TNF-α, IL-1β, and IL-6 levels were greater for the former (P < 0.05). Therefore, the results indicate a clear link between the consumption of CIE by the CIE-AA rats and a fall in TNF-α, IL-1β, and IL-6 levels (P < 0.05).
Through the assessment of nuclear NF-κB (p65), it was possible to identify NF-κB activation in cell extracts from joints. In turn, this information facilitated the identification of whether suppression of NF-κB activation pathways resulted in the protective impact of isoflavones with regard to arthritis. Figure 4 illustrates that, for the CTRL-AA and CIE-AA groups, nuclear NF-κB (p65) levels were considerably greater than in the other two groups (P < 0.05). In addition, the collected data demonstrated that when comparing the CTRL-AA group with the CIE-AA group, nuclear NF-κB (p65) was considerably greater in the former (P < 0.05).
Discussion
Aside from a recently conducted study, which found evidence to suggest that the butanol-soluble component of Chrysanthemum indicum inhibited auricle edema in mice [18], research addressing the anti-inflammatory function of Chrysanthemum indicum and its molecular mechanism is not extensive. This research, which demonstrates that Chrysanthemum indicum extract was a key contributing factor in facilitating a rise in body weight gain and a reduction in arthritis scores, is therefore a valuable addition to the extant literature.
Dietary factors play essential roles in body health, disease status, and inflammatory responses [19]. This study has examined the potential for CIE to play an important role in treating and preventing adjuvant arthritis, and the results are promising regarding its role as a therapeutic agent against inflammatory conditions. Lee et al. examined the phagocytic activity of macrophages using a mouse model, and the findings revealed that CIE had a positive impact [20]. Another study, that of Cheng et al., published further findings to support the anti-inflammatory features of CIE and its fractions; specifically, Cheng et al.'s study addressed mouse auricle edema [18]. Nevertheless, although the extant findings provide insight into the degree to which Chrysanthemum indicum has anti-inflammatory properties, information regarding its precise mechanisms in in vivo model systems is scant. Our previous research showed that CIE had beneficial effects on the inflammatory responses and oxidative stress in an ankylosing spondylitis model in mice [21]. However, the extant literature contains no studies which address these anti-inflammatory impacts in an adjuvant arthritis model. Consequently, the present research constitutes the only study published to date on the topic of CIE's anti-inflammatory activity and its action mechanisms in adjuvant arthritis rats.
Adjuvant arthritis is a widely used rodent model in studies addressing rheumatoid arthritis owing to the similarity of its pathological characteristics to human RA [22]. As noted in this study, the appearance of pannus formation, inflammatory cell infiltration, cartilage degradation, and bone erosion are core features of adjuvant arthritis, and this constitutes the central rationale as to why adjuvant arthritis is popular in fundamental RA research and anti-RA therapeutic research. The present research has examined the positive impacts that CIE has on oxidative stress in the serum of AA rats, and it is therefore relevant to note that recently published reports have noted the link between RA and oxidative stress in human and animal populations [23,24]. Furthermore, regarding sufferers of RA, studies have reported increased lipid peroxidation, oxidative stress, and a decrease in enzymatic antioxidants, such as GSH-Px and SOD [23,24]. Additional research demonstrates that MPO constitutes a frequent in vivo index of granulocyte infiltration and inflammation and, moreover, functions as an indicator of oxidative stress [25].
This study found that oxidative stress was higher in the serum of AA rats when considered in relation to the control groups, and a correlation was identified between the presence of dietary CIE and the suppression of MPO activity; the latter finding indicates the alleviation of oxidative stress among AA rats. These findings reinforce the experimental outcomes of Comar et al. [26], which indicate a link between increased ROS content in the liver of arthritic rats and a stimulated prooxidant system in combination with an insufficient antioxidant defence mechanism. In view of the integral part that ROS plays in RA, a connection can be established between the serum's biochemical and histological modifications and variance in the oxidative state. Our previous data also showed that CIE significantly increased the activities of catalase (CAT), SOD, and GSH-Px in ankylosing spondylitis mice [21]. In view of this, alleviating oxidative stress could serve as a viable way to facilitate the prevention and treatment of the liver complications associated with arthritis. By focusing on the serum of arthritic rats, this study also examined the impact that CIE had on oxidative stress parameters; one of the key findings is that the consumption of CIE among AA rats resulted in higher GSH-Px and SOD activities in the long term. The author suggests that CIE, owing to its effect of heightening antioxidant enzyme activity, alleviated oxidative stress in the liver.
A series of papers have found evidence for the relevance of NF-κB in RA [27,28], and the degree to which it is inhibited by CIE has been identified as an indicator of CIE's promise as a therapeutic agent. A number of recently conducted experimental models have found that mediators of this kind have the capacity to gather leukocytes, including neutrophils [29,30]. This study's findings emphasise the potential of CIE insofar as it can facilitate the significant inhibition of inflammatory responses and, moreover, decrease NF-κB, TNF-α, IL-1β, and IL-6 levels. Therefore, there is reason to suppose that the anti-inflammatory capacity of CIE could be applied to inhibit inflammatory mediators including NF-κB, TNF-α, IL-1β, and IL-6. The same trends were also observed in our previous report; CIE modulated the NF-κB pathway and further altered the levels of TNF-α, IL-1β, and IL-6 [21]. Inflammatory cytokine generation is a critical process in the regulation of inflammation and the advancement of tumours, and this study's findings corroborate CIE's capacity to inhibit TNF-α and IL-1β. Consequently, the body of evidence to suggest that CIE constitutes an effective therapeutic agent against tumour progression and inflammatory responses is growing.
As aforementioned, this study's findings, derived from experimentation with AA rats, demonstrate that CIE improved oxidative stress and, furthermore, facilitated a fall in the serum levels of IL-1β, IL-6, and TNF-α. An equally critical finding stems from the indication that CIE has the capacity to suppress NF-κB activation in the AA rats' joints.
In view of these considerations, CIE may yet emerge as a viable and highly effective way to prevent and treat RA.
"year": 2017,
"sha1": "ab9a93872a0398c32dfb4fdde3b2434a145967a3",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2017/3285394.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3742facd5710fc009616de35d976e62d80dd310",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Does Honesty Require Time? Two Preregistered Direct Replications of Experiment 2 of Shalvi, Eldar, and Bereby-Meyer (2012).
Shalvi, Eldar, and Bereby-Meyer (2012) found across two studies (N = 72 for each) that time pressure increased cheating. These findings suggest that dishonesty comes naturally, whereas honesty requires overcoming the initial tendency to cheat. Although the study's results were statistically significant, a Bayesian reanalysis indicates that they had low evidential strength. In a direct replication attempt of Shalvi et al.'s Experiment 2, we found that time pressure did not increase cheating, N = 428, point biserial correlation (rpb) = .05, Bayes factor (BF01) = 16.06. One important deviation from the original procedure, however, was the use of mass testing. In a second direct replication with small groups of participants, we found that time pressure also did not increase cheating, N = 297, rpb = .03, BF01 = 9.59. These findings indicate that the original study may have overestimated the true effect of time pressure on cheating and the generality of the effect beyond the original context.
cheating-in the time-pressure condition than in the self-paced condition. These findings indeed suggest that dishonesty comes naturally, while honesty requires overcoming the initial tendency to cheat. To promote honesty, the authors therefore recommended giving people time to think rather than pushing for an immediate decision (see https://www.psychologicalscience.org/news/releases/when-do-we-lie-when-were-short-on-time-and-long-on-reasons.html). This study was theory driven, relied on an established manipulation, and included manipulation checks; in addition, the materials and data for the study are publicly available, and the study has been frequently cited.
There are, however, also reasons to question whether time pressure would increase dishonesty. First, limited cognitive capacity-also assumed to trigger automatic tendencies-has led to decreased, rather than increased, dishonesty in a variant of the die-roll game (Foerster, Pfister, Schmidts, Dignath, & Kunde, 2013; for a critique, see Shalvi, Eldar, & Bereby-Meyer, 2013). Second, in a game in which participants could decide to send a dishonest message to another participant in order to receive more money themselves, time pressure increased rather than decreased honest behavior (Capraro, 2017; Capraro, Schulz, & Rand, 2019; for a moderation explanation, see Köbis, Verschuere, Bereby-Meyer, Rand, & Shalvi, 2019). Third, a meta-analysis of 114 studies showed that lying systematically took longer than truth telling (Suchotzki, Verschuere, Van Bockstaele, Ben-Shakhar, & Crombez, 2017), leading the authors to conclude that honesty-and not dishonesty-is the automatic tendency. Fourth, time pressure was found to slightly increase cheating in a multiple-die-roll paradigm using a virtual die (D'hondt, Van der Cruyssen, Meijer, & Verschuere, 2019) but not in a single-die-roll paradigm.
The extent to which these reasons cast doubt on the validity of the finding that time pressure increases dishonesty or whether they can be explained by procedural differences remains unknown. Therefore, and because of the low diagnostic value of the original study (Bayes factor, or BF = 1.15; see Table 1), we set up an attempt to replicate Experiment 2 of Shalvi et al. (2012).
Reproducing the Original Results
We first verified the original results by reanalyzing the data provided by the authors. Following their exact analysis strategy, we reproduced the key effect of interest. The two-tailed Mann-Whitney U test showed that participants in the time-pressure condition reported a significantly higher die-roll outcome than participants in the self-paced condition. The effect was small to moderate (Experiment 1: rank-biserial correlation (rrb) = −.22, 95% confidence interval (CI) = [−.43, .01], corresponding to a Cohen's d of 0.44; Experiment 2: rrb = −.28, 95% CI = [−.48, −.05], corresponding to a Cohen's d of 0.58). We also reproduced ancillary effects (see https://osf.io/fjca2/).
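A minimal sketch of this kind of reanalysis is given below, with toy data standing in for the original OSF files. The rank-biserial effect size uses Wendt's formula, r_rb = 1 − 2U/(n1·n2); note that its sign depends on argument order and software convention, which is why the values above are negative.

```python
from scipy.stats import mannwhitneyu

def u_test_with_rank_biserial(time_pressure, self_paced):
    # Mann-Whitney U plus rank-biserial effect size (Wendt's formula)
    res = mannwhitneyu(time_pressure, self_paced, alternative="two-sided")
    r_rb = 1 - 2 * res.statistic / (len(time_pressure) * len(self_paced))
    return res.pvalue, r_rb

print(u_test_with_rank_biserial([4, 6, 5, 6, 3, 5], [3, 4, 2, 5, 3, 4]))  # toy data
```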
Preregistered Direct Replication (PDR) 1
Table 1. Self-Reported Die-Roll Outcomes in the Time-Pressure Condition and the Self-Paced Condition of Shalvi, Eldar, and Bereby-Meyer (2012)
PDR 1 was a preregistered replication of Experiment 2 reported by Shalvi et al. (2012), using a protocol approved by the original authors and a sample size more than five times that of the original study. The current study deviated in several ways from the original study. The most notable deviation between PDR 1 and the original studies was the session size. Sessions in the original study consisted of up to 6 participants, but in PDR 1, sessions of 228 and 233 participants were used. The prime reason for the larger session size was that we wanted to make it feasible to test a substantially larger number of participants than the original study within a reasonable time. Note that the original authors did not consider session size to be a key element of their design and that a vignette study (see https://osf.io/h8bjv/) provided no evidence that session size would affect the perceived chance of bonus payment. Secondly, when analyzing the original data, we noticed that-even before we excluded participants who did not meet the 8-s deadline-the sample sizes of the time-pressure condition and the self-paced condition were unequal. The original authors clarified that this was a result of the randomization procedure: Up to 6 participants subscribed for a session, and all participants within a session were assigned, as a group, to a randomly chosen condition (either time pressure or self-paced). Such between-session randomization is undesirable, as the experimenter is no longer blind to condition and could influence the results (Rosenthal, Persinger, Vikan-Kline, & Fode, 1963). We therefore chose to randomly assign participants to the time-pressure condition and the self-paced condition within each session. Furthermore, there were also differences in the precise die-rolling procedure (original study: shake cup back and forth on a table; current study: shake cup in hand), the software (original study: E-Prime; current study: Qualtrics), the test language (original study: Hebrew; current study: English), the country (original study: Israel; current study: The Netherlands), and whether participants were tested in their first language (original study) or in English, which for most participants was their second language (our study). We will return to these differences in the General Discussion.
Method
The design and analysis plans were preregistered on the Open Science Framework (https://osf.io/jez3g). All materials, data, and analytic scripts are available at https://osf.io/fnh9u/. The study was approved by the ethical committee of the Social and Behavioral Sciences faculty at the University of Amsterdam and registered as Number 2018-CP-9470. The protocol was carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki.
Participants. Camerer et al. (2018) showed that the effect size of replications is on average about 50% of the original effect size, so we aimed for 90% power to detect an effect of half the original size (d = 0.29, i.e., 50% of 0.58; note that the preregistration incorrectly mentions 50% of d = 0.66, implying a lower minimum required sample size of 366). For a one-sided independent Mann-Whitney U test with an alpha of .05, the minimum sample size is 428. Anticipating preregistered exclusions (i.e., exclusion of participants who failed to report within the time limit in the time-pressure condition), we tested all attendees of two mass test sessions at the University of Amsterdam. In each session, students performed a battery of tasks of which our task was the first. Four hundred sixty-one first-year psychology students participated. Thirty-three participants in the time-pressure condition were excluded because they did not report their die-roll outcome within the time limit. The final sample contained 428 participants (71.73% female, 27.57% male, 0.70% other) with a mean age of 19.77 years (SD = 2.58): 198 participants in the time-pressure condition and 230 participants in the self-paced condition.
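The minimum N of 428 can be reconstructed with a common shortcut: size a normal-approximation two-sample test and inflate it by the Mann-Whitney test's asymptotic relative efficiency of 3/π. This is an assumed reconstruction, not necessarily the authors' exact calculation.

```python
# Hedged reconstruction of the power analysis: normal-approximation two-sample
# test sized for d = 0.29 (half of 0.58), one-sided alpha = .05, power = .90,
# then an ARE correction (3/pi) for the Mann-Whitney U test.
from math import ceil, pi
from statsmodels.stats.power import NormalIndPower

d = 0.29
n1 = NormalIndPower().solve_power(effect_size=d, alpha=0.05, power=0.90,
                                  alternative='larger')
n_per_group = ceil(n1 / (3 / pi))   # inflate for the nonparametric test
print(2 * n_per_group)              # reproduces the minimum N = 428 quoted above
```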
Procedure. Participants first gave informed consent. Each was then randomly assigned to the time-pressure condition (i.e., roll the die and report the outcome within 8 s) or the self-paced condition (i.e., roll the die and report the outcome at their own pace) using Qualtrics (2019) permuted block randomization, which ensures an even distribution of participants across conditions. All participants received a paper cup with a lid and a six-sided die. They were invited to put the die in the cup, close the lid, shake the cup once, look through the hole in the lid to see the result of their roll, and report the outcome on the computer (see Fig. 1). As a financial incentive for cheating, and in accordance with the original instructions, participants were informed that several of them would be randomly selected to receive a monetary reward according to their reported die-roll outcome. More specifically, they learned that their reported number would be multiplied by 2 (1 = €2, 2 = €4, etc.), leading to a bonus of up to €12. After reading the instructions on the computer screen, participants were guided to press a button that started a timer measuring how long it took them to roll the die and report their outcome. The instructions were delivered in English (see https://osf.io/c2z4f/).
To evaluate whether the participants believed that a financial incentive was present and that their die roll was fully anonymous, we collected self-report ratings after the die-roll game. Participants were asked to rate the following statements on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree): "Several students will receive a monetary reward for the dice under cup game" and "My dice roll was fully anonymous-only I could know what I rolled." They were also asked to indicate on a slider (0-100%), "What is the chance that you will get the reward?" To evaluate whether the participants had read the instructions attentively, we asked them to answer the multiple-choice question, "The ratio between the dice roll and the possible reward is . . ." by choosing among the following options: "the reward (in euro) is equal to the outcome of the dice roll," "the reward (in euro) is two times the outcome of the dice roll," "the reward (in euro) is half of the outcome of the dice roll," or "the reward (in euro) is four times the outcome of the dice roll."
Results
Preregistered analyses.
Effect of time pressure on reported die-roll outcome. Participants in the time-pressure condition did not report significantly higher die-roll outcomes than participants in the self-paced condition (see Table 1).
Exploratory analyses.
Time-pressure manipulation check. Data from an extreme outlier (91 s; more than 5 SDs from the mean) were excluded from the time-pressure manipulation check. Participants in the time-pressure condition took less time to report the outcome of the die roll (M = 4.98 s, SD = 1.39 s) than those in the self-paced condition (M = 9.10 s, SD = 5.43 s), t(425) = 10.38, p < .001, d = 1.01, 95% CI = [0.80, 1.21], indicating that the time-pressure manipulation was successful.
Exclusions. Repeating the analyses without any exclusions, or using the subsample that expressed strong belief in the payment scheme (i.e., agreed or strongly agreed with the statement, "Several students will receive a monetary reward for the dice under cup game"), did not alter the pattern of findings (see https://osf.io/zqpw8/).
Self-report ratings. Self-report scales showed that most participants (90%) answered the control question regarding the payment scheme correctly. Most participants (84%) reported that they strongly believed their report was anonymous. Participants estimated their chance of winning the monetary reward at 23% (SD = 27%). Unexpectedly, only a minority of the participants (35%) reported a strong belief that several students would be paid for the die-roll game.
Discussion
We found no evidence that time pressure increases cheating in the die-roll paradigm. Self-report ratings revealed that participants may not have fully appreciated the financial benefit of cheating. The difference in session size may have resulted in a different social dynamic, potentially influencing cheating behavior (Amir, Mazar, & Ariely, 2018).
PDR 2
To rule out the possibility that the difference in results between our first replication study and the original study was due to the use of different session sizes, we ran another replication that used the same session size as the original, allowing up to 6 participants at once. We also explored the possibility that testing participants in their first language versus their second language would modulate the effect.
Method
The design and analysis plans for PDR 2 were preregistered on the Open Science Framework (https://osf.io/9bg3z). All materials, data, and analytic scripts are available at https://osf.io/xwzpc/. To make maximum use of our resources, we preregistered our intention to terminate data collection as soon as decisive evidence was found (Stefan, Gronau, Schönbrodt, & Wagenmakers, 2019). Specifically, after having tested double the sample size of the original study (i.e., 148 participants), we calculated, after each additional session, the Bayes factor for the Bayesian Mann-Whitney test that assessed the differences between the time-pressure condition and the self-paced condition on the self-reported die-roll outcome. We used a zero-centered Cauchy prior (r) scaled at 0.707 (the default setting in JASP; JASP Team, 2019) in all Bayesian analyses. If decisive evidence were reached for either the alternative hypothesis (i.e., that time pressure leads to a higher reported outcome compared with the self-paced condition; BF10 > 10) or the null hypothesis (i.e., that the time-pressure manipulation does not affect the reported outcome; BF01 > 10), we would terminate data collection. After running 319 participants (N = 297 inclusions), we reached decisive evidence for the null hypothesis (BF01 = 10.14), and we ended data collection.
Participants. Participants were recruited in a university building at both the University of Amsterdam and Maastricht University for a die-rolling study. They received €2 for participation in the 10-min study and were informed during recruitment that they could earn a bonus payment. Other than gathering a minimum of 3 and a maximum of 6 participants per session, there were no inclusion or exclusion criteria during recruitment. In the time-pressure condition, 22 participants were excluded because they did not report their die-roll outcome within the 8-s time limit. The final sample contained 297 participants (55% female) with a mean age of 21.60 years (SD = 3.23 years). About half of the participants had Dutch nationality (62%), and about half spoke Dutch as their native language (60%). The time-pressure condition contained 138 participants (54% female, 46% male) with a mean age of 21.92 years (SD = 3.61 years). The self-paced condition contained 159 participants (43% female, 57% male) with a mean age of 21.31 years (SD = 2.84 years).
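The sequential stopping rule described at the start of this Method section can be sketched as a simple loop. JASP's Bayesian Mann-Whitney test (Cauchy prior, r = 0.707) has no direct Python equivalent, so a default-prior Bayesian t-test from pingouin stands in as an approximation; `collect_session` simulates uniform (honest) reports and is purely a placeholder.

```python
import numpy as np
from scipy.stats import ttest_ind
from pingouin import bayesfactor_ttest

rng = np.random.default_rng(1)
def collect_session():
    # placeholder: one session's reported die rolls per condition (3 each)
    return list(rng.integers(1, 7, 3)), list(rng.integers(1, 7, 3))

pressure, selfpaced = [], []
while True:
    new_p, new_s = collect_session()
    pressure += new_p; selfpaced += new_s
    if len(pressure) + len(selfpaced) < 148:       # test double the original N first
        continue
    t_stat = ttest_ind(pressure, selfpaced).statistic
    bf10 = bayesfactor_ttest(t_stat, nx=len(pressure), ny=len(selfpaced), r=0.707)
    if bf10 > 10 or bf10 < 0.1:                    # decisive evidence either way: stop
        break
print(len(pressure) + len(selfpaced), bf10)
```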
Procedure. Participants chose a die from a box with dice and then took a seat at one of the six individual tables. On each table was a laptop and a cup with a lid on it. General oral instructions were given to the group. After obtaining informed consent, we gave all further instructions individually via the computer screen. Participants were invited to test whether the die was fair by rolling it a few times. Then they were asked to put the die in the cup and close the lid. Each participant was randomly assigned to either the time-pressure condition (8-s deadline) or the self-paced condition using Qualtrics permuted block randomization. As a financial incentive for cheating, and in accordance with the original instructions, participants were informed that several of them would be randomly selected to receive a monetary reward according to their reported die-roll outcome. Specifically, they learned that the bonus pay would be twice the reported outcome (1 = €2, 2 = €4, etc.), leading to a bonus of up to €12. The instruction page explaining the reward and the die-under-the-cup task was displayed for a minimum of 30 s to prevent participants from going through the instructions without paying proper attention. After 30 s, the next button appeared and pressing it started a timer measuring how long it took participants to roll their die and report their outcome (see Fig. 1). Participants could choose to take the task in English or in Dutch (for the materials, see https://osf.io/6c9qr/).
After reporting their die-roll outcome, participants were asked to provide their gender, major, age, nationality, and native language. To gain insight into how participants perceived the task, we collected self-report ratings after the die roll. (All questions are reported at https://osf.io/6c9qr/.) Here, we highlight that participants were asked to rate the statements "Several students will receive an extra monetary reward for the dice under cup task" and "My dice roll was fully anonymous-only I could know what I rolled" on a 5-point Likert scale ranging from 1, strongly disagree, to 5, strongly agree. To evaluate whether the participants had read the instructions attentively, we asked them to answer the multiple-choice question, "The ratio between the dice roll and the possible extra reward is equal to / two times / half of / four times . . . the outcome of the die roll." They were also asked, "What was your perceived time-pressure during the die roll?" Responses were made on a 5-point scale ranging from very high to very low.
Deviations from the original study. Except for session size, the deviations between the current study and the original study were the same as for our first replication study (i.e., precise die-rolling procedure, software, test language, and country).
Preregistered analyses.
Time-pressure manipulation check. Participants in the time-pressure condition took less time to report the outcome of the die roll (M = 5.25 s, SD = 1.46 s) than those in the self-paced condition (M = 7.88 s, SD = 4.62 s), t(295) = 6.43, p < .001, d = 0.75, 95% CI = [0.51, 0.98], indicating that the time-pressure manipulation was successful.
Effect of time pressure on reported die-roll outcome. Participants in the time-pressure condition did not report significantly higher die-roll outcomes than participants in the self-paced condition (see Table 1).
Exploratory analyses.
Exclusions. Repeating the analyses without any exclusions did not alter the pattern of findings (see https://osf .io/c2n6h/).
Test language.
Using a similar die-rolling paradigm, Bereby-Meyer et al. (2018; but see Köbis et al., 2019) found that participants cheated more when the experiment was conducted in their native language. Given that the original study also tested participants in their native language, we separately analyzed the subsample tested in their native language (n = 177). Participants in the time-pressure condition (n = 80; M = 3.86, SD = 1.65) did not report significantly higher die-roll outcomes than participants in the self-paced condition (n = 97; M = 3.68, SD = 1.70), Z = 0.70, p = .241, rank-biserial correlation (rrb) = −.05, 95% CI = [−∞, .09]. (See https://osf.io/c2n6h/ for the full results.)
Self-report ratings. Most participants (84%) answered the control question regarding the payment scheme correctly. Most participants (89%) also reported that they strongly believed their report was anonymous. Participants estimated their chance of winning the monetary reward at 36% (SD = 27%). A majority (77%) of the participants reported that they strongly believed that several students would receive an extra reward for the die-roll game. The perceived time pressure was higher in the time-pressure group (M = 3.11, SD = 1.04) than in the self-paced group (M = 2.34, SD = 1.04), t(295) = 6.36, p < .001, Cohen's d = 0.74, 95% CI = [0.50, 0.98].
Self-reported time pressure provides an additional test of the time-pressure effect. Within the time-pressure condition, we examined whether greater perceived time pressure was related to higher reported die-roll outcomes. A Kruskal-Wallis test indicated that the average die-roll outcome did not differ significantly across the five levels of perceived time pressure (very low, low, neutral, high, very high), χ2(4, N = 138) = 3.16, p = .532 (see Fig. 2).
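A minimal sketch of this Kruskal-Wallis check is given below; the `by_level` dictionary and its simulated outcomes are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
# placeholder die-roll outcomes grouped by perceived time pressure (5 levels)
by_level = {lvl: rng.integers(1, 7, 25) for lvl in
            ["very low", "low", "neutral", "high", "very high"]}
stat, p = kruskal(*by_level.values())
print(f"chi2({len(by_level) - 1}) = {stat:.2f}, p = {p:.3f}")
```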
General Discussion
What is people's automatic tendency in a tempting situation? Shalvi et al. (2012) found that time pressure, a straightforward manipulation to spark "thinking fast" over "thinking slow," provoked more cheating, and they concluded that people's initial response is to serve their self-interest and cheat. We found no evidence that time pressure increased cheating in the die-roll paradigm. There are three possible reasons why replication studies do not produce the same results as the original study: (a) methodological problems in the replication study, (b) overestimation of the true effect size in the original study, or (c) differences between the studies that moderate the effect (Wicherts, 2018).
The first possibility is that methodological limitations in the replication study produced different results. In our first replication study, participants may not have fully appreciated the financial benefits of cheating. In our second replication study, relying on two test sites and offering the task in two languages may have increased error variance. But even for participants who performed the task in their native language, there was anecdotal support for the absence of a time-pressure effect (BF01 = 2.90). The second possible explanation is that the original study overestimated the true effect size. The use of between-session rather than within-session randomization in the original study makes the experimenter aware of condition assignment and raises the possibility that the experimenter influenced the results (Rosenthal et al., 1963). Also, a single observation (in this case, a single reported die-roll outcome) per participant is likely to provide a noisy measure. With low reliability, the results are more likely to vary per sample.
The third possible explanation is that the time-pressure effect on cheating is influenced by the context and that differences between the studies explain the different results. Our replications differed in several ways from the original, the most prominent being the country where the study was run, namely Israel in the original versus The Netherlands in the replications. The difference in test site raises the possibility of cross-cultural differences in intuitive dishonesty. Perceived country corruption, for instance, is related to the amount of cheating in the dieunder-the-cup game (Gächter & Schulz, 2016). Then again, the large meta-analysis by Abeler, Nosenzo, & Raymond (2019) found that cheating behavior varies little by country. Still, it seems worthwhile to explore whether the automatic tendency to cheat may vary with culture.
In both our PDRs, people were predominantly honest, and we in fact found no evidence of cheating. 4 Whereas Shalvi et al. (2012) originally reasoned that "time pressure evokes lying even in settings in which people typically refrain from lying" (p. 1268), our findings point to the possibility that the time-pressure effect is bound to settings that produce more pronounced cheating (e.g., when providing justifications for cheating).
In sum, our findings indicate that the original study by Shalvi et al. (2012) may have overestimated the true effect of time pressure on cheating or the generality of the effect beyond the original context. The vast majority of our participants were honest, even under time pressure. This finding casts doubt on whether people's intuitive tendency is to cheat and fits better with a preference for honest behavior.

4. This is far from exceptional, and many studies have found even lower average reports and complete honesty (see Fig. 1 of Abeler et al., 2019 or http://www.preferencesfortruthtelling.com/). The finding that people could cheat to maximize personal gain without any punishment but did not fits with the meta-analytic conclusion of Abeler et al. that people cheat surprisingly little.
"year": 2020,
"sha1": "ca9d8735059748b712c5b0ee749e66cf8112fb84",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0956797620903716",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5fd540b0ab263ee78159a0145403adf07b7b588",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Dietary fibre and childhood constipation
Dietary fibre is found in plant foods such as fruits, vegetables and grains. Whole-grain breads and cereals, apples, oranges, bananas, berries, prunes, pears, green peas, legumes, artichokes and almonds are good sources of dietary fibre. A high-fibre food has 5 g or more of fibre per serving, and a good source of fibre is one that provides 2.5-4.9 g per serving. Half a cup of cooked beans (6.2-9.6 g), half a cup of cooked green peas (4.4 g), 1 medium baked potato with skin (3.0 g), one third cup of bran cereal (9.1 g), 1 small apple with skin (3.6 g), 1 medium orange (3.1 g) and 1 medium banana (3.1 g) are good sources of fibre.
Dietary fibre and childhood constipation
Sri Lanka Journal of Child Health, 2014; 43(4): 191-192 (Key words: Dietary fibre; child; constipation)
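The per-serving thresholds quoted above lend themselves to a simple classification rule. The following minimal Python sketch (an illustration, not part of the original article) applies the ≥5 g "high-fibre" and 2.5-4.9 g "good source" cut-offs to the serving figures given in the text.

```python
# Minimal sketch of the per-serving fibre classification described above.
def classify_fibre(grams_per_serving: float) -> str:
    if grams_per_serving >= 5.0:
        return "high-fibre food"
    if grams_per_serving >= 2.5:
        return "good source of fibre"
    return "below 'good source' threshold"

# Serving values quoted in the text
servings = {
    "1/3 cup bran cereal": 9.1,
    "1/2 cup cooked green peas": 4.4,
    "1 small apple with skin": 3.6,
    "1 medium baked potato with skin": 3.0,
}
for food, grams in servings.items():
    print(f"{food}: {grams} g -> {classify_fibre(grams)}")
```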
A double-blind, randomized, crossover study evaluated the effect of glucomannan, a fibre gel polysaccharide from the tubers of the Japanese konjac plant, versus placebo in children with chronic functional constipation with and without encopresis. 2 After initial evaluation, patients were disimpacted with phosphate enemas if a rectal impaction was felt. Patients continued their pre-evaluation laxative, and no enemas were given during either treatment period. Fibre and placebo were given as 100 mg/kg daily (maximum 5 g/day) with 50 ml fluid per 500 mg for 4 weeks each. Parents were asked to keep a stool diary. Age, frequency of bowel movements, presence of abdominal pain, dietary fibre intake, medications, and presence of an abdominal and/or rectal faecal mass were recorded at recruitment and 4 and 8 weeks later. Children were rated by the physician as successfully treated when they had 3 or more bowel movements per week and one or fewer soiling episodes per 3 weeks, with no abdominal pain, in the last 3 weeks of each 4-week treatment period. 2 Of the 46 chronically constipated children recruited, 31 (67.4%) completed the study; 18 of the 31 had encopresis. Significantly fewer children complained of abdominal pain on fibre, and 45% of children were successfully treated while on fibre compared with 13% on placebo. The authors concluded that glucomannan was beneficial in the treatment of constipation with and without encopresis in children, and that symptomatic children already on laxatives still benefited from the addition of fibre. 2
Using a parallel, randomized, double-blind, controlled trial, an interventional study evaluated the efficacy of a cocoa husk supplement rich in dietary fibre on intestinal transit time in children with constipation. 3 After screening, patients were randomly allocated to receive, for a period of 4 weeks, either a cocoa husk supplement or placebo, plus standardized toilet training procedures. Total and segmental colonic transit times were determined, and bowel movement habits and stool consistency were evaluated using a diary. The main variable for verifying efficacy of treatment was the total colonic transit time. 3 Fifty-six children with chronic idiopathic constipation were randomly assigned and 48 (85.7%) completed the study. With respect to total, partial colon, and rectum transit times, there was a statistically non-significant trend toward faster transit in the cocoa husk group than in the placebo group. When the evolution of intestinal transit time throughout the study was analyzed for children whose total basal intestinal transit time was greater than the 50th percentile, the total transit time decreased by 45.4 ± 38.4 hours in the cocoa husk group and by 8.7 ± 28.9 hours in the placebo group. Children who received cocoa husk supplements tended to increase their number of bowel movements more than children in the placebo group. A reduction was observed in the percentage of patients who reported hard stools, and this reduction was significantly greater in the cocoa husk group: at the end of the intervention, 41.7% and 75.0% of patients who received cocoa husk supplementation or placebo, respectively, reported hard stools. The authors concluded that this study confirmed the benefits of a cocoa husk supplement rich in dietary fibre on chronic idiopathic constipation in children, and that these benefits seemed more evident in paediatric constipated patients with slow colonic transit time. 3
A randomized, double-blind, prospective controlled study was carried out in patients receiving either a fibre mixture or lactulose in a yogurt drink. 4 After a baseline period of 1 week, patients were treated for 8 weeks, followed by 4 weeks of weaning. Polyethylene glycol 3350 was added if no clinical improvement was observed after 3 weeks. Using a standardized bowel diary, parents recorded defaecation frequency during the treatment period. In addition, incontinence frequency, stool consistency, presence of abdominal pain and flatulence, necessity for step-up medication, and dry weight of faeces were recorded. 4 Of the 135 participants, 65 were randomized to treatment with the fibre mixture and 70 to treatment with lactulose. In all, 97 (71.9%) children completed the study. No difference was found between the groups after the treatment period concerning defaecation frequency (P = 0.481) or faecal incontinence frequency (P = 0.084). However, stool consistency was significantly softer in the lactulose group (P = 0.01). Abdominal pain and flatulence scores were comparable (P = 0.395 and P = 0.739, respectively), as was the necessity for step-up medication during the treatment period (P = 0.996). The authors concluded that the fluid fibre mixture and lactulose gave comparable results in the treatment of childhood constipation. 4
A randomized prospective controlled study was carried out on 61 patients, 31 in the partially hydrolyzed guar gum group and 30 in the lactulose group. 5 Patients were given lactulose or partially hydrolyzed guar gum for 4 weeks. Using a standardized bowel diary, defaecation frequency, stool consistency, and presence of flatulence and abdominal pain were recorded. Bowel movement frequency per week and stool consistency improved significantly in both treatment groups (p < 0.05). The percentage of children with abdominal pain and stool withholding also decreased significantly in both groups (p < 0.05). Weekly defaecation frequency increased from 4 ± 0.7 to 6 ± 1.06 and from 4 ± 0.7 to 5 ± 1.7 in the lactulose and partially hydrolyzed guar gum groups, respectively (p < 0.05). The authors concluded that treatment with partially hydrolyzed guar gum is as effective as lactulose in relieving stool withholding and constipation-associated abdominal pain, and that its use improves stool consistency. 5
Tabbers et al. systematically reviewed non-pharmacological treatments for childhood constipation. 6 They concluded that there is some evidence that fibre supplements are more effective than placebo, but no evidence of any effect for fluid supplements, prebiotics, probiotics, or behavioural interventions. 6
A meta-analysis of randomized controlled trials concluded that dietary fibre intake can significantly increase stool frequency in patients with constipation but does not significantly improve stool consistency. 7
From the above studies it is evident that fibre supplements are more effective than placebo, and as effective as lactulose, in the care of children with constipation.
"year": 2014,
"sha1": "37b510f63137593d999cac206ac7bf3602230909",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-sljo-j-sljch-files/journals/1/articles/7758/submission/proof/7758-1-27353-3-10-20151219.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "37b510f63137593d999cac206ac7bf3602230909",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Identification of selective inhibitors for diffuse-type gastric cancer cells by screening of annotated compounds in preclinical models
Background Diffuse-type gastric cancer (DGC) exhibits rapid disease progression and poor patient prognosis. We have previously established an E-cadherin/p53 double conditional knockout (DCKO) mouse line as the first genetically engineered one which morphologically and molecularly recapitulates human DGC. In this study, we explored low-molecular-weight drugs that selectively eliminate mouse and human DGC cells. Methods We derived mouse gastric cancer (GC) cell lines from DGC of the DCKO mice that demonstrated enhanced tumourigenic activity in immunodeficient mice and acquired tolerance to cytotoxic anti-cancer agents. Results We performed a synthetic lethal screening of 1535 annotated chemical compounds and identified 27 candidates that selectively kill the GC cell lines. The most potent drug, mestranol, an oestrogen derivative, and other oestrogen receptor modulators specifically attenuated viability of the GC cell lines by inducing apoptosis preceded by DNA damage. Moreover, mestranol significantly suppressed tumour growth of the GC cells subcutaneously transplanted into nude mice, consistent with the longer survival time of female DCKO mice compared with males. As expected, human E-cadherin-mutant and -low gastric cancer cells showed higher susceptibility to oestrogen drugs than E-cadherin-intact ones, both in vitro and in vivo. Conclusions These findings may lead to the development of novel therapeutic strategies targeting DGC.
INTRODUCTION
Gastric cancer (GC) is estimated to be the third leading cause of cancer-related death in the world. 1 GC is histologically classified into two major subtypes, intestinal-type and diffuse-type. Diffuse-type gastric cancer (DGC) in particular demonstrates infiltrative growth and occasionally metastasizes to lymph nodes, resulting in worse prognosis. 2 Although several clinical trials of chemotherapeutic drugs for advanced GC have been launched, overall survival rates have not been dramatically improved, remaining at approximately 20% at 5 years. [3][4][5] Germline mutations of CDH1 are frequently identified in hereditary DGC, and TP53, CDH1 and RHOA mutations in sporadic DGC, but the molecular mechanisms underlying diffuse-type gastric carcinogenesis have not been completely clarified. 6,7 We have recently established a mouse model of DGC in which E-cadherin (Cdh1) and p53 (Trp53) are inactivated specifically in gastric mucosae. 8 The penetrance is 100% for gastric neoplasm, contributing to an unfavourable mortality of 50% within a year. Poorly differentiated and signet-ring cell adenocarcinoma cells are distributed from the mucosal to the serosal layers in these mice. The high frequency of lymph node dissemination and tumourigenicity in nude mice indicates enhanced malignancy. Gene expression profiles of mouse DGC resemble those of human DGC, and mesenchymal markers and epithelial-mesenchymal transition (EMT) regulators are overexpressed in mouse DGC, as previously noted in human DGC. Taken together, the E-cadherin/p53 double conditional knockout (DCKO) mouse line is the first genetically engineered one which morphologically and molecularly recapitulates human DGC. 8 An in vitro system is required to further extend the mouse model-based research, and we therefore derived GC cell lines harbouring biological and molecular traits closely similar to those in vivo from DGC of the DCKO mice. The powerful platform of the cell lines and model mice could facilitate drug development and preclinical testing for DGC treatment. In this study, considering the poor understanding of targets in E-cadherin-deficient DGC and the easy availability of agents in clinical practice, we performed a synthetic lethal screening of a library of well-characterised compounds by using these cell lines, and evaluated the identified candidates in vitro and in vivo.

METHODS

Tumourigenicity in immunocompromised mice
Cells were suspended in 100 μl Matrigel (BD Biosciences) and subcutaneously injected into male KSN nude mice. The frequency of tumourigenic cells and P-values were calculated by using extreme limiting dilution analysis. 10 The volume of growing tumours was monitored once a week and calculated by the formula volume = length × width² × 0.5. Chemical agents were administered after tumours became palpable (about 500 mm³). Tumour transplantation and drug administration were performed following the conditions and schedule described in Supplementary Table 3.
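Two quantities from these methods are easy to make concrete in code. The sketch below implements the stated tumour-volume formula and, for illustration only, a single-hit Poisson estimate of tumourigenic cell frequency; the authors used extreme limiting dilution analysis (the ELDA tool), so this simplified estimator and the dilution readouts are assumptions, not their exact method.

```python
# Illustrative sketch; the limiting-dilution data below are hypothetical.
import math

def tumour_volume(length_mm: float, width_mm: float) -> float:
    """Volume = length x width^2 x 0.5, as defined in the methods."""
    return length_mm * width_mm ** 2 * 0.5

# cells injected -> (tumours formed, injections), hypothetical readout
dilutions = {100: (3, 6), 1000: (6, 6)}
for dose, (takes, total) in dilutions.items():
    neg = (total - takes) / total            # fraction of negative injections
    if 0 < neg < 1:
        freq = -math.log(neg) / dose         # single-hit Poisson estimate
        print(f"{dose} cells: ~1 tumourigenic cell per {1 / freq:.0f} cells")
print(f"example volume: {tumour_volume(12, 8):.0f} mm^3")
```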
Drug screening
Cells were seeded at 1 × 10³ cells in 100 μl of media per well into 96-well plates. A compound library composed of 1535 well-characterised and off-patent compounds was generously provided by the Chemical Biology Screening Center of Tokyo Medical and Dental University (http://www.tmd.ac.jp/mri/SBS/cbsc/). At 12 h, diluted compound solution was transferred from the 96-well stock plates into the 96-well assay plates, resulting in final concentrations of 100 and 10 μM for all compounds. Forty-eight hours after compound treatment, 10 μl of WST-8 Reagent (Dojindo) per well was added, and the absorbance was measured on a microplate reader (Bio-Rad Laboratories) at 450 nm, with background subtraction at 630 nm, at 4 h.
Cell cycle analysis
Cells were plated in 6-cm dishes and grown overnight. Forty-eight hours after drug treatment, the cells were harvested, washed with phosphate-buffered saline (PBS) and fixed with 70% ethanol overnight at −20 °C. After rinsing with PBS containing 3% bovine serum albumin (BSA), the cells were resuspended in PBS with 50 μg/ml propidium iodide (PI) solution (Sigma-Aldrich, St. Louis, MO) and 10 μg/ml RNase A (Sigma-Aldrich) for 30 min on ice. The stained cells were counted on a FACSCalibur flow cytometer (BD Biosciences).
Apoptosis analysis
Cells were seeded in 6-cm dishes and incubated overnight. Twenty-four hours after each drug was administered, the cells were collected, rinsed with binding buffer (10 mM HEPES, pH 7.4; 140 mM NaCl; 2.5 mM CaCl₂), and incubated in 100 μl of binding buffer containing 5 μl of Annexin V-FITC Reagent (MBL International Corporation, Woburn, MA) for 30 min on ice. The labelled cells were sorted on a FACSCalibur flow cytometer (BD Biosciences). For caspase inhibition assays, cells were seeded at a density of 2 × 10⁴ cells per well in 12-well plates and incubated overnight. Forty-eight hours after treatment with 20 μM z-VAD-FMK (MedChem Express, Monmouth Junction, NJ) and 100 μM mestranol, cell viability was calculated by using MTT as described above.
Statistical analysis
Unsupervised clustering analysis based on Ward's method and principal component analysis of 505 genes with interquartile range (IQR) values greater than 4.0 were performed by using the R statistical software (version 3.0.3), which was also used for the other statistical analyses. In cell viability assays, each P-value was calculated with the GraphPad Prism 7 software (San Diego, CA). P-values less than 0.05 were considered statistically significant.
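As a rough translation of this pipeline (the authors worked in R 3.0.3), the Python sketch below filters genes by interquartile range, clusters samples with Ward's method, and runs PCA. The expression matrix is randomly generated, so the retained gene count and cluster labels are placeholders, not the study's results.

```python
# Hedged Python re-implementation of the described expression analysis.
import numpy as np
from scipy.stats import iqr
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical genes x samples matrix with variable per-gene spread
expr = rng.normal(size=(2000, 12)) * rng.uniform(0.5, 4.0, size=(2000, 1))

keep = iqr(expr, axis=1) > 4.0          # keep genes with IQR > 4.0
filtered = expr[keep]
print(f"{keep.sum()} genes retained")

Z = linkage(filtered.T, method="ward")  # Ward clustering of the samples
labels = fcluster(Z, t=3, criterion="maxclust")

pcs = PCA(n_components=2).fit_transform(filtered.T)
print(labels, pcs.shape)
```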
RESULTS

Establishment of mouse E-cadherin/p53-deficient DGC cell lines
We first took into culture three transplanted tumours (#682, #792 and #773) of DGC of the DCKO mice, as schematically illustrated in Fig. 1a, and isolated two cell lines rapidly expanding from each tumour (Supplementary Figure 1A). After a few passages, these six cell lines showed morphological heterogeneity, flat and round: the #682 and #773 tumours yielded two flat-dominant (MDGC1 and 2) and two round-dominant (MDGC5 and 6) cell lines, respectively, whereas the #792 yielded one flat-dominant (MDGC3) and one round-dominant (MDGC4) line, as presented in Fig. 1b. We also confirmed complete recombination of the Cdh1 and Trp53 loci by genomic PCR in all six MDGC cell lines (Supplementary Figure 1B).
We next investigated the biological and molecular characteristics of these two types of cell lines. Cell proliferation assays (Fig. 1c) and wound-healing assays (Fig. 1d, e) showed that the round-dominant cell lines exhibited greater mitogenic and motile properties than the flat-dominant ones. The round-dominant cell lines were also more refractory than the flat-dominant ones to two cytotoxic drugs commonly used for the treatment of patients with advanced GC, 5-fluorouracil and paclitaxel (Fig. 1f). Similarly to the gene expression signatures of primary DGC of the DCKO mice, the expression levels of mesenchymal markers (Vim and Cdh2) and EMT regulators (Twist1 and Zeb2) were higher in the round-dominant cell lines than in the flat-dominant cell lines (Fig. 1g). These findings suggested that mouse E-cadherin/p53-deficient DGC might consist of two subtypes of cancer cells with morphologically, biologically and molecularly distinct features.
Evaluation of mouse E-cadherin/p53-deficient DGC cell lines
We succeeded in obtaining three subclones (MDGC4SC1, 6 and 7) composed of only flat cancer cells by limiting dilution, but it was difficult to maintain clones retaining the round phenotype for long periods. We therefore optimised culture conditions, including sera, media and substrates, and newly established three cancer cell lines (MDGC7, 8 and 9) with the round shape from primary GC and lymph node metastases of DGC of the DCKO mice (Fig. 2a), which harboured complete recombination of the Cdh1 and Trp53 loci as expected (Supplementary Figure 1B). We also possess cell lines derived from stomach mucosae of fetal p53-null mice. 9 We then compared the mouse flat and round Cdh1−/−;Trp53−/− GC cell lines with the mouse Trp53−/− gastric epithelial (GE) cell lines as control (Fig. 2b). We first cultured these three types of cell lines in serum-free media in ultra-low-attachment culture dishes, an in vitro measurement of tumourigenic activity. The round GC cell lines showed a 30-fold increase in sphere-forming capacity relative to the flat GC and GE lines (Fig. 2c, Supplementary Figure 2A). We next directly assessed tumourigenic ability by subcutaneously injecting the cells into nude mice. Tumours were generated with only 100 cells of the round GC lines, 100-fold and 1000-fold fewer than were required for tumour seeding by the flat GC and GE cells, respectively (Fig. 2d). Tumours of the round GC cell lines grew much more rapidly than those of the flat GC (Fig. 2e). In transplanted tumours derived from the round GC cells, poorly differentiated cancer cells and signet-ring carcinoma cells were abundantly distributed, and gland-like structures formed by moderately differentiated cancer cells were also scattered (Supplementary Figure 2B). In contrast, the flat GC cells generated tumours mainly composed of the gland-like structures, indicating that the round GC cells maintain CSC-like properties and sit at a higher level of the tumour hierarchy than the flat GC cells. Strong resistance to conventional chemotherapeutic agents is also an important aspect of malignancy. Indeed, 5-fluorouracil and paclitaxel decreased the cell number of the round GC cell lines less than that of the flat GC and GE lines (Fig. 2f). As predicted from these in vitro results, when intraperitoneally administered to nude mice bearing palpable tumours of the MDGC7 cell line, 5-fluorouracil (50 mg/kg/week) failed to reduce tumour size (Supplementary Figure 2C). Unsupervised clustering analysis (Fig. 2g) and principal component analysis (Fig. 2h) of gene expression profiles clearly distinguished these three cell types from each other, although the round and flat GC cells shared similar genetic backgrounds. Among 505 differentially expressed genes, the 142 genes up-regulated in the round GC cell lines compared with the GE and flat GC included Twist1, 11 consistent with the above-mentioned data (Fig. 1g). Taken together, we established bona fide cancer cell lines of DGC of the DCKO mice.
First and second screenings of a library of known chemical compounds
We speculated that drugs selectively killing the GC cell lines might be a silver bullet against E-cadherin-deficient DGC. On the basis of this reasoning, we designed a proof-of-concept assay to identify such drugs (Fig. 3a): we screened 1535 well-characterised compounds provided by the Chemical Biology Screening Center of Tokyo Medical and Dental University by using the GE (GIF9), flat GC (MDGC4SC1) and round GC (MDGC7) cell lines. Before starting the ATP-based cell viability assays on 96-well plates, we examined the range in which cell number was proportionally correlated with absorbance (Supplementary Figure 3A), and determined that seeding 1000 cells per well is appropriate for this assay. A screening window coefficient, the Z'-factor, is required to qualify a screening assay. 12 Under exposure to a high concentration (5 mM) of 5-fluorouracil as positive control, Z'-factors of the cell lines were calculated as approximately 0.5, assuring the reliability of this screening system (Supplementary Figure 3B). We first screened the 1535 test compounds, including conventional chemotherapeutic drugs, on this platform (Fig. 3b), and plotted the difference between GIF9 and MDGC7 cell viability (ΔCV) in a histogram (Supplementary Table 6). We identified 195 chemical compounds lying more than 1 standard deviation (SD) below the mean of ΔCV as candidates with selective toxicity toward the round GC cell lines. We next re-evaluated the 195 hit compounds in duplicate, and created a histogram of ΔCV and a scatter plot (Fig. 3c). In the same way, 27 candidates more than 1 SD below the mean of ΔCV were enumerated in the table of Fig. 3d, which intriguingly contained several classes of clinically available drugs: mefenamic acid and diclofenac (red) are non-steroidal anti-inflammatory drugs (NSAIDs), prophylactic use of which contributes to lower relative risks of gastrointestinal cancer; 13 oxybutynin and orphenadrine (orange) are muscarinic antagonists widely prescribed for patients with benign prostatic hyperplasia, and Zhao and colleagues have recently documented that physiological and pharmacological inhibition of muscarinic acetylcholine receptors suppresses gastric tumourigenesis; 14 mestranol and equilin have oestrogen-like structure and function, consistent with the potential role of oestrogen in explaining the male predominance of stomach cancer; 15 betamethasone, desoxymetasone and desoxycorticosterone belong to the glucocorticoid class and are generally used for palliative care of terminal cancer patients. This clustering of known drug classes supports the accuracy of our screening scheme.
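A compact way to express the hit-calling logic above is sketched below with simulated viabilities. The sign convention for ΔCV is inferred from the negative values quoted for hits (so ΔCV = viability of MDGC7 minus viability of GIF9), and the Z'-factor uses the standard Zhang et al. definition; both are assumptions about details the text leaves implicit.

```python
# Hypothetical hit selection and assay-quality check; data are simulated.
import numpy as np

rng = np.random.default_rng(1)
cv_gif9 = rng.uniform(0.2, 1.0, size=1535)   # normalized GE viabilities
cv_mdgc7 = rng.uniform(0.2, 1.0, size=1535)  # normalized round-GC viabilities

delta_cv = cv_mdgc7 - cv_gif9                # negative = selective GC kill
cutoff = delta_cv.mean() - delta_cv.std()    # "outside -1 SD of the mean"
hits = np.flatnonzero(delta_cv < cutoff)
print(f"{hits.size} candidate compounds")

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    """Zhang et al. screening-window coefficient."""
    return 1 - 3 * (pos.std() + neg.std()) / abs(pos.mean() - neg.mean())

pos_ctrl = rng.normal(0.05, 0.03, 16)        # e.g. 5 mM 5-fluorouracil wells
neg_ctrl = rng.normal(1.00, 0.08, 16)        # untreated wells
print(f"Z' = {z_prime(pos_ctrl, neg_ctrl):.2f}")
```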
Third and fourth screenings of a library of known chemical compounds
For further studies, after excluding antibiotics and antiseptics, we selected ten drugs and two alternates, thioridazine (a dopamine D2 receptor antagonist, ΔCV = −0.121) and catechin (an isomer of epicatechin, not included in the compound library), from the 27 candidate substrates listed in Fig. 3d, in addition to four drugs: quercetin, salinomycin, flutamide and bicalutamide. Since thioridazine, 16 quercetin 17 and salinomycin 18 have been identified as drugs targeting cancer stem cells (CSCs) by screenings of annotated compound libraries, we hypothesised that they could also selectively kill the round GC cell lines harbouring CSC-like properties (Fig. 2). Garnett and collaborators have screened a panel of several hundred cancer cell lines with 130 drugs under clinical and preclinical investigation, 19 and their public data suggest that somatic mutations of the CDH1 gene are associated with cellular responses to the androgen receptor antagonist bicalutamide (P = 6.87 × 10⁻³), categorised in the same class as flutamide (ΔCV = −0.271). We therefore examined their effects across a range of doses, and confirmed that all of them showed selective toxicity toward the round GC cell lines (Fig. 4a, Supplementary Figure 4A). These results not only supported the reliability of this screening assay, but were also consistent with the previous reports of drugs targeting CSCs [16][17][18] and E-cadherin-mutant cancer cells. 19 We compared drug sensitivity among the three types of cell lines, the GE, flat and round GC, and found that the 12 candidate drugs fell into two groups: ones selectively killing both the flat and round GC cell lines (e.g., mefenamic acid, rosuvastatin, dipyridamole and NADIDE) and ones killing only the round GC (Supplementary Figure 4B). Among the candidates, mestranol, NADIDE and betamethasone in particular had broad therapeutic windows between the GE and GC cell lines. We then performed flow cytometric analysis with propidium iodide (PI) staining in two types of cell lines to determine cell cycle profiles and apoptotic events. Treatment with mestranol, NADIDE and betamethasone increased the sub-G1 population only in the GC cell lines, consistent with the results of the dose-response curves (Fig. 4b).
Effects of oestrogen on mouse E-cadherin-deficient GC
We thus focused on mestranol, the most potent of the 1535 test compounds through the four steps of the screenings and one of the oestrogen derivatives frequently used for hormone replacement therapy. Observing that the 27 candidate substrates extracted by the second screening contained two oestrogen drugs (Fig. 3d), we retrospectively reanalysed the results of the first screening (Supplementary Table 6), and found that the 195 hit compounds more than 1 SD below the mean of ΔCV included six compounds with oestrogen-like function, such as 17β-oestradiol and tamoxifen, while the remaining compounds included twelve, implying a class effect of oestrogen on the viability of the GC cell lines (P = 0.0193, Fisher's exact test). This hypothesis was encouraged by the correlation between E-cadherin mutation and sensitivity to androgen receptor antagonists in this (Supplementary Figure 4A) and the previous comprehensive drug screening. 19 We then explored the molecular mechanism and sought the oestrogen drugs with the most selective toxicity toward the GC cell lines (Fig. 5a, Supplementary Figure 5A). Exposure not only to 17α-ethinyl oestradiol (a metabolite of mestranol) and 17β-oestradiol but also to tamoxifen and raloxifene (recently reclassified as selective oestrogen receptor modulators) abrogated the viability of the round GC cell lines, similarly to the results with mestranol. The two homologues of oestrogen receptors (ERs), ERα and ERβ, have different distributions and functions among normal tissues as well as neoplasms, including stomach cancer. 15 Propylpyrazole triol (an ERα agonist) showed similar cytotoxicity against the GE and GC cell lines, whereas diarylpropionitrile (an ERβ agonist) specifically killed the GC lines, suggesting that ERβ mainly mediates cell death of the GC cell lines. These data were supported by the higher expression levels of ERβ in the GC than in the GE lines (Fig. 5b), and are consistent with the preventive effects of ERβ on digestive system carcinogenesis. 15 Compared with normal gastric mucosae of Atp4b-Cre−;Cdh1loxP/loxP;Trp53loxP/loxP mice, ERβ was overexpressed in DGC of the DCKO mice at the RNA level (Supplementary Figure 5B). Oestrogen-induced cell death occurs through an increase in proapoptotic genes 20 or DNA double-strand breaks. 21 Mestranol, 17β-oestradiol and tamoxifen triggered apoptosis in the round GC cell lines, as shown by flow cytometric analysis with Annexin V-fluorescein isothiocyanate (FITC) and PI co-staining (Fig. 5c). These events were accompanied by a significant increase in phosphorylation of H2A.X, not by cleavage of caspase 3 (Fig. 5d), and the irreversible caspase inhibitor z-VAD-FMK could not rescue the cytotoxicity in the GC cells (Supplementary Figure 5C). Taken together, oestrogen drugs could induce DNA damage specifically in the GC cell lines via ERβ, and mestranol was the best candidate among them. We then orally administered mestranol (0.5 mg/kg/day) to nude mice with palpable inoculated tumours of the MDGC7 cells, and observed tumour-growth inhibition (Fig. 5e). The effects of mestranol on sphere-forming efficiency were comparable to those on cell proliferation in the round GC cells (Fig. 4a and Supplementary Figure 5D), indicating that mestranol exhibits toxic activity but does not specifically inhibit stem cell-like properties.
Moreover, the median survival time of the female DCKO mice was significantly longer than that of the male (P = 0.001, the log-rank test), implying the tumour-suppressive roles of oestrogen in DGC of the DCKO mice (Fig. 5f).
Effects of oestrogen on human E-cadherin-deficient GC cells
We examined whether oestrogen drugs could exert therapeutic effects on human E-cadherin-deficient GC. We initially assessed the distribution and expression of E-cadherin in several human gastric cancer (HGC) cell lines at the protein level (Fig. 6a, Supplementary Figure 6A), as well as the mutation status at the mRNA and DNA levels (Supplementary Figures S6B and C), and divided them into three groups: E-cadherin-intact (MKN74 and MKN7), E-cadherin-mutant (MKN45 and KATOIII) and E-cadherin-low (AGS and HSC58). Treatment with four drugs with oestrogen-like activity, including mestranol, selectively impaired the viability of the E-cadherin-mutant and -low HGC cell lines (Fig. 6c). Mestranol triggered apoptosis, not cell cycle arrest, in the E-cadherin-deficient HGC cell lines, similarly to 17β-oestradiol and tamoxifen, and this was preceded by DNA damage (Fig. 6d-f, Supplementary Figures S6D-S6F). Mestranol administration suppressed transplanted tumour growth of the E-cadherin-mutant and E-cadherin-low cell lines, but not that of the E-cadherin-intact ones (Fig. 6g, Supplementary Figure 6G). Thus, the cellular responses of E-cadherin-deficient GC cells to oestrogen appear to be highly conserved across two species.
To investigate the relationship between gender, histological subtype, E-cadherin/p53 status and prognosis in clinical samples of GC, we used public data provided by the Cancer Genome Atlas Research Network (TCGA), which included 176 tumours (76 DGC and 100 IGC) with gene mutation, copy number alteration, DNA methylation, mRNA expression and pathological data. Surprisingly, the female DGC patients had better overall survival than the male (P = 0.006, log-rank test), although there was no difference between the female and male IGC patients (P = 0.725), as shown in Supplementary Figure 7A. Next, we divided the 176 clinical specimens of the TCGA data set into E-cadherin-deficient and -intact groups, with and without somatic mutation, biallelic loss, promoter hypermethylation (β-value ≥ 0.5) or low expression (RSEM < 2000) of the CDH1 gene, respectively. In the 76 DGC samples, 10 of 29 E-cadherin-deficient tumours harboured somatic mutation or biallelic loss of the TP53 gene, whereas 19 of 47 E-cadherin-intact tumours did, suggesting that E-cadherin/p53-deficient GC accounts for approximately 15% of DGC and is therefore not rare (P = 0.636, Fisher's exact test). Although not in the E-cadherin-intact group (P = 0.362, log-rank test), the overall survival rate of female patients was higher than that of males in the E-cadherin-deficient group (P = 0.048), consistent with the prognosis of the DCKO mice (Fig. 5f and Supplementary Figure 7B).
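The survival comparisons reported here are standard log-rank tests. As an illustration only, the sketch below shows how such a comparison could be run in Python with the lifelines package on hypothetical follow-up data; it is not the authors' actual pipeline or data.

```python
# Hypothetical log-rank comparison of female vs male overall survival.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_female = rng.exponential(60, size=40)   # follow-up months, simulated
t_male = rng.exponential(40, size=36)
e_female = rng.integers(0, 2, size=40)    # 1 = death observed, 0 = censored
e_male = rng.integers(0, 2, size=36)

res = logrank_test(t_female, t_male,
                   event_observed_A=e_female, event_observed_B=e_male)
print(f"chi2 = {res.test_statistic:.2f}, p = {res.p_value:.3f}")
```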
We performed Single Sample Gene Set Enrichment Analysis (ssGSEA) on tumour samples with high and low expression levels of CDH1, termed CDH1-high and CDH1-low, using two gene sets associated with oestrogen signal transduction, ESTROGEN_RESPONSE_EARLY and ESTROGEN_RESPONSE_LATE. The ssGSEA scores of the CDH1-low group for both gene sets were significantly lower than those of the CDH1-high group (Supplementary Figure 7C), implying that only E-cadherin-deficient GC cells in which the oestrogen signal pathway is inactivated can survive.
DISCUSSION
The United States National Cancer Institute 60 human tumour cell lines anti-cancer drug screen (NCI60) emerged in the late 1980s as a powerful drug discovery tool. 22,23 However, cellular reactions in vitro do not reflect those in vivo, for two reasons: cell lines have changed genetically, epigenetically and biologically under culture conditions, and cell lines no longer maintain the original properties present in the primary cancer. In an effort to address these shortcomings, patient-derived xenografts transplanted into immunodeficient rodents have recently been used for preclinical modelling. These models are appropriate for validating the in vivo effects of candidate drugs, but not for screening chemical libraries, owing to the difficulty of primary culture. Since tumour genotype and epigenotype variation within tumours and between patients (intratumour and interpatient heterogeneity) is non-negligible, large-scale screenings are required for the discovery of potent agents targeting common molecular mechanisms. We solved these problems by a mouse model-based study: establishing a genetically engineered mouse model recapitulating human cancer, deriving cell lines from the mouse tumour, performing a synthetic lethal screening by using the cell lines, and validating the tumour-suppressor activity of candidate drugs against human cancer. Early-passage cells from cancer of a genetically engineered mouse model harbour the biological and molecular traits of the original tumours, and a few of them are sufficient for screening assays owing to their similar genetic background. In addition, by using mice with carcinoma in situ, the anti-cancer effects of candidate drugs can be precisely evaluated on a platform mimicking the tumour microenvironment, including cancer-associated fibroblasts and immune cells.

Fig. 6 Effects of oestrogen drugs on human E-cadherin-deficient gastric cancer cells. a Representative fluorescence microscopy images of the HGC cells stained with antibodies against E-cadherin (red). Nuclei were counterstained with DAPI (blue). b Representative phase-contrast images. c Dose-response curves of oestrogen drugs against the HGC cell lines. Bars show standard deviations. P-value was calculated from the ANOVA table. NS not significant. d Flow cytometric analysis with PI staining. The left and right panels show the representative histograms and percentage graphs of cells in each cell cycle, respectively. e Flow cytometric analysis with Annexin V-FITC and PI co-staining. f Immunoblots of phosphorylated H2A.X after treatment with vehicle (V), mestranol (M) and 17β-oestradiol (E) in the E-cadherin-mutant (MKN45) and -low (AGS) cells. g Tumour-growth curves of the E-cadherin-mutant (MKN45) and -low (AGS) cells in nude mice under treatment with mestranol (0.5 mg/kg/day, orally administered). Bars show standard errors. P-value was calculated by Welch's t-test.
Currently, most molecularly targeted drugs are inhibitors of driver oncogenes, because it appears more straightforward to repress a hyperactivated oncogene than to restore the function of inactivated tumour suppressor genes. 24 Among several promising strategies for drug development against cancers with mutations in tumour suppressor genes, we applied a comprehensive approach for E-cadherin-deficient DGC: a screening of a collection of well-established annotated compounds. This process of finding new uses outside the scope of the original medical indication for existing drugs is known as repositioning, 25 and offers two significant advantages over conventional de novo drug discovery and development. First, from the molecular function of compounds selectively killing cancer cells, the addicted signal pathway can be predicted. Second, safer and shorter routes to the clinic are possible because in vitro and in vivo screenings, lead optimisation and chemical toxicology have already been completed. Excellent examples of this repositioning concept are thioridazine and salinomycin. Thioridazine, a Food and Drug Administration (FDA)-approved antipsychotic dopamine receptor antagonist, was reprofiled as an anti-CSC drug from libraries of known compounds, and dopamine D2 receptor antagonism was shown to account for the loss of stemness. 16 Since Gupta et al. first reported it, several other groups have noted that salinomycin attenuates CSC-like properties in various types of cancer, and have shown that this potassium ionophore overcomes ABC transporter-mediated multidrug resistance and inhibits oxidative phosphorylation in mitochondria, on which CSCs rely more than on glycolysis. Our screening assay with 1535 well-characterised compounds also revealed that differentially expressed ERβ could be a potential target in E-cadherin-deficient DGC, and provided convincing evidence for the clinical use of oestrogen drugs, including mestranol, for the prevention and treatment of this subtype of GC.
Our mouse model-based study also hints at a favourable prognosis for female patients with DGC. It may sound odd that oestrogen protects against the development of DGC, which is believed to be encountered in young female patients, although there is a strong and enigmatic male dominance in the incidence of GC, with a male-to-female ratio of approximately 2:1. 15 The cumulative risk of hereditary DGC by age 60 was estimated to be higher for females than for males, but not significantly. 26 Patients with DGC younger than 40 years showed a nearly equal male-to-female ratio (1.3:1), whereas patients older than 40 showed a male preponderance (2.3:1) in a series of 66 GC probands with germline CDH1 mutations. 27 In a large-scale cohort study, the diffuse type was more common in males than in females for nearly all age groups. 28 Females had a significantly lower risk of dying than males among patients whose tumours were poorly differentiated. 29 There is also mounting evidence for the clinical potency of exogenous oestrogen in GC. A seemingly protective effect of hormone replacement therapy, such as mestranol treatment, on GC risk has been reported in several human studies from different populations. 15 In animal models, administration of oestrogen to N-methyl-N'-nitro-N-nitrosoguanidine (MNNG)-treated rats 30 and Helicobacter pylori-infected INS-GAS mice 31 decreased the incidence of GC. Thus, an intrinsic tumour-suppressor role of oestrogen in E-cadherin-deficient DGC is supported by these epidemiological and experimental findings.
Since tamoxifen administration in mice causes extensive parietal cell damage through active acid secretion by H,K-ATPase, 32 it is possible that oestrogen analogues are also toxic to the GC cells, which originated from parietal cells of the DCKO (Atp4b-Cre+;Cdh1loxP/loxP;Trp53loxP/loxP) mice. In immunohistochemical analysis, however, cleaved caspase 3 was not detected in parietal cells of the DCKO mice orally treated with mestranol (0.5 mg/kg/day) for a week, suggesting few adverse effects of mestranol on parietal cells (Supplementary Figure 8A). Because RT-PCR analysis demonstrated that the expression levels of Atp4a and Atp4b, encoding H,K-ATPase subunits, were much lower in the GE and GC cell lines than in normal gastric mucosae containing parietal cells (Supplementary Figure 8B), it did not seem that mestranol specifically eliminated the GC cells by inducing acid secretion through H,K-ATPase. These two lines of evidence disprove the hypothesis above.
We here screened a compound library composed of off-patent drugs and highlighted a single potent drug, mestranol, targeting E-cadherin-deficient DGC. It would also be of interest to investigate how the other 26 candidates listed in Fig. 3d exert their specific toxicity toward this subtype of GC, and to perform screenings with other libraries containing novel molecularly targeted agents by using this platform.
"year": 2018,
"sha1": "4dad49a0d1b7476cf6f159f525269a2957a93ac1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41416-018-0008-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dee3e9e3d88f1c3a5c13f1fa0a04564688a35761",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Research of creating mold for crayons with use of additive manufacturing technique
Crayon manufacturing often uses large steel molds. These molds are good for mass production, but they tend to be expensive and unsuitable for low production volumes. The goal of this paper is to determine whether such molds can be created by additive manufacturing, which materials are suitable for the job, and how the final products compare with products on the market manufactured by traditional methods.
Introduction
Nowadays, companies try to offer products that differ from those of their competitors. In the case of crayons, this could mean using more ecological materials, providing different colors, or offering interesting shapes.
Crayons, as wax-based drawing media, are around 200 years old. Over that time there has been a lot of development and improvement in the materials and manufacturing techniques used. [1] Depending on the type of crayon, or any wax-based drawing medium, there are multiple ways of manufacturing. The most common techniques for our type of crayon are molding and extruding, mostly done in big molds for high-volume production. [2] If a company only wants to manufacture small quantities to test such variations, creating a custom mold from materials like aluminium or copper is not a financially viable option. [3] Additive manufacturing, however, can create products that are relatively cheap to manufacture. Designers can also explore new shapes and are no longer limited by manufacturing techniques that rely on removing material [4]. More advanced devices can also use a wide range of materials with a wide range of properties such as strength and melting temperature. Tolerances of parts manufactured by additive manufacturing can also be very good [5].
There are, however, several factors that we need to consider when designing a custom mold for crayons. The first is the material that is going to be cast into the mold. Crayons on the market are manufactured from paraffin mixed with color pigment; they can also contain glitter or perfume. They are naturally greasy and do not mix with water. The melting temperature depends on the exact composition but is generally under 60 °C. The material is mixed in the factory and then pumped into molds of the desired shape. With one mold, up to 2400 crayons can be manufactured at once. An example of such a device is shown in figure 1. [6] The mold is also required to withstand multiple cycles of temperature change, and additional vents and paths for regulating temperature may be required. Quick changes in temperature can, however, cause internal stress, which may damage the product and shorten the lifespan of the mold. [7]
Figure 1. Example of machine for mass production of crayons
The material for our crayons is different. The main ingredients are sunflower oil, Solid Ink and starch. The mixed material starts to melt at around 90 °C, and a pouring temperature of at least 110 °C is required. This will also be our target temperature when creating the mold.
Another requirement concerns the dimensions and shape of the crayons, which were given as a block with a triangular base of 50 mm perimeter and a height of 100 mm. This requirement was later slightly modified by adding a small radius on the edges due to manufacturing constraints.
The next requirement was price: one mold should be manufactured within a budget of 200 €, excluding research and development. The expected production is 2000 to 6000 crayons in various colors over 2-3 months.
Material selection
The main problem is the melting temperature of the casting material. It is unreasonable to use materials such as ABS or PLA: even though their melting temperatures are over 200 °C, their glass transition temperatures are around 60-80 °C. The same limitation applies to materials like Nylon, CPE or PET-G. At the pouring temperature, such materials would change the shape of the mold after the first batch. There is also a limitation on what we can manufacture. The materials considered suitable for use at high temperatures and compatible with the 3D printers at our department are listed below.
Onyx
Nylon reinforced with micro carbon fibers is the main proprietary material used in Markforged printers. It has higher strength and higher stiffness than nylon, but more importantly it has a thermal resistance of up to 140 °C. It can also produce a smooth surface finish along the Z axis. It is, however, expensive to print.
HSHT Fiberglass
By reinforcing Onyx with High Strength High Temperature (HSHT) Fiberglass, we can push the temperature resistance up to 150 °C. This reinforcement also improves the strength of the printed part. The disadvantages are the higher price of the material and the significantly longer printing time; we therefore reserve it for the final production mold. [8]
TPU (TPE-U)
Thermoplastic polyurethane is a material with high elasticity. The main idea behind using this polymer is to make it easier to remove the final cast from the mold without damaging the final shape of the crayon. It also has good oil and grease resistance. This material is elastic at room temperature, making such molds similar to silicone-based ones.
Nylon CF15 Carbon
This material from Fillamentum is a carbon-fiber-reinforced nylon with high temperature resistance. Other advantages are its chemical resistance and low thermal expansion; features like high hardness and high stiffness are not important in this case. This material cannot be printed with a regular print head [9]: it requires a special print head because of the abrasive wear caused by the chopped carbon fibers inside the filament. It also requires a closed, warm environment while printing. While this material has very good thermal and strength properties, the final surface is rough. Although it is similar to Onyx, it has a different composition and is used on a different printer. It also uses a bigger nozzle diameter of 0.6 mm, while Onyx is printed with a 0.4 mm nozzle.
PC / PTFE
Polytetrafluoroethylene is a compound mostly found in the food industry, because it is non-reactive, naturally lubricious, and thermally resistant. For manufacturing on a 3D printer it is mostly combined with polycarbonate as a base. Different manufacturers, however, give different printing recommendations, but in general the nozzle of the 3D printer needs to be able to heat up to 350 ℃, the heated bed up to 110 ℃, and good thermal regulation of the printing area is also required. [10,11]
Designs
We experimented with multiple options and designs, where each version provided input for improvements in the next version. The main factor in each design was to create the mold in a way that it could be manufactured by additive manufacturing. The price of each mold was then calculated as the price of material plus the price of printing, where the printing price is based on the time needed to finish the print and on the specific printer used. The sketch below makes this pricing rule concrete.
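The sketch is purely illustrative; the per-gram and per-hour rates are placeholders, not the actual figures behind the 12 €, 35 €, 50 €, 80 € and 150 € prices quoted in this paper.

```python
# Illustrative mold-cost estimate: material cost + print time x machine rate.
def mold_cost(material_g: float, eur_per_g: float,
              print_h: float, eur_per_h: float) -> float:
    return material_g * eur_per_g + print_h * eur_per_h

# Hypothetical example: 300 g of filament at 0.20 EUR/g, 40 h at 1.0 EUR/h
print(f"estimated mold cost: {mold_cost(300, 0.20, 40, 1.0):.0f} EUR")
```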
Unsuccessful versions
We started by experimenting with existing models provided by the student company that is developing these new crayons. The first version was tested with TPU, with the mold shown in figure 2. The mold consists of two mirrored parts and, if successful, would cost around 12 € per mold.
Figure 2. First version of mold
The main problem was manufacturing the mold with sufficient quality: we were unable to produce mold walls that would meet the required standards. A second problem appeared during the pouring process. In the second version we therefore created a three-part mold to produce the required shape, and also changed the material to Nylon CF15 Carbon.
Figure 3. Second version of mold
With this variant we improved the print quality of the walls, as the walls were printed parallel to the build plate. We also verified that the mold is able to withstand the required temperatures. The price for this type of mold is around 35 €; the main increase in price is due to the more expensive material and a different printer.
Figure 4. Tests of second version mold
Based on these results, we decided to redesign the mold around the manufacturing requirements of this specific material. The reason for abandoning this mold was that the crayon broke in half while being removed from the mold. Handling was also difficult, and this solution was not suitable for the planned production volume. This model did, however, prove that the crayon will not stick to the walls of the mold and that the liquid will not solidify too early.
Working prototype
The main change in the next variant was to make the mold from one part and to use a custom ejection tool for removing the crayon from the mold. We also tried to draft the walls at a small angle under 0.5°; this changes the final diameter at the end very slightly, but it could help with ejection. We also tested whether the tip of the crayon could be shaped by the ejection tool; however, we were unable to manufacture such an ejection tool with sufficient quality, so we used an ejection tool without a tip. We also tried different printing orientations of the mold. This variant proved to work reasonably well. A large force is, however, required for ejection; this can be reduced by using sunflower oil as a lubricant and by ejecting the crayon before it has completely cooled down. For now, this is considered an acceptable drawback. We can also smoothen the walls inside the mold by switching the material to Onyx. The final crayon is shown in figure 7. The print orientation does not have a significant impact on the surface finish of the crayon. The price of this working prototype is 50 €: while it uses a similar amount of material as the previous designs, the print time is significantly longer due to the small layer height required.
Future development
The final challenge was to cast multiple crayons at the same time. Based on our experience with the one-part mold, we designed two molds. The first proposed model is shown in figure 8, with its custom ejection tool in figure 9. The main advantage is that we keep the flexibility to scale up manufacturing in the future. This prototype is currently under testing and optimization. The price of this assembly, excluding screws and bolts, is around 80 € in Nylon CF15 Carbon, or 150 € using Onyx with HSHT Fiberglass. This design was accepted for future development. We are planning to use the HSHT material as our final solution to ensure the longevity of the mold; however, Nylon CF15 Carbon is sufficient for further testing.
Our second design contains 16 chambers, each with a spiral channel for liquid cooling or heating, depending on the liquid temperature. This design was created as a concept for better thermal management during crayon casting. It is, however, expensive to manufacture, and thermal management can be achieved in cheaper ways. As a concept, the design can be used for further investigation if necessary.
The main parts of this mold are the input and output for the liquid, the distribution pipes, and the spirals around each chamber. Front and top views with the internal walls are shown in figures 10 and 11.
Conclusion
The aim of this work was to create a mold for crayons by additive manufacturing. One of the main concerns was selecting a material that can withstand the temperatures required when casting the material into the mold. We then proceeded with manufacturing tests, where we found that, because of difficulties in the manufacturing process, some materials are not suitable for this specific task. After this selection, we looked for the best manufacturing parameters to achieve an optimal result, so that the final crayon can be easily ejected from the mold. We experimented with different shapes and different printing directions, and after finding an acceptable solution we continued with the final design of the mold.
Our first successful test was a one-piece mold with a separate ejection tool, manufactured from carbon-fiber-reinforced nylon. The main problem with this design was finding the correct ejection temperature, which can be managed by timing or by reheating the mold itself. Based on that, we continued with two separate designs to scale up production, one of which was approved for future development.
For manufacturing multiple crayons at once, we chose a hexagonal configuration, because the shape of the crayons allowed us to create the maximum number of chambers per amount of material used. This mold is still in testing. The main problem is the force needed to eject the finished crayons from the mold; this can be improved by a lever mechanism or by temperature management. Using sunflower oil as a lubricant also helped lower the ejection force. This mold is also well suited for scaling up production by manufacturing more copies if necessary. Six chambers per mold is currently the maximum suitable for this specific configuration; for molds with more than six chambers, more research is required.
In comparison with crayons on the market, the surface finish of our crayons is not as smooth, but considering that the crayons will be wrapped in paper, this is not an issue. The surface finish can also be improved by switching the mold material to Onyx; the 3D printer using Onyx can also print with higher accuracy, which will further increase the quality of the print. Using PTFE is currently not a suitable option due to the difficulty of maintaining the correct printing parameters.
For future development we expect only small changes in the overall design, because the form has been proven and tested. Further development is still required to optimize for ease of use. However, the concept works, so for now we are not considering further fundamental research.
"year": 2021,
"sha1": "5ccdfdad2044c824d73709155ec0eeee4580f25b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1199/1/012094",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5ccdfdad2044c824d73709155ec0eeee4580f25b",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Pathogenicity of a currently circulating Chinese variant pseudorabies virus in pigs.
AIM
To test the pathogenicity of pseudorabies virus (PRV) variant HN1201 and compare it with that of the classical PRV Fa strain.
METHODS
The pathogenicity of the newly emerging PRV variant HN1201 was evaluated using different inoculation routes, virus loads, and ages of pigs. The classical PRV Fa strain was then used as a comparator to determine relative pathogenicity. Clinical symptoms after virus infection were recorded daily, and average daily body weight was used to measure the growth performance of pigs. At necropsy, gross pathology and histopathology were used to evaluate the severity of tissue damage caused by virus infection.
RESULTS
The results showed that efficient infection with PRV HN1201 was achieved via intranasal inoculation at 10⁷ TCID50, and that the virus is highly pathogenic to 35- to 127-day-old pigs. Compared with the Fa strain, pigs infected with HN1201 showed more severe clinical symptoms and pathological lesions. Immunohistochemistry revealed that HN1201 antigen was more abundantly distributed across a wider range of organs.
CONCLUSION
All of the above results suggest that PRV variant HN1201 was more pathogenic to pigs than the classical Fa strain.
INTRODUCTION
Pseudorabies virus (PRV), also known as Aujeszky's disease virus or Suid herpesvirus type 1 (SuHV-1), is the causative agent of pseudorabies (PR). Belonging to the family Herpesviridae, subfamily Alphaherpesvirinae, and genus Varicellovirus, the virus causes substantial economic losses in the pig industry worldwide [1][2][3] . The PRV genome is a double-stranded linear DNA which is about 143 kb in size and contains about 70 ORFs [4,5] . This pathogen can infect numerous mammals, including carnivores, ruminants, and rodents, yet pigs are the only natural host and reservoir of the virus [6,7] . PRV infection is characterized by neurologic symptoms and death in newborn piglets, respiratory disorders in older pigs, and reproductive failure such as stillbirths and abortions in sows. Like other alphaherpesviruses, PRV can establish a lifelong latent infection in the peripheral nervous system of infected pigs. Latently infected pigs can act as a source of reinfection when the latent viral genome reactivates spontaneously and infectious virus is produced [8] . Attenuated live or killed PRV vaccines have played a critical role in the control and eradication of PR. Bartha-K61, a vaccine imported from Hungary, has been widely used in China since the 1970s and was reported to provide complete protection from field virus infection [2] . Nevertheless, since October 2011, severe PRV outbreaks have occurred on pig farms and spread rapidly to the northern parts of China [9,10] . Most of the infected farms had used the Bartha-K61 vaccine according to the manufacturer's instructions, and the serum samples obtained from the infected pigs had a considerable positive rate of gE antibodies detected by ELISA (IDEXX Laboratories, Westbrook, United States) [10,11] .
The affected pigs presented with multiple clinical signs, including high fever (usually ≥ 40.5 ℃), depression, anorexia, respiratory distress, shivering, and systemic neurological symptoms [11,12] . Pathologic examination of viscera samples collected from dead pigs from different provinces revealed consolidation, edema, and hemorrhage in the lungs, as well as necrosis in the kidneys, indicating that newly-emerging PRV variants may have higher pathogenicity than the classical strains [13] . PRV infection in vaccinated pig herds indicates that the traditional Bartha-K61 vaccine could not provide complete protection against the PRV variants currently prevalent in China [11,14] . Accordingly, it is imperative to study the pathogenicity of the currently circulating PRV variant strains and develop new, effective vaccines to tackle the problem.
In this study, we first established a PRV variant HN1201 infection model in pigs according to different inoculation routes, virus loads, and pig ages. The characterized PRV variant HN1201 was then compared with the virulent classical PRV strain Fa to determine pathogenicity.
Viruses and cells
The PRV variant HN1201 was previously isolated from the brain of infected pigs in Henan province [12] . Briefly, the infected pig brain sample was homogenized and the supernatant of the homogenate was passed through a 0.22 μm filter. The filtered supernatant was inoculated onto a PK-15 cell monolayer until the appearance of cytopathic effect (CPE) after 3 d. The virus was harvested after two cycles of freeze-thaw and stored at -80 ℃ until use. The classical PRV Fa strain was purchased from the Institute of China Veterinary Medicine Inspection [15] . Permissive PK-15 cells were cultured in Dulbecco's Modified Eagle's Medium supplemented with 5% fetal bovine serum.
Experiment design and animals
To establish a PRV HN1201 infection model in pigs, in the first animal experiment, twenty 60-d-old pigs, five 35-d-old pigs, and five 127-d-old pigs were used to evaluate the pathogenicity of the virus by different inoculation routes, virus loads, and ages of pigs. The twenty 60-d-old pigs were randomly allocated into the first four groups (Table 1). Pigs in groups 1 and 2 were inoculated with 10^7 TCID50 of the PRV HN1201 strain via the intramuscular (im) and intranasal (in) routes, respectively. Pigs in groups 3 and 4 were inoculated via the intranasal route with 10^6 TCID50 and 10^5 TCID50 of the PRV HN1201 strain, respectively.
To test the susceptibility of pigs of different ages to the virus, five 35-d-old pigs in group 5 and five 127-d-old pigs in group 6 were inoculated via the intranasal route with 10^7 TCID50 of PRV HN1201 (Table 1).
In the second animal study, ten 56-d-old pigs were randomly divided into two groups with five pigs in each group. Pigs in group Ⅰ were inoculated with 10^7 TCID50 of PRV HN1201 via the intranasal route, and pigs in group Ⅱ were inoculated with the classical PRV Fa strain at the same dose and via the same route.
All pigs used in the above two animal trials were confirmed free of PRV infection by gB- and gE-ELISA kits (HerdChek PRV, IDEXX, United States) and by PCR. All pigs were also free of porcine reproductive and respiratory syndrome virus, classical swine fever virus, and porcine circovirus 2. Experimental pigs in different groups were isolated in separate rooms throughout the study. After virus inoculation, rectal temperature and clinical signs were recorded on a daily basis. At 14 d post-inoculation (dpi), all surviving pigs were humanely euthanized and necropsied, and different organ samples were collected. The collected samples were subjected to pathological examination and fixed in 10% neutral-buffered formalin for immunohistochemistry examination. All animal trials in this study were approved by the Animal Care and Ethics Committee of the China National Research Center for Veterinary Medicine.
Histopathology and immunohistochemistry
Representative samples were cut from the fixed tissues and processed into paraffin blocks. Sections approximately 3-4 μm thick were cut and mounted on slides. Duplicate sections were used for hematoxylin and eosin (H and E) staining and immunohistochemistry staining, as previously described [16] . The H and E staining was performed automatically by a Leica staining machine according to standard procedures. Immunohistochemistry staining was performed as follows. The prepared paraffin sections were mounted on APES-treated slides and incubated overnight at 37 ℃. The slides were dewaxed via the routine method on the Leica automatic staining machine. The samples were blocked with 3% peroxide-methanol for 20 min at room temperature to ablate endogenous peroxidase and rinsed twice with phosphate-buffered saline (PBS). The following steps were carried out in a moisture chamber: (1) Samples were incubated with blocking buffer containing normal horse serum (Beijing Zhongshan Jinqiao, China) diluted 1:20 in PBS at 37 ℃ for 20 min; (2) The horse serum was discarded and samples were incubated in PRV monoclonal antibody 3B5 solution (Beijing Tian Tech Biotechnology, China) diluted 1:800 in PBS (pH 7.3) at 37 ℃ for half an hour and then at 4 ℃ overnight; (3) After rinsing with PBS three times, HRP goat anti-mouse IgG (BTI, United States) diluted 1:100 in PBS (pH 7.3) was added, and the slides were incubated for 1 h at 37 ℃; (4) After rinsing with PBS three times, the slides were incubated with AEC and kept at room temperature in the dark for 5-10 min; (5) After rinsing with PBS three times, the slides were stained with freshly prepared hematoxylin (1:10 dilution) for 10 s; (6) The unbound hematoxylin was washed away with running water, and the slides were placed in water for 2 min; and (7) The slides were allowed to dry naturally and then mounted with a water-soluble mounting medium before visualization in photomicrographs at 200× magnification. Results were scored as negative (-) or positive (+), with positive signals graded as low (+), moderate (++), or intense (+++) according to the intensity of staining.
Animal care and use
The animal protocol was designed to minimize pain or discomfort to the animals. The animals were acclimatized to laboratory conditions (23 ℃, 12 h light/12 h dark, 50% humidity, and ad libitum access to food and water) for two weeks prior to experimentation. All animals were euthanized by barbiturate overdose (intravenous injection, 150 mg/kg pentobarbital sodium) for tissue collection. All procedures involving animals were reviewed and approved by the Institutional Animal Care and Use Committee of the National Research Center for Veterinary Medicine (IACUC protocol number: 2015010402).
Statistical analysis
Differences in body temperature and body weight between the two infected groups in the second animal trial were determined using the t-test in GraphPad Prism 5.0 software (San Diego, CA). Differences were considered statistically significant at P < 0.05.
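For readers who prefer to reproduce this comparison outside GraphPad, the sketch below performs the equivalent unpaired t-test in Python. The body-weight numbers are hypothetical placeholders, not study data.

```python
# Minimal sketch of the two-group comparison described above,
# assuming hypothetical body-weight data (kg); not study values.
import numpy as np
from scipy import stats

group_hn1201 = np.array([17.2, 16.8, 16.1, 15.9, 15.4])  # HN1201-infected
group_fa = np.array([17.5, 17.9, 18.2, 18.6, 18.4])       # Fa-infected

# Unpaired two-sample t-test, mirroring the analysis in GraphPad Prism
t_stat, p_value = stats.ttest_ind(group_hn1201, group_fa)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference considered statistically significant (P < 0.05)")
```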
Experimental infection of PRV HN1201
For routes of infection, all pigs in groups 1 and 2, inoculated with 10^7 TCID50 of the PRV HN1201 strain via the intramuscular and intranasal routes, respectively, showed PRV-specific clinical symptoms such as fever (40.0 ℃-41.5 ℃), respiratory distress, excessive salivation, and neurological signs including convulsion and ataxia. All pigs in group 2 were euthanized due to moribund condition from 5 to 7 dpi. In comparison, three pigs in group 1 were euthanized from 5 to 7 dpi and the other two pigs survived until the end of the study (terminated at 14 dpi, Table 1). All pigs in group 3 (10^6.0 TCID50) showed severe respiratory symptoms and neurological signs, as described above, with two being euthanized at 6 dpi. Compared to pigs in group 3, respiratory symptoms such as coughing and shivering were more often observed in group 4 (10^5.0 TCID50). One of the five pigs in group 4 showed neurological signs and was euthanized by the end of the study (terminated at 14 dpi). Young piglets are more susceptible to PRV infection than older pigs [6] . To determine the pathogenicity of PRV HN1201 in pigs of different ages, 35-, 60-, and 127-d-old pigs were inoculated with 10^7 TCID50 of virus. After virus inoculation, pigs of all ages showed the clinical symptoms described above. All pigs in group 5 (35-d-old pigs) were euthanized from day 4 to day 6 and all pigs in group 6 (127-d-old pigs) were euthanized from day 5 to day 8 due to moribund condition. Therefore, unlike the classical PRV strains, this PRV variant strain showed high pathogenicity in pigs of different ages.
Comparison of pathogenicity between PRV variant HN1201 and classical Fa strain
Since the above results showed that PRV variant HN1201 has high pathogenicity in pigs of different ages, the classical PRV Fa strain was used for comparison. To exclude bias due to the age of the experimental pigs, ten 56-d-old healthy pigs were randomly assigned to two groups, with five pigs in each group. Pigs in groups Ⅰ and Ⅱ were inoculated with the PRV HN1201 and Fa strains, respectively, via the intranasal route at 10^7 TCID50.
As expected, all pigs in group Ⅰ displayed high fever, anorexia, depression, respiratory symptoms, and neurological signs as described in the first animal study.
In contrast, four pigs in group Ⅱ had no respiratory or neurological symptoms aside from sneezing (Table 2), while only one pig showed the same clinical signs as pigs in group Ⅰ. Gross pathology examination at necropsy showed that PRV HN1201 infection led to severe pulmonary consolidation and necrosis in the lung (Figure 1A), encephalic hemorrhage in the brain (Figure 1B), and hemorrhage and necrosis in the tonsil (Figure 1C). By contrast, pigs infected with PRV Fa showed only slight hemorrhage in the lung tissue (Figure 1D) and had no obvious changes in the brain or tonsil (Figure 1E and F). No other obvious pathologic changes were found in the heart, liver, spleen, or kidney tissues after infection with either virus. There was no significant difference in rectal temperature between the two groups in the first 5 d of the study (Figure 2A). Pigs in group Ⅰ had significant body weight losses compared to pigs in group Ⅱ at 6 dpi (Figure 2B). At 5 dpi, two pigs were euthanized in group Ⅰ and one pig was euthanized in group Ⅱ. At 6 dpi, another three pigs were euthanized in group Ⅰ, and all remaining pigs in group Ⅱ survived to the end of the study.
Organ samples of pig tonsil, lung, cerebellum, lymph nodes, kidney, and liver were collected for histological examination and immunohistochemistry staining. Typical PRV infection is characterized by necrosis in multiple organs. As shown in Figure 3, necrosis, congestion, or hemorrhage was observed in all of the above organs of PRV HN1201-infected pigs after H&E staining (Figures 3A-G), with neuronal intra-nuclear inclusions also being observed in the brain. Compared to HN1201 infection, PRV Fa-infected pigs showed only neuronal degeneration and necrosis in the brain, and Purkinje cell degeneration and necrosis in the cerebellum (Figure 3H and Ⅰ). In accordance with the histopathology results, immunohistochemistry staining showed strong positive signals in all of the above organs obtained from pigs infected with the HN1201 virus, whereas only the brain and cerebellum samples of one PRV Fa-infected pig revealed positive results (Table 3).
Table 2 Clinical manifestations of pseudorabies virus HN1201 and Fa infection
Each row represents one pig in the corresponding group.
DISCUSSION
Newly-emerging PRV variants have caused huge economic losses to the Chinese swine industry [10,13] . Recent studies have shown that PRV variants contributed to the recent outbreaks of PR, and the traditional Bartha-K61 vaccine could not provide complete protection against the emerging PRV strains [11,14] . Similar to classical PR, the disease is characterized by the sudden death of newborn piglets, respiratory and neurological symptoms in growing pigs, and stillbirth or the birth of weak piglets from sows. However, the pathogenicity of the newly emerging PRV variant had never been delineated and compared with classical PRV strains. Therefore, it is necessary to determine the pathogenicity of the current PRV variants before any control measures are implemented to control the disease.
PRV is tropic for both the respiratory and nervous systems of swine. Viral particles enter the sensory nerve endings that innervate the infected mucosal epithelium. Morbidity and mortality associated with PRV infection vary with host age, the animal's overall health status, and infectious dose [2] . In this study, we first tested the pathogenicity of PRV variant HN1201 by different routes of virus infection, virus loads for inoculation, and pig ages. Our results showed that intranasal infection is more effective than intramuscular infection when 10^7 TCID50 of virus is used for inoculation. Pigs infected with PRV HN1201 by the intranasal route showed more severe clinical symptoms and higher mortality rates than those infected via the intramuscular route, and virus loads were positively correlated with mortality rates. The pathogenicity of some other PRV variant strains has been studied recently [17] . In a study by Luo et al [17] (2014), pigs infected with 10^6 TCID50 of the PRV TJ strain by the intranasal route showed higher mortality than those given a lower dose or inoculated by other routes [10] . Differences in pathogenicity and mortality caused by different PRV viruses could be explained by the virus load for inoculation, viral strain, and breed of pigs, although these viruses share more than 99.0% similarity in their whole genome sequences. Besides routes of inoculation and virus load, PRV HN1201 could infect pigs from 35 to 127 d old with PRV-specific clinical symptoms, indicating that the PRV HN1201 strain is highly pathogenic to pigs.
To compare the pathogenicity of newly-emerging PRV variants with the classical PRV strain, the PRV HN1201 and Fa strains were used to infect pigs. Our results showed that HN1201-infected pigs had more severe clinical signs and higher mortality rates than Fa-infected pigs (5/5 vs 1/5). Pigs in the PRV HN1201-infected group displayed high fever, anorexia, depression, respiratory symptoms, and neurological signs. In comparison, four pigs in the PRV Fa-infected group had no respiratory or neurological symptoms aside from sneezing. Meanwhile, pigs infected with HN1201 had steady body weight loss compared with pigs infected with the Fa strain (Figure 2B). Retarded growth is more often observed in young piglets after PRV infection; loss of body weight in 56-d-old pigs after PRV infection has seldom been observed, which underscores the high virulence of HN1201. Gross pathological examination at necropsy revealed more severe damage to the lung, tonsil, brain, cerebellum, and lymph nodes in pigs infected with the HN1201 strain than in the Fa strain group. In line with the pathological results, histopathology examination showed remarkably obvious necrosis in multiple tissues, such as the tonsil, lung, brain, spleen, and liver in HN1201-infected pigs; in contrast, necrosis caused by PRV Fa infection was limited to the brain and cerebellum. Immunohistochemistry results also showed that PRV HN1201 infection led to more extensive virus antigen distribution in different organs with more intense staining, whereas Fa infection showed positive staining in only one cerebellum sample from one pig. Previous studies reported that inoculation of PRV through the nasal cavity resulted in virally-induced neuropathological lesions [2] . The kinetics and locations of lesion appearance were consistent with transneuronal spread of PRV from the nasal epithelium to synaptically-connected higher-order structures in the nervous system. The intense PRV antigen localization and severe lesions of the brain, tonsil, and lung coincided with the typical respiratory and neurological symptoms, and may be due to intranasal infection. Therefore, the above results further suggest the higher pathogenicity of PRV HN1201 when compared to the classical Fa strain.
In conclusion, PRV HN1201 infection is more effective through the intranasal route than the intramuscular route, and the virus is highly pathogenic to pigs of different ages. Compared with the classical PRV Fa strain, HN1201 causes more severe clinical symptoms and pathological lesions, with extensive antigen distribution in different organs.
COMMENTS
Background
Highly virulent pseudorabies virus (PRV) variants are circulating in most Chinese pig farms, causing huge economic losses. The pathogenicity of these PRV variants has not previously been compared with that of classical PRV strains.
Research frontiers
The authors aimed to test the pathogenicity of a newly-emerging PRV variant in pigs using different inoculation routes, virus loads, and ages. The pathogenicity of the newly-emerging PRV variant and the classical PRV strain was also compared.
Innovations and breakthroughs
This study demonstrates that the currently-circulating PRV HN1201 variant has higher pathogenicity in pigs than the classical PRV Fa strain, as manifested by more severe clinical symptoms and pathological lesions, with extensive antigen distribution in different organs.
Applications
The authors proved the PRV variant to be more pathogenic in pigs than the classical Fa strain, which may partially explain the inefficacy of current commercial PRV vaccines. Thus, a better understanding of the differences in pathogenicity between variant and classical PRV may facilitate the development of more effective vaccines.
Terminology
Pathogenicity of pseudorabies virus is the potential capacity of PRV to cause PR-like syndrome in pigs. The pathogenicity of viruses may change due to virus mutation and/or recombination. Studying the pathogenesis of currently-circulating field viruses may provide first-hand data for disease control.
Peer-review
This manuscript reports the analysis of the pathogenicity of a new PRV variant that the commonly-used vaccine cannot protect against, and which is therefore causing massive economic losses in China. The pathogenicity of this variant and the classical PRV Fa strain is also compared. The experimental design and results are clear and convincing. It will be interesting to see if the authors can further explore the mechanisms underlying the enhanced pathogenicity of the PRV variant.
Table 3 Virus antigen distribution and intensity in different organs of pseudorabies virus HN1201 or Fa strain by immunohistochemistry staining
The positive staining signals were interpreted as negative (-), low (1+), moderate (2+), or intense (3+), according to the intensity of staining. Each row represents one pig in the corresponding group.
"year": 2016,
"sha1": "58245bf7dd5e5843c9b4a2e72bf1c4b4d8d7cfd5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5501/wjv.v5.i1.23",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2d24e7e76cf7c7fe56acf93bdb95eb9c4435950e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Trimodal color-fluorescence-polarization endoscopy aided by a tumor selective molecular probe accurately detects flat lesions in colitis-associated cancer
Abstract. Colitis-associated cancer (CAC) arises from premalignant flat lesions of the colon, which are difficult to detect with current endoscopic screening approaches. We have developed a complementary fluorescence and polarization reporting strategy that combines the unique biochemical and physical properties of dysplasia and cancer for real-time detection of these lesions. Using azoxymethane-dextran sodium sulfate (AOM-DSS) treated mice, which recapitulate human CAC and dysplasia, we show that an octapeptide labeled with a near-infrared (NIR) fluorescent dye selectively identified all precancerous and cancerous lesions. A new thermoresponsive sol-gel formulation allowed topical application of the molecular probe during endoscopy. This method yielded high contrast-to-noise ratios (CNRs) between adenomatous tumors (20.6±1.65) or flat lesions (12.1±1.03) and the surrounding uninvolved colon tissue, versus a CNR of only 1.62±0.41 for inflamed tissues. Incorporation of nanowire-filtered polarization imaging into NIR fluorescence endoscopy shows a high depolarization contrast in both adenomatous tumors and flat lesions in CAC, reflecting the compromised structural integrity of these tissues. Thus, polarization imaging provides real-time validation of the suspicious colon tissue highlighted by molecular fluorescence endoscopy.
1 Introduction
Inflammatory bowel diseases (IBDs) consist of ulcerative colitis and Crohn's disease, which are characterized by uncontrolled inflammation of the gastrointestinal tract in genetically susceptible individuals exposed to environmental risk factors. 1 The incidence and prevalence of IBD have increased in recent years, affecting 1.4 million Americans in 2012 alone. 2 Moreover, the peak onset of this disease occurs in the second and third decades of life, subjecting these patients to a twofold or greater lifetime risk of developing colorectal cancer (CRC) than normal subjects. 2 A meta-analysis of colorectal cancer risk suggests a cumulative incidence of malignancy of 2% by 10 years, 8% by 20 years, and 18% by 30 years postdiagnosis. 1 Sporadic CRC and colitis-associated cancer (CAC) together account for 52,000 mortalities every year in the United States alone, making them the second leading cause of death from cancer among adults. 3 Unlike sporadic CRC, which develops from an adenomatous polyp, CAC follows a dysplasia-carcinoma sequence in which the inflamed mucosa gives rise to flat and polypoid dysplasia that lead to invasive cancers. 4,5 The lack of an elevated growth component, as in polyps, creates an enormous challenge in detecting these flat lesions during surveillance colonoscopy. For example, 50% to 80% of the lesions that are missed during surveillance colonoscopy are flat. 6,7 Several murine models have been developed to recapitulate human CAC. 8 In particular, the azoxymethane-dextran sodium sulfate (AOM-DSS) model has been used to demonstrate that advanced cancers develop from flat lesions without transitioning through a polypoid intermediate. 9 Although dysplasia could serve as a marker for malignancy in colitis patients, current surveillance protocols fail to reliably identify early stages of CAC. 10 Recent advances in biomarker and imaging platforms have focused on either morphological or molecular features of these flat lesions, neither of which is sufficient to aid the diagnosis of these pathologies in real time during colonoscopy. Efforts to identify polyps and cancerous lesions using chromoendoscopy, virtual chromoendoscopy, narrow-band imaging, and postprocessing algorithms such as i-SCAN and FICE have failed to reliably delineate flat lesions from surrounding uninvolved tissue. [11][12][13][14] In particular, chromoendoscopy is currently recommended for routine surveillance of IBD patients. Although this technique has improved the detection rate of flat lesions from 28% to 56%, 15 the fraction of undetected lesions remains unsatisfactory. Similarly, optical coherence tomography has been used to differentiate diseased from normal esophageal tissue, but the inhomogeneous backscattering of signal in high-grade dysplasia limits the diagnostic information derived from this approach. 16,17 To improve detection sensitivity, newer fluorescence endoscopic techniques, such as confocal laser endomicroscopy, have been developed. In conjunction with fluorescence contrast agents, these techniques allow visualization of molecular signatures of cancer, but lack a real-time validation strategy and provide only a limited field of view (FOV). [18][19][20] The overarching goal of this study is to develop and validate a trimodal endoscopic system consisting of fluorescence, polarization, and color modules for colonoscopy.
The color imaging module provides conventional visual feedback on the anatomical structures of tissue. Utilizing a near-infrared (NIR) fluorescent molecular reporter of flat lesions, the fluorescence module highlights underlying pathologies that are not visible by conventional color imaging methods. The complementary polarization signal highlights tissue that is largely opaque to both the fluorescence and color modules, thereby facilitating full characterization and instant cross-validation of suspicious lesions. Unprecedented sensitivity and specificity approaching 100% were obtained by combining these three imaging modalities.
Fluorescence and Polarization Endoscope
The new endoscopic device is shown in Fig. 1. A coupler with a focus ring (Karl Storz, Tuttlingen, Germany) focused the CCD camera on the image formation plane at the back of the telescope. In the fluorescence mode, the cold light fountain source was replaced by a 100 mW, 780-nm excitation source (Lasermax, New York, USA). An 800-nm long-pass emission filter (ThorLabs, Newton, New Jersey) was placed behind the telescope in a slit of a custom-made adapter designed to couple the macro lens to the telescope. Images were captured using the Fluorvivo CCD camera equipped with a Sony ICX285 sensor (quantum efficiency of 30% at 800 nm).
In the polarization mode, the same endoscopic cold light fountain used for RGB imaging served as the light source. Images were captured using the camera housing the polarization sensor described earlier.
Polarization Endoscopy Calibration
Since the endoscope is composed of a series of optical elements, the polarization state of the input light can be modified as the light travels through the endoscope. Every optical element in the endoscope will perturb the Stokes vector of the incident light and these perturbations need to be carefully examined to ensure that the endoscope preserves the polarization signatures of the incident light. We created an optical setup to evaluate the polarization properties of the endoscope as depicted in Fig. 2(a). The setup is composed of a polarization state generator and a polarization state analyzer.
The polarization state of the light emerging from the endoscope is measured using a division-of-time polarimeter. The polarimeter is composed of a zero-order precision wave plate (Newport 20RP34-514.5) with a quarter-wave retardance at 514.5 nm, a precision linear polarizer (Newport 20LP-VIS-B) with a wavelength range of 400 to 700 nm, and a USB-controlled optical power meter (Thorlabs PM100D) with a visible-spectrum calibrated photodiode (Thorlabs S120V). This instrument quantifies the Stokes vector as

S_0 = I(0 deg, 0 deg) + I(90 deg, 0 deg),
S_1 = I(0 deg, 0 deg) - I(90 deg, 0 deg),
S_2 = I(45 deg, 0 deg) - I(135 deg, 0 deg),
S_3 = I(45 deg, 90 deg) - I(135 deg, 90 deg).

In these equations, I(θ, ϕ) is the intensity of the optical beam, θ is the transmission-axis angle of the linear polarizer in degrees, and ϕ is the retardance of the wave plate, which is 90 deg for the quarter-wave plates. To measure I(45 deg, 90 deg), the light beam passes through a wave plate with the fast axis along the x-axis and then through a linear polarizer with its transmission axis rotated to 45 deg; to measure I(θ, 0), the beam passes through a linear polarizer with its transmission axis rotated to θ deg and then through a wave plate with its fast axis along the x-axis. Hence, all measurements are made with both a linear polarization filter and a quarter-wave plate in the beam path, so that any optical losses occurring in either element are taken into account.
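To make the measurement procedure concrete, the sketch below evaluates the Stokes vector and the derived polarization metrics from the six intensity readings defined above. This is an illustrative Python implementation written for this rewrite, not code from the paper; all names are our own.

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135, I45_qwp, I135_qwp):
    """Stokes vector from division-of-time polarimeter readings.

    I(theta, 0): linear polarizer at angle theta, no retardance.
    I(theta, 90): quarter-wave plate (fast axis along x) plus a
    linear polarizer at theta, used for the circular component.
    """
    S0 = I0 + I90                  # total intensity
    S1 = I0 - I90                  # horizontal vs vertical linear
    S2 = I45 - I135                # +45 deg vs -45 deg linear
    S3 = I45_qwp - I135_qwp        # right vs left circular
    return np.array([S0, S1, S2, S3])

def polarization_metrics(S):
    """DoLP, DoCP, DOP, and AOP (deg) from a Stokes vector."""
    S0, S1, S2, S3 = S
    dolp = np.hypot(S1, S2) / S0
    docp = abs(S3) / S0
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / S0
    aop = 0.5 * np.degrees(np.arctan2(S2, S1))
    return dolp, docp, dop, aop

# Example: ideal horizontally polarized light
S = stokes_from_intensities(I0=1.0, I45=0.5, I90=0.0, I135=0.5,
                            I45_qwp=0.5, I135_qwp=0.5)
print(polarization_metrics(S))  # DoLP = 1, DoCP = 0, DOP = 1, AOP = 0
```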
Formulation of LS301
The NIR dye LS301, which was previously reported, 22 was synthesized in our lab via modification of a previously described method. 23 LS301 consists of a NIR fluorescent dye, cypate (780 nm excitation/830 nm emission), and a cyclic peptide sequence, D-Cys-Gly-Arg-Asp-Ser-Pro-Cys-Lys (c(DCGRDSPC)K). Figure 3(a) shows the molecular structure, absorption, and emission spectra of LS301.
To formulate LS301 in a sol-gel mixture for topical administration, Pluronic F-127 (1 g; Sigma Aldrich, St. Louis, MO) was dissolved in phosphate buffered saline (PBS) (10 mL; PBS) to obtain a 10% (w/v) stock solution. A solution of LS301 (0.5 mL; 60 μM) was dissolved in 10% w/v aqueous Pluronic solution to form the sol-gel system, which was stored on ice prior to administration to maintain its liquid state.
In Vivo Small Animal Endoscopy
Mice were anesthetized via intraperitoneal injection of a solution of ketamine (85 mg∕kg) and xylazine (15 mg∕kg). They were then placed supine on a z-translation stage prior to the procedure. To avoid unnecessary movements of the endoscope during mode switches and to assess a region of interest within the same FOV, the imaging setup was immobilized on a tripod. To obtain 1-mm translation inside the colon, the endoscope was set on an x-y translation stage. The rigid endoscope could reach as far as 4 cm into the mouse colon, at which point it encountered the splenic flexure.
Prior to topical administration of the LS301 solution, the mouse colon was rinsed with PBS to remove debris and mucous. A novel administration technique enabled interaction of the mouse tissue with our molecular agent for a longer incubation time than would otherwise be achievable. A cold solution of the LS301 in Pluronic F127 sol-gel (0.5 mL) was carefully injected via the examination sheath of the endoscope. The solution turned into a gel instantaneously along the distal colon. After 10 min of incubation, cold PBS was flushed through the examination sheath to reverse the gel into a liquid state. After thorough rinsing of the colon with 3 to 5 PBS washes, fluorescence and polarization endoscopy were performed. Steps involved in this process are illustrated in Fig. 1(b).
In Situ Fluorescence-Polarization Imaging
After endoscopy, mice were sacrificed and the distal colon was isolated from the peritoneum without dislodging it from the animal, to locate tumors that were identified during fluorescence-polarization endoscopy. A 780-nm excitation source illuminated the sample for fluorescence imaging and an 800-nm long-pass filter was placed in front of the image capture device. Images were captured with an exposure time of 0.5 s. Polarization imaging was performed using the system described above.
Simulating Unpolarized Light in Reflectance Polarization Endoscopy
Simulating reflectance imaging using an unpolarized light source for polarization endoscopy
Most polarization methods use a polarization filter to illuminate the tissue and an orthogonal polarization filter placed at the detector side to measure polarization contrast information. This imaging modality requires careful optical alignment of the polarization filters and is applicable to microscopy or ex vivo imaging. Due to space constraints in an endoscope, polarization contrast imaging is challenging. In this study, we allowed the intrinsic properties of tissue to linearly polarize the incident unpolarized light and backscattering to depolarize the reflected light, which is captured by our polarization sensor as shown in Fig. 4(a). To ensure that this polarization technique is sensitive to differences in tissue properties, simulations using Mueller calculus for reflection and Monte Carlo simulations for backscattering were performed (see below) using tissue refractive index values previously reported in the literature. Diseased tissues have a lower refractive index (1.39) than healthy tissues (1.46), and this difference serves as an imaging biomarker for the polarization imaging platform. Given the importance of the angle of incidence, the DoLP signature arising from healthy and diseased tissues was plotted against the incidence angle using both Fresnel reflectance and Monte Carlo simulations, as shown in Fig. 4(b); an imaging angle of 15 deg was used for in vivo endoscopy. At 15 deg, we obtained a ∼4% DoLP difference between healthy and diseased tissues, which is sufficient to delineate normal from diseased tissues. While standard endoscopy procedures range in imaging angle between 15 deg and 35 deg, our ex vivo imaging setup utilized a 35 deg angle, which results in a DoLP change of 20% between healthy and diseased tissues according to Fig. 4(b). Based on the differences in tissue refractive indices, radius and density of nuclei, and absorption coefficient between healthy and diseased tissues, 24,25 and on the backscattering of light, we illustrate the propagation of light in the two stages of CAC development: adenomatous polyps and flat, depressed lesions, respectively. This simulation demonstrates that the polarization method is capable of cross-validating CAC and associated flat lesions in real time, guided by enhanced fluorescence in pathologic tissue.
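For intuition, the Fresnel-reflection component of the angle-dependent DoLP can be computed in a few lines of Python. This sketch uses only the refractive indices quoted above and omits the Monte Carlo backscatter term, which strongly dilutes the polarized surface reflection; the absolute values therefore do not match the measured DoLP in Fig. 4(b), only the qualitative angle and tissue dependence.

```python
import numpy as np

def fresnel_dolp(theta_i_deg, n_tissue, n_air=1.0):
    """DoLP of initially unpolarized light after specular reflection
    at a smooth air-tissue interface (Fresnel component only)."""
    ti = np.radians(theta_i_deg)
    tt = np.arcsin(n_air * np.sin(ti) / n_tissue)  # Snell's law
    rs = (n_air * np.cos(ti) - n_tissue * np.cos(tt)) / \
         (n_air * np.cos(ti) + n_tissue * np.cos(tt))
    rp = (n_tissue * np.cos(ti) - n_air * np.cos(tt)) / \
         (n_tissue * np.cos(ti) + n_air * np.cos(tt))
    Rs, Rp = rs**2, rp**2          # s- and p-polarized reflectances
    return abs(Rs - Rp) / (Rs + Rp)

# Refractive indices from the text: healthy 1.46, diseased 1.39
for angle in (15, 35):
    healthy = fresnel_dolp(angle, n_tissue=1.46)
    diseased = fresnel_dolp(angle, n_tissue=1.39)
    print(f"{angle} deg: healthy DoLP={healthy:.3f}, "
          f"diseased DoLP={diseased:.3f}")
```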
Simulating unpolarized light for polarization endoscopy
To ensure that this polarization technique is sensitive to differences in tissue properties, simulations using Mueller calculus for reflection and Monte Carlo simulations for backscattering were performed. The difference in polarization signatures is due to a combination of Fresnel reflectance and backscattered intensity from the unpolarized source. The incident, unpolarized light becomes partially polarized upon reflection, as shown in Fig. 4(a). The partially polarized light mixes with the backscattered intensity from the tissue, which further depolarizes the incident light according to the optical parameters of the tissue. Assuming that scattering comes mostly from the nucleus, backscatter is simulated as a dispersion of spheres with optical properties similar to those of the nucleus, using the algorithm previously reported. 26 In tumor tissue, scattering comes mostly from the large, dense nuclei, whereas in uninvolved tissue, scattering primarily arises from scatterers associated with the mucosa and submucosa regions. Verification is done using polarized Monte Carlo simulation software with a polydisperse set of scatterers drawn from a uniform distribution. In order to simulate the differential optical parameters of tumor versus uninvolved tissue, differences in nuclear radii, density of nuclei, index of refraction of nuclei, and absorption coefficient were considered and obtained from the literature. [27][28][29][30] Perelman et al. successfully extracted valuable information about the density and size distribution of mucosal cell nuclei and used them as indicators of disease state (neoplastic precancerous changes in biological tissue); in these studies, increases in nuclear size and density were associated with more advanced disease. This is further supported by Backman et al., whose studies illustrate the higher level of backscattered light associated with denser and larger nuclei. As a result, the reflected light from diseased tissue is more depolarized than that from healthy tissue, which comprises uniformly distributed, less dense, and smaller nuclei. We took a similar approach in modeling the backscattered intensity as the combination of backscatter from the mucosa added to the backscatter from the submucosa after propagation back through the mucosa. 25 Since the light used was a 6000 K Xenon source, the simulations take place at wavelengths between 400 and 700 nm in 25-nm increments, normalized to the intensity of the bulb at these wavelengths. An increase in backscatter from the tumor tissue effectively depolarizes the Fresnel-reflected DoLP, resulting in a lower measured DoLP for tumor than for the surrounding uninvolved region.
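The depolarizing effect described in the last sentence can be illustrated with a simple intensity-weighted mixing model. The sketch below assumes, purely for illustration, that the multiply scattered backscatter is fully depolarized; the paper's polarized Monte Carlo treatment is more detailed, and the intensities used here are placeholders.

```python
def mixed_dolp(dolp_fresnel, i_fresnel, i_backscatter):
    """Measured DoLP when partially polarized Fresnel reflection mixes
    with backscattered light assumed to be fully depolarized: the
    polarized flux is diluted by the extra unpolarized intensity."""
    return dolp_fresnel * i_fresnel / (i_fresnel + i_backscatter)

# Illustrative numbers only: tumor tissue backscatters more light
# (larger, denser nuclei), so its measured DoLP drops further.
print(mixed_dolp(0.30, i_fresnel=1.0, i_backscatter=2.0))  # uninvolved-like: 0.10
print(mixed_dolp(0.30, i_fresnel=1.0, i_backscatter=5.0))  # tumor-like: 0.05
```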
Development of Trimodal Color, Fluorescence, and Polarization Endoscope
The goal of the instrument development was to retain the features of widely used clinical endoscopes while incorporating new reporting strategies for enhanced diagnosis of endoscope-accessible organs. Toward this goal, we developed a novel endoscope that is capable of presenting imaging data in color (RGB), NIR fluorescence, and polarization modes. Color colonoscopy is the standard of care, while NIR fluorescence and polarization provide new reporting signals for a molecularly targeted imaging agent and tissue pathophysiology, respectively. A detailed description of the endoscopy procedure is available in Sec. 2.6. Briefly, a rigid Hopkins endoscope fitted with a Xenon lamp and a 780-nm laser served as the RGB/polarization and NIR light sources, respectively, as shown in Figs. 1(a) and 1(b). We used a visible/NIR-sensitive CCD camera with an RGB Bayer filter and 25% quantum efficiency at 800 nm to capture both color and NIR images.
Real-time polarization imaging
To capture the light polarization information, a division-of-focal-plane polarization imaging sensor was developed, in which the incoming light is filtered by a polarization filter array consisting of four nanowire polarization filters offset by 45 deg prior to absorption by a silicon photodiode, as shown in Fig. 1(d). 31,32 The amplitude of the filtered light wave was recorded by photodetectors underneath the aluminum nanowire polarization filters. The tissue was illuminated with unpolarized light via the light port of a Karl Storz rigid endoscope and the reflected light was collected by our custom-built polarization sensor. The polarization state of the light reflected from the tissue was determined by both the relative position of the camera and the index of refraction of the imaged tissue [Eq. (4)]. The use of an unpolarized light source simplified the imaging setup and allowed the intrinsic morphological and physiological properties of the tissue to dictate the captured polarization signature. This imaging method differs from typical polarization contrast imaging, where linearly polarized illumination is used to illuminate the tissue and a detector with crossed linear polarization is used to record the reflected light. Due to the complex alignment requirements of the polarization filters on both the illumination and detection components, traditional polarization contrast imaging is of limited use for in vivo endoscopy.
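As an illustration of how the output of such a sensor can be processed, the sketch below computes per-superpixel DoLP and AOP images from a raw division-of-focal-plane frame. The 2×2 mosaic layout is a hypothetical assumption, since the sensor's readout format is not specified here, and only the linear Stokes components (S0, S1, S2) are recoverable from the nanowire filters.

```python
import numpy as np

def demosaic_dolp_aop(raw):
    """DoLP and AOP images from a division-of-focal-plane frame.

    Assumed (hypothetical) 2x2 superpixel layout:
        raw[0::2, 0::2] -> 0 deg      raw[0::2, 1::2] -> 45 deg
        raw[1::2, 0::2] -> 90 deg     raw[1::2, 1::2] -> 135 deg
    """
    I0 = raw[0::2, 0::2].astype(float)
    I45 = raw[0::2, 1::2].astype(float)
    I90 = raw[1::2, 0::2].astype(float)
    I135 = raw[1::2, 1::2].astype(float)

    S0 = 0.5 * (I0 + I45 + I90 + I135)  # average of two S0 estimates
    S1 = I0 - I90
    S2 = I45 - I135

    dolp = np.hypot(S1, S2) / np.maximum(S0, 1e-9)  # guard divide-by-zero
    aop = 0.5 * np.degrees(np.arctan2(S2, S1))
    return dolp, aop

# Example on a synthetic 4x4 frame (one value per filter orientation)
frame = np.tile(np.array([[10.0, 6.0], [2.0, 6.0]]), (2, 2))
dolp_img, aop_img = demosaic_dolp_aop(frame)
print(dolp_img)  # DoLP = |10-2|/12 = 0.667 in each superpixel
```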
Polarization performance of the endoscope: calibration study
In order to examine how the polarization state of the input light is modified as the light travels through the endoscope, we created an optical setup as shown in Fig. 2(a). The polarization properties of light emerging from the endoscope when illuminated with linearly polarized light between 0 and 180 deg are presented in Figs. 2(b) and 2(c). Figure 2(b) presents the measured degree of linear polarization, degree of circular polarization, and degree of polarization (DOP) of the light exiting the endoscope. As light is transmitted through the endoscope, the degree of linear polarization drops to around 80% and the circular polarization increases equally. Hence, the DOP remains around 1, which indicates that there are very low optical losses, scattering events, and sources of depolarization in the endoscope. We believe that the introduction of elliptically polarized light in the endoscope is probably due to retarders that are placed inside the probe. Once the retardance of the endoscope is known, calibration routines can provide the correct input Stokes vector. 33 Furthermore, Fig. 2(c) represents the angle of polarization of the output Stokes vector that is not affected by the optical elements of the endoscope.
Targeting Adenomatous Tumor and Dysplasia in Azoxymethane-Dextran Sodium Sulfate Tissue Sections with NIR Fluorescent Dye-Labeled Octapeptide (LS301)
Some colon lesions lie below the mucosa, requiring imaging techniques that can probe beyond a few millimeters of tissue. Capitalizing on the ability of NIR light to penetrate deeper into tissue with lower background autofluorescence compared to visible light, we used a NIR fluorescent dye-labeled octapeptide, LS301 [see Figs. 3(a) and 3(b)], which has been shown to selectively accumulate in malignant tumors. 22 We explored the feasibility of using LS301 to detect inflammation-driven colon carcinogenesis using the clinically relevant AOM-DSS murine model. Precancerous and cancerous lesions from AOM-DSS mouse colons were identified and sectioned before incubation with LS301, and the slides were imaged by fluorescence microscopy. Figure 3(c) demonstrates that LS301 uptake was highly specific for adenomatous tumor, with minimal fluorescence from the surrounding uninvolved regions. The contrast between adenomatous tumor and surrounding uninvolved tissue was 56 ± 9 (standard errors of the mean, SEM, are used throughout this study), with an average contrast-to-noise ratio (CNR) of 7 ± 1.13. This impressive specificity of LS301 was maintained when evaluating tissues exhibiting features characteristic of precancerous lesions, such as the aberrant crypt foci illustrated in Fig. 3(d). For example, the contrast between dysplastic lesions and the surrounding uninvolved regions shown in Fig. 3(e) was 37.4 ± 3.6, with an average CNR of 8.72 ± 0.88. In contrast, the dye cypate alone did not show any selective uptake in the colon lesions. Therefore, we used LS301 for subsequent in vivo studies of the AOM-DSS model.
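The contrast and CNR values quoted above can be computed from region-of-interest (ROI) statistics on the fluorescence images. The definitions in the sketch below (contrast as a ratio of ROI means, CNR as the mean difference normalized by the background standard deviation) are common conventions assumed for illustration, since the exact formulas are not spelled out in this section.

```python
import numpy as np

def roi_contrast_and_cnr(image, lesion_mask, background_mask):
    """Contrast and contrast-to-noise ratio between two ROIs.

    Assumed (hypothetical) definitions:
        contrast = mean(lesion) / mean(background)
        CNR = (mean(lesion) - mean(background)) / std(background)
    """
    lesion = image[lesion_mask]
    background = image[background_mask]
    contrast = lesion.mean() / background.mean()
    cnr = (lesion.mean() - background.mean()) / background.std()
    return contrast, cnr

# Toy example with a synthetic fluorescence image
rng = np.random.default_rng(0)
img = rng.normal(10, 2, (64, 64))        # uninvolved background
img[20:30, 20:30] += 40                  # bright lesion-like region
lesion = np.zeros_like(img, dtype=bool)
lesion[20:30, 20:30] = True
print(roi_contrast_and_cnr(img, lesion, ~lesion))
```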
Near-infrared fluorescence endoscopy utilizing a novel topical administration method in azoxymethane-dextran sodium sulfate and wound-healing models
Having demonstrated the feasibility of detecting CAC and dysplasia by fluorescence imaging and established the polarization contrast between diseased and normal tissues, we next assessed the use of these combined techniques for in vivo imaging. Previous small animal imaging of tumors with LS301 has relied on intravenous administration of the molecular probe. 22 Typically, a wait time of 24 h is needed to obtain excellent contrast between tumors and the uninvolved surrounding tissues. While this imaging time point is acceptable for many clinical applications, a fast-acting approach would be preferred for colonoscopy. Toward this goal, we explored a topical delivery method in which the molecular probe is sprayed onto the colon during screening endoscopy. We found that poloxamer (Pluronic F127), which is approved for human use, can solubilize LS301 [see Fig. 1(b); Methods, Formulation of LS301]. More importantly, the formulation undergoes a reversible thermoresponsive sol-gel transition, with a critical transition temperature of 20°C. This previously unexplored topical application of NIR molecular probes allowed endoscope-mediated spraying of the cold formulation, resulting in a rapidly formed thin layer of gel around the colon tissue. With this method, we observed a mean CNR of 20.64 ± 1.65 between the adenomatous tumor and the surrounding uninvolved tissue, and 12.1 ± 1.03 between flat lesions and the surrounding uninvolved regions, as displayed in Figs. 5(a) and 6(a), respectively. These outcomes represent six-fold and three-fold increases in mean fluorescence intensity from the adenomatous tumor and flat lesions, respectively, relative to the surrounding uninvolved tissues. Inflamed tissues generally retain optical contrast agents by several mechanisms, including nonspecific retention or entrapment by activated macrophages. Peyer's patches are sites of the host defense system where macrophages, dendritic cells, B-lymphocytes, and T-lymphocytes reside. We observed that LS301 did not accumulate in Peyer's patches in the mucosa, as shown in Fig. 6(c). The lack of LS301 fluorescence in this tissue indicates the high selectivity of the imaging agent for cancer-associated lesions. To further assess the feasibility of distinguishing wound-repair-associated inflammation from colon cancer and dysplasia, we induced colon injury in mice by removing single full-thickness areas of the mucosa and submucosa using a flexible biopsy needle. The leading edge of the injured epithelium is known to form cables of actin filaments extending from cell to cell, forming a ring around the wound circumference and facilitating wound closure. 34 Topical administration of LS301 resulted in a barely detectable mean CNR of 1.62 ± 0.41 between the inflamed regions and surrounding uninvolved regions in this wound-healing model using our current fluorescence endoscope, as shown in Fig. 7. Ex vivo sections of exteriorized colon from the tumor and inflamed regions were examined, as shown in Figs. 5(b) and 6(b). Consistent with the in vivo results, the identified adenomatous tumor and flat lesions showed high fluorescence intensity, as seen in Figs. 5(c) and 6(c). A graded fluorescence intensity pattern was observed within the adenomatous tumor mass, as shown in Fig. 5(c), illustrating the diffusion and retention of LS301 in the lesions within 10 min of administration. LS301 also showed high specificity toward dysplastic lesions possessing features such as aberrant crypt foci, as shown in Fig. 6(c).
Figure 7(a) demonstrates minimal fluorescence in the lumen of colonic crypts and in features of inflammation, such as regions of neutrophil activation. H&E staining confirmed the presence of these morphological features, as shown in Figs. 7(c) and 7(d). Ex vivo studies demonstrated relatively higher fluorescence intensities in regions exhibiting higher epithelial proliferation, a consequence of wound repair, as shown in Fig. 7(c).
Polarization endoscopy utilizing degree of linear polarization contrast in azoxymethane-dextran sodium sulfate and wound-healing inflammation models
The fluorescence molecular imaging approach deployed above does possess certain limitations. It does not furnish structural information that is useful for validating the presence of cancer or dysplasia; hence, this information must be obtained via ex vivo histologic validation. Ex vivo histology requires tissue biopsy and offline analysis that could delay clinical decisions and result in repeat hospital visits. Although the molecular fluorescence method can distinguish inflamed from cancerous tissue or flat lesions, a complementary method that can instantly validate the negative fluorescence contrast in suspicious, but noncancerous tissues would facilitate rapid clinical decision making during colonoscopy. Finally, the topical application of fluorescence molecular probes and the subsequent washing step occasionally leave residual dye and gel in the vicinity of uninvolved tissue, which could be misinterpreted as cancerous.
To avoid oversampling of the colon through purely fluorescence-guided biopsies and to provide real-time confirmation of suspicious lesions, we incorporated the DoLP contrast obtained from reflectance polarization imaging into our fluorescence endoscope. The fluorescence and DoLP contrasts are orthogonal and complementary: fluorescence is positive in cancerous tissue and negative in uninvolved tissue, whereas DoLP is positive in uninvolved tissue and negative in cancerous tissue. To rule out a false-positive DoLP signature caused by the irregular contours of the colon, a small FOV was used to interrogate tissue highlighted by molecular fluorescence imaging. Postacquisition, regions of interest were selected to quantify the DoLP signature, as shown in Fig. 8. For adenomatous polyps, a mean DoLP value of 0.0414 ± 0.0142 was obtained, compared to 0.0816 ± 0.0173 from the surrounding uninvolved regions. We found a similar trend for flat lesions, with a mean DoLP of 0.0225 ± 0.0073 compared to 0.0924 ± 0.0284 from the surrounding wall, as shown in Fig. 9. While the fluorescence imaging was able to detect pathologic tissue, the high DoLP signal in uninvolved tissue provides an anatomical landscape of the colon and identifies different types of uninvolved colon tissue. For example, Peyer's patches, which were not detected by fluorescence, had a high mean DoLP signal of 0.1064 ± 0.0104, compared to 0.0872 ± 0.022 from the surrounding wall (Fig. 8). These differences in DoLP signal are probably due to the higher structural integrity of Peyer's patches caused by the high density of lymphoid tissue. In the wound-healing model, the increase in newly formed actin filaments resulted in a high birefringence signal, thereby polarizing the incident light in the process. 35 For example, a high mean DoLP signal of 0.1363 ± 0.0379 was obtained along the epithelium surrounding the wound bed, as demonstrated in Figs. 7(a) and 7(b). Thus, the combined high polarization and low LS301 fluorescence signals in nontumor tissue provide real-time validation of suspicious lesions during endoscopy, which reduces the number of false positives and unnecessary biopsies.
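The cross-validation logic described above — positive fluorescence combined with a DoLP drop flags a lesion, while fluorescence without a DoLP drop suggests residual dye or gel — can be expressed as a simple per-ROI decision rule. The thresholds below are illustrative placeholders chosen for this sketch, not values derived from the study.

```python
def classify_roi(fluorescence_cnr, mean_dolp,
                 cnr_threshold=5.0, dolp_threshold=0.06):
    """Toy decision rule combining the two orthogonal contrasts.
    Thresholds are hypothetical, not taken from the paper."""
    if fluorescence_cnr > cnr_threshold and mean_dolp < dolp_threshold:
        return "suspicious lesion: biopsy"        # both channels agree
    if fluorescence_cnr > cnr_threshold:
        return "likely residual dye/gel: rinse and re-image"
    return "uninvolved tissue"

print(classify_roi(fluorescence_cnr=12.1, mean_dolp=0.022))  # flat lesion-like
print(classify_roi(fluorescence_cnr=8.0, mean_dolp=0.09))    # residual gel-like
```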
To examine colon tissue under controlled conditions and to provide maximum contrast, ex vivo polarization imaging was conducted at 35 deg using tissue from the in vivo study. Consistent with the in vivo studies, Fig. 6(b) shows the lower DoLP signature obtained for flat lesions compared to the surrounding uninvolved regions. Patches of higher DoLP signature were observed around the flat lesion, clearly identifying the nontumor Peyer's patch.
Discussion
We have developed an integrated fluorescence and polarization endoscope for detecting and providing real-time confirmation of CAC, dysplasia, and associated flat lesions that are difficult to detect without ex vivo histologic validation. Although depressed cancerous lesions and sessile serrated adenomas have different pathologies, we used the term flat lesions for both flat depressed carcinomas and adenomas to reflect the similarity in the level of difficulty in detecting these lesions during conventional colonoscopy. Unlike current endoscopic techniques that interrogate molecular or structural signatures of CAC and dysplasia, the combined fluorescence and polarization contrasts create a new paradigm for identifying colonic lesions with high accuracy. The murine AOM-DSS model used in this study recapitulates the molecular pathways and morphological features of CAC from dysplasia to carcinoma. 36 Similarly, the murine wound-healing model exhibits key features of inflammation that are reminiscent of malignancy. These include wound-associated epithelial cell proliferation and stromal neutrophil and macrophage recruitment in the wound bed.
Our fluorescence imaging approach is similar to chromoendoscopy, where nontumor targeted dyes are topically applied for improved visualization of dysplastic lesions. 11,37 Previous studies attempted to use the FDA-approved NIR dye, indocyanine green (ICG), to enhance the detection of submucosal colon lesions, and to minimize the tissue autofluorescence. 38 Unfortunately, the high background ICG fluorescence in the subserosa around the tumor confounded data analysis. In a recent study, we showed that LS301 selectively accumulates in tumors and achieves high tumor-to-background fluorescence at 24-h post intravenous injection. 22 Using the AOM-DSS treated mouse model of CAC, we demonstrated in this study that ex-vivo staining of colon tissue with LS301 successfully identified adenomatous tumors and dysplastic lesions in the colon tissue as shown in Figs. 3(c) and 3(d). Ex vivo histologic validation confirmed the in vivo imaging analysis. These results demonstrate that molecular fluorescence endoscopy, aided by LS301, detects multiple stages of oncogenesis with a high potential to improve the management of colorectal cancer.
However, the intravenous administration of LS301 and the long wait time (24 h) to achieve high fluorescence contrast between tumors and uninvolved surrounding colon tissue are not suitable for colonoscopy. In addition, the excretion of LS301 through the hepatobiliary pathway after intravenous injection increases background NIR fluorescence in the colon, confounding the detection of flat lesions and dysplasia. Based on our ex vivo study, we postulated that the rapid and prolonged retention of LS301 in pathologic tissues makes it well suited to topical administration during colonoscopy. The results demonstrate that the formulation of LS301 in a poloxamer sol-gel transition system allowed us to topically administer the imaging agent with a spray catheter during endoscopy. In contrast to the use of generic dyes for chromoendoscopy, the selective uptake of LS301 in CAC and dysplastic colon tissue minimized nonspecific uptake in uninvolved mucosa, ushering in a new procedure for topical administration of tumor-targeted molecular probes. The commercially available poloxamer not only improved the solubility of the hydrophobic LS301, but also aided in the uniform coating of the tissue, thereby increasing the interaction of the molecular probe gel formulation with the colon tissue under investigation. The observed high mean CNR obtained from adenomatous tumors and flat lesions compared to surrounding uninvolved tissue is a function of the improved molecular probe selectivity for tumors, reduced autofluorescence in the NIR imaging window, and the enhanced incubation time provided by the sol-gel formulation. This sol-gel approach can be extended to previously reported methods for imaging colon cancer and dysplasia. These include a Cy5.5-labeled cathepsin B substrate used to identify polyps in adenomatous polyposis coli (Apc min) mice 39 and a fluorescein-labeled VRPMPLQ heptapeptide that was shown to preferentially bind dysplastic rather than normal mucosa. 19 Unlike these molecular probes, which are confined to identifying only specific stages of tumorigenesis, 40,41 LS301 uniquely captures multiple stages of cancer development. A limitation of the sol-gel method is the lag time between topical application and imaging. Our current protocol uses about 10 min to optimize tumor uptake. A future goal is to optimize the procedure to shorten the incubation time for rapid assessment of the colon.
Although the topical administration of LS301 has enormous benefits for screening endoscopy, we found that polarization contrast can provide real-time cross-validation of suspicious lesions with an orthogonal but complementary signal. This approach is expected to minimize the need for ex vivo biopsy, to accelerate medical decisions, and to improve clinical outcomes with minimal recall rates. In addition, false-positive fluorescence arising from residual gelatinous material in healthy colon can readily be identified by the low polarization contrast in healthy tissue. Previous studies have reported the use of polarized light scattering spectroscopy and reflectance spectroscopy for tissue analysis. 27,42,43 In these studies, ex vivo colonic tissue specimens were illuminated with polarized light and morphological maps of the specimens were constructed based on varying nuclear sizes, population density, and refractive index. 27,42,43 However, the system configuration and the challenges of real-time mapping of the polarization signal undermine their use for in vivo applications. We circumvented these challenges by developing an imaging configuration and sensor setup capable of reporting DoLP and angle of polarization (AOP) changes in the reflected light, while taking the relative position of the camera with respect to the tissue into account. As DoLP reports the fraction of reflected light that is linearly polarized, we expect differences between the DoLP signatures of the epithelial layer in diseased and healthy tissues. Precancerous and cancerous lesions within the epithelial layer possess distinct morphological features, such as increased nuclear size and increased nuclear-cytoplasmic ratio, which result in a significant multiply scattered component of light and a corresponding decrease in the DoLP signature. A mean DoLP change of 4% was obtained for polypoid tissue relative to the surrounding uninvolved region. This change was validated by the simulation results depicted in Fig. 4(b), where we predicted a change of 4% in DoLP signature when imaging at 15 deg. In the case of flat lesions, we obtained a DoLP difference of 7% relative to the surrounding Peyer's patch regions. The exceptionally high DoLP signal in Peyer's patches stems from the formation of firmly matted fibrotic bands within the serosa and mesentery, as shown in Fig. 8. The increase in concentration of linearly birefringent material, such as actin, results in a DoLP difference of 10% in the inflamed regions compared to the surrounding uninvolved colonic tissue. We demonstrated our ability to rapidly interrogate regions of interest in real time (at 40 fps) without distortion from motion artifacts arising from respiration and peristalsis, as shown in Figs. 8 and 9.
In summary, the combination of a tumor-targeting molecular probe, topical application of a contrast agent with a biocompatible sol-gel formulation, and the development of a multimodal color, NIR fluorescence, and polarization endoscope provides a new paradigm for the accurate detection of colonic lesions, including CAC, dysplasia, and the associated flat lesions. An extension of this approach to a wound-healing inflammation model 44 demonstrated a reversal of the fluorescence-polarization signal, highlighting the complementary nature of both techniques. The ease of incorporating both fluorescence and polarization fibers into an existing endoscope allows seamless integration into current screening colonoscopy protocols. A similar approach can be envisaged for other forms of epithelial cancer such as esophageal, cervical, bladder, skin, and stomach tumors that account for over 65% of the noncolorectal cancer deaths.
Tauseef Charanya is a biomedical engineering graduate student at Washington University in St. Louis. He received his BS degree in biomedical engineering from Texas A&M University in 2010. His research interests include endoscopy, surgical margin assessment tools, and fluorescence and polarization microscopy methods. He is a member and serves as the president of the WU Chapter of SPIE.

Sharon Bloch is a senior scientist at Washington University School of Medicine. She received her PhD degree in cell and molecular biology from Saint Louis University, where she studied bone remodeling. Her research interests include regulation of differential gene expression and molecular imaging techniques.
Gail Sudlow has been a research assistant at Washington University in St. Louis since 2008. She received her BS degree in biology from South Dakota State University. Her professional skills include cell culture, animal modeling, in vivo animal imaging, histology, immunohistochemistry, and fluorescence microscopy. She also serves as the lab manager.
Kexian Liang is a senior research technician at Washington University in St. Louis. She holds a BS degree in chemistry. Her professional skills include peptide synthesis, as well as conjugation and analytical chemistry.
Missael Garcia is a computer engineering graduate student at Washington University in St. Louis. He received his MS degree in electrical engineering from SIUE in 2013 and his BS degree in mechatronics engineering from ITESM in 2012. His research interests include imaging sensors, polarization optics characterization, and polarization applications such as three-dimensional (3-D) reconstruction.
Walter J. Akers is assistant professor of radiology. He combines his experience as a veterinarian and biological engineer to develop novel optical and multimodal molecular imaging approaches to diagnose, stage, and monitor disease processes through detection of molecular events in living systems and to translate optical imaging methods to clinical applications in human medicine.
Deborah Rubin is professor of medicine and developmental biology in the Division of Gastroenterology at Washington University School of Medicine. She is the director of the Advanced Imaging and Tissue Analysis Core of Wash U.'s Digestive Diseases Research Core Center, and chair of the MA/MD Program for Medical Student Research. She is a fellow of the American Gastroenterological Association. Her research interests include regulation of epithelial regeneration and stem cell therapy of short bowel syndrome, and epithelial-mesenchymal interactions in colitis and colitis associated cancer.
Viktor Gruev received his MS and PhD degrees in electrical and computer engineering from Johns Hopkins University, Baltimore, MD, USA, in May 2000 and September 2004, respectively. After finishing his doctoral studies, he was a postdoctoral researcher at the University of Pennsylvania, Philadelphia, PA, USA. Currently, he is an associate professor in the Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA. His research interests include imaging sensors, polarization imaging, bioinspired circuits and optics, biomedical imaging, and micro/nanofabrication.

Samuel Achilefu, PhD, is a professor of radiology, biomedical engineering, and biochemistry and molecular biophysics. He serves as the chief of Optical Radiology Laboratories, director of Washington University Molecular Imaging Program, co-leader of the Oncologic Imaging Program of the Siteman Cancer Center, and editor-in-chief of Current Analytical Chemistry. His research interests include the development of molecular imaging probes and therapeutic molecules, and new methods and devices for biomedical applications. He is a fellow of SPIE.
"year": 2014,
"sha1": "8d8eb52b04d8ee3d64177dfac3b60d28ebffc83a",
"oa_license": "CCBY",
"oa_url": "https://www.spiedigitallibrary.org/journals/Journal-of-Biomedical-Optics/volume-19/issue-12/126002/Trimodal-color-fluorescence-polarization-endoscopy-aided-by-a-tumor-selective/10.1117/1.JBO.19.12.126002.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4c6cf8ad0c4e93e08cb5ad5bf60d2587f1f196fd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Engineering"
]
} |
Paying for Health Care: Quantifying Fairness, Catastrophe, and Impoverishment, with Applications to Vietnam, 1993-98
The authors compare egalitarian concepts of fairness in health care payments (requiring that payments be linked to ability to pay) and minimum standards approaches (requiring that payments not exceed a prespecified share of prepayment income or not drive households into poverty). They develop indices for both sets of approaches. The authors compare the "agnostic" approach, which does not prespecify exactly how payments should be linked to ability to pay, with a recently proposed approach that requires payments to be proportional to ability to pay. They link the two approaches using results from the income redistribution literature on taxes and deductions, arguing that ability to pay can be thought of as prepayment income less deductions deemed necessary to ensure that a household reaches a minimum standard of living or food consumption. The authors show how both approaches can be enriched by distinguishing between vertical equity (or redistribution) and horizontal equity, and show how these can be quantified. They develop indices for "catastrophe" that capture the intensity of catastrophe as well as its incidence and also allow the analyst to capture the degree to which catastrophic payments occur disproportionately among poor households. Their measures of the poverty impact of health care payments also capture both intensity and incidence. To illustrate the arguments and methods, the authors use data on out-of-pocket health spending in Vietnam in 1993 and 1998-an interesting application, since 80 percent of health spending in that country was out-of-pocket in 1998. They find that out-of-pocket payments had a smaller disequalizing effect on income distribution in 1998 than 1993, whether income is measured as prepayment income or as ability to pay (that is, prepayment income less deductions, regardless of how deductions are defined). The underlying cause of the smaller disequalizing effect of out-of-pocket payments differs depending on whether the benchmark distribution is prepayment income or ability to pay. The authors find that the incidence and intensity of catastrophic payments-in terms of both prepayment income and ability to pay-declined between 1993 and 1998, and that both the incidence and the intensity of catastrophe became less concentrated among the poor. They also find that the incidence and intensity of the poverty impact of out-of-pocket payments diminished over the period. Finally, they find that the poverty impact of out-of-pocket payments is due primarily to poor people becoming even poorer rather than the nonpoor becoming poor and that in Vietnam in 1998 it was not expenses associated with inpatient care that increased poverty but nonhospital expenditures.
Introduction
Much has been written recently about equity or fairness in health financing, the financial protection function of health systems, "catastrophic" health care costs, and the impoverishment associated with health care outlays. The World Health Organization (WHO), for example, in its 2000 World Health Report (WHR) Health Systems: Improving Performance (World Health Organization 2000) proposed and estimated values of a fairness of financing contribution (FFC) index, and argued that providing financial protection to households is an important goal of any health system. The International Labour Organization (ILO), in a forthcoming report Toward Decent Work: Social Protection in Health for all Workers and their Families (Baeza et al. 2001) discusses the importance of considering "catastrophic" health care costs and of modifying insurance systems to provide protection against them. Reflecting the importance of the theme in its Voices of the Poor consultative exercise (Narayan et al. 2000), the World Bank in its 2000/2001 World Development Report (WDR) Attacking Poverty (World Bank 2000) emphasized the impoverishing effects of ill health in general and of the costs of health care in particular. Furthermore, the 1997 strategy paper for its health sector (World Bank 1997) committed the Bank to "working with countries to reducing the impoverishing effects of ill health...." Two distinct strands of thinking are evident in this debate. One is based on egalitarian notions of equity or fairness. A common theme here is that payments for health care ought to be linked not to usage of health services but rather to ability to pay, and the concern is with the degree of inequality in one or other variable. The other focuses on minimum standards. Here there is some divergence of view, but in each case the concern is not with inequality in any variable but rather with a variable exceeding or falling short of a threshold. One approach sets the threshold in terms of proportionality of income. The concern is to ensure that households do not spend more than some prespecified fraction of their income on health care (call it z). Spending in excess of z is labeled "catastrophic". The idea is, in effect, to ensure that households have at least (1 - z) of their income to spend on things other than health care. The other approach sets the minimum in terms of the absolute level of income. The concern here is to ensure that spending on health care does not push households into poverty-or further into it if they are already there. These two approaches are fundamentally different-neither is "right", and the choice between them must be made on normative and ideological grounds.
Our purpose in this paper is not to advocate a particular position, but rather to shed some new light on the measurement issues involved and to explore the interrelationships between the various measures and the approaches. We present measures of fairness, catastrophe in health spending and impoverishment, relate them to the previous literature, and compare them with one another. We illustrate the various measures empirically using data on out-of-pocket payments for health care in Vietnam. This is not an uninteresting case study. In 1998, around 80% of health spending in Vietnam was paid out-of-pocket. Unsurprisingly, in the World Bank's recent Voices of the Poor consultative exercise (Narayan et al. 2000), payments for health care came across as a major concern of poor people in Vietnam. Four key changes occurred in Vietnam during the 1990s which make the study of Vietnam and the period chosen additionally interesting (World Bank et al. 2001). First, user fees in the public sector rose. The increase was especially pronounced for hospital care, where fees appear to have risen by over 1000% in real terms between 1993 and 1998, but was also noticeable in commune health centers even though these were still supposed to be free in 1998. Second, there was a large rise in fees for private clinics and doctors. These apparently rose by nearly 600% over the period 1993-98. Third, expenditures on drugs actually fell over the period 1993-98, due to a 30% fall in the real price of medicines during the period in question. The latter seems to have been due in part to deregulation of the pharmaceutical sector and in part to increased donor assistance in drug supplies. Fourth, social health insurance was introduced in 1993 (World Bank et al. 2001). Initially, this was on a compulsory basis for formal sector workers and civil servants. However, more recently the scheme has been opened up to others on a voluntary basis-including the family members of insureds. By 1998, 12% of the Vietnamese population was covered by social insurance, a little over half of these being covered on a voluntary basis. Compulsory social insurance covers some of the costs of both inpatient and outpatient care, and also pays for drugs used in inpatient treatment. The voluntary scheme has two levels of coverage, the less generous (and less expensive) of which covers only inpatient care, while the higher-priced more generous package includes outpatient care and some drug costs. Most voluntary enrollees have opted for the less costly package. Insurance coverage is most common among the higher income groups.
It is important to be clear what we are not doing in this paper. Any assessment of the fairness of a health care system requires looking not just at what people pay for health services but also at how much they use services (van Doorslaer, Wagstaff, and Rutten 1993). Health care payments and health service utilization are, in other words, both key "focal" variables whose distributions have to be examined in any assessment of the fairness of a health care system. For each focal variable there is a distribution that is considered to be fair (the "target distribution"). The actual distribution of each focal variable reflects the characteristics of both the health care financing system and the health care delivery system. For example, the split between pre-payment and out-of-pocket payments influences not only the distribution of the prices people pay at the point of use for their health services (and hence the distribution of payments), but also their use of health services (and hence the distribution of utilization). Likewise, most characteristics of the health care delivery system (e.g. whether there is a GP who plays a gatekeeper function) influence not only the amount of health services people use (and hence the distribution of utilization) but also which type of services they use and hence how much they pay for them (and hence the distribution of payments). An assessment of whether a distribution of payments is fair is not therefore an assessment of whether the financing system is fair, any more than an assessment of whether a distribution of utilization is fair is an assessment of whether the delivery system is fair. Rather, these exercises ought to be seen simply as assessments of "equity in health care payments" and "equity in health care utilization" respectively. In this paper, our focus is exclusively on the former. It therefore sheds light on only one of the two issues that need exploring in any analysis of equity in health care financing. Elsewhere we have suggested (Wagstaff, Van Doorslaer, and Paci 1991; Wagstaff and Van Doorslaer 2000) and employed (Van Doorslaer et al. 2000) methods for assessing equity in the utilization of health care.
It is also worth being explicit about the rationales that underpin concerns over the two focal variables-health care utilization and payments for health care-since these are often not considered self-evident. Concern over the first can be thought of as deriving in part from the fact that health is considered a precondition for people to survive and flourish as human beings, in part from the fact that health is subject to potentially large "shocks" which are unforeseen and are rarely the result of a deliberate choice by the individual concerned, and in part from the presumption that health care is the appropriate way to restore health status following such a "shock" (Culyer and Wagstaff 1993). The rationale for the concern over the second focal variable also appears to derive in part from the fact that health care utilization is a response to an unforeseen and unsolicited "shock", but also in part from the fact that health care utilization can be sufficiently costly to represent a threat to a household's ability to purchase other goods and services that may, like health care, make a difference to its members' ability to survive and flourish as human beings (Culyer 1993). The most obvious example of these other goods and services is food. But clothing, shelter and energy are other important examples. Thus irrespective of whether a particular treatment enables a person to regain his or her former health status following a health "shock", if the expenditure associated with it compromises the household's ability to feed itself, this in itself is a matter for concern.
The paper is organized as follows. We start in sections 2-4 with the egalitarian approach. The common theme here is that payments for health care ought to be linked not to usage of services but rather to ability to pay (ATP). The first strand of this literature we explore-in section 2-acknowledges the ATP principle and the motivation for it, but takes the view that since policy-makers rarely if ever specify either how ATP is to be defined or how payments should be linked to ATP, the best way forward is simply to measure the degree of progressivity of existing payments on gross income (Wagstaff et al. 1992; Wagstaff, van Doorslaer, van der Burg et al. 1999) or the degree of income redistribution resulting from this progressivity (Wagstaff and Van Doorslaer 1997; Van Doorslaer et al. 1999). Since no target distribution is specified for payments, this approach does not generate any information on the degree of inequity in the distribution of payments for health care. We call this approach the "agnostic" approach. The second strand of literature, which is more recent and which we explore in section 3, is more ambitious and tries to quantify inequity (World Health Organization 2000). It both defines ATP and stipulates what the relationship between payments and ATP should be. In sections 2 and 3, we employ the methods developed in the literature on the progressivity and redistributive effect of taxes (Lambert 1993; Pfähler 1990; Wagstaff and van Doorslaer 2001). These have been widely employed in the literature we cover in section 2 and have the advantages of being informative and having properties that are well understood. As one of us has argued elsewhere (Wagstaff 2000), these methods have advantages over the index proposed by WHO in its WHR and used to date in the second strand of the egalitarian literature. One of the aims of the present paper is, in fact, to ground the ATP approach in a sounder measurement methodology. Having done this in section 3, the paper then moves to section 4, where it is argued that although the methods employed in sections 2 and 3 are attractive, they have the disadvantage of focusing on vertical differences. They ignore the fact that much of the inequity in payments for health care arises from horizontal inequity, not least because people on a given income can spend quite different amounts depending on whether they are struck by illness. In section 4, we show how the measurement in both sections 2 and 3 can be improved by use of an approach that allows vertical and horizontal inequities to be quantified (Aronson, Johnson, and Lambert 1994; Aronson and Lambert 1994; Wagstaff and Van Doorslaer 1997; Van Doorslaer et al. 1999).
Sections 5 and 6 then address the minimum standards approaches. In section 5 we explore the idea that health care payments above a threshold can be considered "catastrophic", and we propose and implement a variety of measures that capture the incidence and intensity of catastrophe in health spending. We also present measures that capture the degree to which catastrophic health spending is concentrated among the poor. Section 6 addresses the issue of impoverishment-the extent to which people are made poor-or poorer-by health spending. We present measures that capture the impoverishing effects of health spending, distinguishing between the incidence and intensity of impoverishment, and showing how one can assess the extent to which greater intensity is due to people being made even poorer by health spending or to people becoming poor through such spending. In our coverage of both catastrophic health spending and impoverishment, we illustrate the measures with data on out-of-pocket payments from Vietnam for both 1993 and 1998. In the case of impoverishment, we show the differential impacts of hospital costs and other health care spending. Section 7 contains a summary and offers some conclusions.
Progressivity and income redistribution
One approach, then, is simply to measure the degree of progressivity of the payments distribution and the income redistribution associated with it. Some theoretical results from the tax literature help clarify the relationship between these concepts, as well as the link between them and ability to pay.
Progressivity
Let pre-payment income (the analogue of pre-tax income in the tax literature) be x, and health care payments be T (the analogue of taxes). There are two useful results from the tax literature. The first concerns progressivity, which we can measure using Kakwani's (1977) index. Denote Kakwani's index of progressivity of health care payments on pre-payment income by $\pi^K_T$, which is defined as twice the area between the Lorenz curve for pre-payment income, $L_x(p)$, and the concentration curve for health care payments, $L_T(p)$. (The p in parentheses here indicates the person's or household's rank in the pre-payment income distribution.) The concentration curve for payments is formed by plotting the cumulative share of payments on the vertical axis against the cumulative proportion of households (or individuals) ranked by pre-payment income on the horizontal axis (Figure 1). Thus we have:

(1) $\pi^K_T = 2\int_0^1 \left[L_x(p) - L_T(p)\right] dp = C_T - G_x$,

where $G_x$ is the Gini coefficient for pre-payment income and $C_T$ is the concentration index for health care payments. $\pi^K_T$ is positive if the concentration curve for payments lies below the Lorenz curve for pre-payment income, indicating that payments are progressive on pre-payment income. A zero value of $\pi^K_T$ indicates proportionality, while a negative value indicates regressiveness.
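These indices are straightforward to compute from microdata. The Python sketch below is a minimal illustration using the standard "convenient covariance" formula for Gini and concentration indices; the function names and the fractional-rank convention are ours, not the procedure actually used for the tables below.

```python
import numpy as np

def concentration_index(var, rank_by):
    """Concentration index of `var` with units ranked by `rank_by`,
    via the covariance formula C = 2 * cov(v / mean(v), fractional rank).
    With rank_by = var, this reduces to the Gini coefficient."""
    order = np.argsort(rank_by, kind="stable")
    v = np.asarray(var, dtype=float)[order]
    ranks = (np.arange(1, len(v) + 1) - 0.5) / len(v)  # fractional ranks
    return 2.0 * np.cov(v / v.mean(), ranks, bias=True)[0, 1]

def kakwani(payments, income):
    """Eqn (1): pi_K = C_T - G_x, progressivity of payments on income."""
    return (concentration_index(payments, income)
            - concentration_index(income, income))
```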
Redistributive effect and the link with progressivity
Progressivity of payments on pre-payment income implies that payments exert an equalizing effect on the income distribution. The income distribution will, in other words, be more equal "after" payments than "before". This can be seen from the second relevant result from the tax literature, which concerns redistributive effect. We can measure the redistributive effect as the reduction or increase in income inequality associated with the move from the pre-payment to the post-payment income distribution. If we ignore any reranking of households in this process (an issue to which we return in section 4 below), we can measure redistributive effect using the Reynolds-Smolensky (RS) index (Reynolds and Smolensky 1977). Denote the RS index of redistributive effect of health care payments by $\pi^{RS}_T$, which is defined as twice the area between the Lorenz curve for pre-payment income, $L_x(p)$, and the concentration curve for post-payment income, $L_{x-T}(p)$ (Figure 1). Thus we have:

(2) $\pi^{RS}_T = 2\int_0^1 \left[L_{x-T}(p) - L_x(p)\right] dp = G_x - C_{x-T}$,

where $C_{x-T}$ is the concentration index for post-payment income. $\pi^{RS}_T$ is positive if the concentration curve for post-payment income lies above the Lorenz curve for pre-payment income, indicating that payments reduce income inequality. A zero value of $\pi^{RS}_T$ indicates zero redistributive effect, while a negative value indicates pro-rich income redistribution. The $\pi^{RS}_T$ index is linked to the Kakwani index $\pi^K_T$ by the following relationship:

(3) $\pi^{RS}_T = \frac{t}{1-t}\,\pi^K_T$,

where t is the payment share-i.e., the share that payments make up, on average, of pre-payment income. Thus redistributive effect is an increasing function of progressivity, so that payments that are progressive on pre-payment income make for a distribution of post-payment income that is more equal than the distribution of pre-payment income. This redistributive effect is larger the more progressive payments are on pre-payment income, and the larger is the payment share, t.
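The identity in eqn (3) can be checked numerically under the no-reranking convention. The sketch below (again with illustrative names) reuses concentration_index() and kakwani() from the earlier sketch.

```python
import numpy as np

def reynolds_smolensky(payments, income):
    """Eqn (2): pi_RS = G_x - C_{x-T}, with post-payment income ranked
    by PRE-payment income, i.e., ignoring any reranking."""
    post = np.asarray(income, float) - np.asarray(payments, float)
    return (concentration_index(income, income)
            - concentration_index(post, income))

# With t = T.mean() / x.mean(), eqn (3) holds as an identity:
# reynolds_smolensky(T, x) equals t / (1 - t) * kakwani(T, x)
# up to floating-point rounding.
```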
The measurement of progressivity and redistributive effect thus responds to the concern identified above with the distribution of health care payments: redistributive effect tells us how much more unequal (or equal) health care payments make the distribution of income. This is clearly of interest if our concern is with the level and distribution of income households have available for purchasing food and other "necessities" after they have paid for their health care. But it does not tell us whether payments are equitably distributed. The second sub-strand of literature, covered in section 3, tries to do this.
Progressivity and redistributive effect of out-of-pocket payments in Vietnam
Before turning to this strand of literature, we present results on the progressivity and redistributive effect of out-of-pocket payments in Vietnam in the years 1993 and 1998. The data we use are taken from the 1992-93 and 1997-98 Vietnam Living Standards Surveys (VLSS) undertaken jointly by the government of Vietnam and the World Bank. For the purpose of this exercise, the household is taken as the sharing unit for income and payments (both being assumed to be shared equally across household members), but the individual is taken as the unit of analysis. In the case of the 1997-98 survey (which is not nationally representative) the sample is weighted using sampling weights. Household pre-payment income is measured by total household consumption, gross of out-of-pocket payments for health services. Household post-payment income is simply pre-payment income so defined net of out-of-pocket payments. Pre-payment and post-payment income are both defined to be gross of food consumption. Both pre-payment and post-payment income are defined on a per capita basis. Out-of-pocket payments are derived in both years from two questions on health spending over the last 12 months, one specifically on inpatient care, the other on all other goods and services associated with the treatment and diagnosis of illness and injury. Table 1 shows, for each of the two years, the values of x (pre-payment income), T (out-of-pocket payments), t (the income share of out-of-pocket payments), $G_x$ (the Gini coefficient for pre-payment income), $C_T$ (the concentration index for out-of-pocket payments), $\pi^K_T$ (the Kakwani index of progressivity of out-of-pocket payments on pre-payment income), $C_{x-T}$ (the concentration index for post-payment income vis-a-vis pre-payment income), and $\pi^{RS}_T$ (the Reynolds-Smolensky index of redistributive effect for out-of-pocket payments vis-a-vis pre-payment income). It shows that the income share t of out-of-pocket payments fell because income rose faster than out-of-pocket payments. Out-of-pocket payments were regressive on pre-payment income in 1993, but were close to proportional in 1998. Inequality in pre-payment income fell very slightly between 1993 and 1998, but inequality in out-of-pocket payments rose. The degree of redistributive effect was negative (i.e., pro-rich) in both years but was much smaller in 1998 than in 1993, in part because of the reduction in regressivity but in part because of the reduced share of out-of-pocket payments in pre-payment income (the reduction in t).
How much progressivity and income redistribution is fair?
Measuring the progressivity and redistributive effect of health care payments on pre-payment income does not tell us whether or not they are equitable per se. To answer this question one needs to adopt positions with respect to both the definition of ATP and the appropriate link between payments and ATP.
The WHO's 2000 WHR (World Health Organization 2000) does both. It argues that ATP should be defined as the household's non-food spending, this being argued to be a good indicator of a household's long-term "normal" living standards. One can think of this approach as taking the household's pre-payment income, deducting its food expenditure (as a proxy for non-discretionary expenditure), and then deducting (or adding) any income windfalls (or shortfalls) compared to the household's "normal" income. Denote ATP by y and any deductions allowed in moving from pre-payment income to ATP by D(x). Thus we have:

(4) $y = x - D(x)$.

Using some results from the tax literature, we can explore this issue further and link the concept of ATP to the concepts of progressivity and redistributive effect.
Progressivity and ability to pay
Following Pfähler (1990), the index of progressivity of health care payments on pre-payment income, $\pi^K_T$, can be decomposed into two parts: a part capturing the progressivity of payments on ATP; and a part capturing the progressivity of deductions on pre-payment income:

(5) $\pi^K_T = \pi^R_T - \frac{\delta}{1-\delta}\,\pi^K_D$.

Here $\pi^R_T$ measures the progressivity of payments on ATP, defined as

(6) $\pi^R_T = 2\int_0^1 \left[L_y(p) - L_T(p)\right] dp = C_T - C_{x-D}$,

so that $\pi^R_T$ is positive-and hence payments are progressive on ATP-if the concentration curve for ATP, y, lies above the concentration curve for payments, T. In eqn (5), $\delta$ is the average deduction rate; i.e., deductions, D, expressed as a proportion of pre-payment income, x. $\pi^K_D$ in eqn (5) measures the progressivity of deductions on pre-payment income, and is defined as

(7) $\pi^K_D = 2\int_0^1 \left[L_x(p) - L_D(p)\right] dp = C_D - G_x$,

which is positive if the Lorenz curve for pre-payment income lies above the concentration curve for deductions.
From eqn (5), it is evident that the progressivity of payments on pre-payment income reflects not just the progressivity of payments on ATP, but also the progressivity of deductions on pre-payment income. Thus if deductions are a higher proportion of pre-payment income for the better-off than for the poor (i.e., if D is progressive or income-elastic), $\pi^K_D$ will be positive and deductions will exert a dampening effect on the progressivity of payments on pre-payment income. By contrast, if deductions are a smaller proportion of pre-payment income for the better-off than for the poor (i.e., if D is regressive or income-inelastic), $\pi^K_D$ will be negative and deductions will exert an enhancing effect on the progressivity of payments on pre-payment income. Payments will be more progressive on pre-payment income the higher is $\delta$ (deductions as a proportion of pre-payment income) and the more income-inelastic deductions are.
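The decomposition in eqn (5) can be verified numerically on any microdata set. The sketch below reuses the helpers from the earlier sketches; the function name is ours, and D stands for whatever deduction is chosen.

```python
import numpy as np

def pfahler_parts(payments, income, deductions):
    """Pieces of eqn (5): pi_K_T = pi_R_T - delta/(1-delta) * pi_K_D,
    with all concentration indices using the pre-payment income ranking."""
    x = np.asarray(income, float)
    T = np.asarray(payments, float)
    D = np.asarray(deductions, float)
    y = x - D                                   # ability to pay, eqn (4)
    delta = D.mean() / x.mean()                 # average deduction rate
    pi_R = concentration_index(T, x) - concentration_index(y, x)   # eqn (6)
    pi_KD = concentration_index(D, x) - concentration_index(x, x)  # eqn (7)
    lhs = kakwani(T, x)
    rhs = pi_R - delta / (1.0 - delta) * pi_KD
    return lhs, rhs                             # equal up to rounding
```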
One of the implications of this is that if one's interest is in seeing whether payments are appropriately linked to ATP, a progressivity analysis of payments on pre-payment income will not help. WHO (World Health Organization 2000) argues that payments for health care should be proportional to ATP. In other words, $\pi^R_T$ ought to be zero, or equivalently there should be the same degree of inequality in payments as there is in ATP. In this sense, then, levying payments for health care in proportion to ATP is egalitarian. From eqn (5), it is clear that estimates of the progressivity of payments on pre-payment income cannot help us discern whether this condition is satisfied.
Redistributive effect and ability to pay
Similar problems arise in the context of redistributive effect. Following Pfähler (1990), the RS index of health care payments, $\pi^{RS}_T$, can also be decomposed into two parts. The first part captures the redistributive effect deriving from the payment structure (vis-a-vis ATP), while the second captures the redistributive effect brought about by the deductions. We have:

(8) $\pi^{RS}_T = \frac{1-\delta-t}{1-t}\,\pi^{RS}_R - \frac{t}{1-t}\,\pi^{RS}_D$,

where $\pi^{RS}_R$ measures the redistributive effect of payments attributable to the relationship between payments and ATP. This is defined as:

(9) $\pi^{RS}_R = 2\int_0^1 \left[L_{y-T}(p) - L_y(p)\right] dp = C_y - C_{y-T}$,

so that $\pi^{RS}_R$ is positive-and hence the link between payments and ATP has a pro-poor redistributive effect-if the concentration curve for ATP lies below the concentration curve for income after health care payments and deductions, y-T. In other words, $\pi^{RS}_R$ is positive if there is more income inequality before payments (but after deductions) than after payments (and after deductions). In eqn (8), $\pi^{RS}_D$ measures the redistributive effect associated with the deductions, and is defined as

(10) $\pi^{RS}_D = 2\int_0^1 \left[L_y(p) - L_x(p)\right] dp = G_x - C_y$,

which is positive if the Lorenz curve for pre-payment income lies below the concentration curve for ATP.
From eqn (8), it is evident that the redistributive effect of payments is an increasing function of the redistributive effect deriving from the link between payments and ATP (assuming $1-\delta-t>0$), and is a decreasing function of the redistributive effect brought about by the deductions. The link with progressivity can be made clear by noting that, by analogy with eqn (3), we have:

(11) $\pi^{RS}_R = \frac{t}{1-\delta-t}\,\pi^R_T$,

which upon substitution into eqn (8) yields:

(12) $\pi^{RS}_T = \frac{t}{1-t}\,\pi^R_T - \frac{t\delta}{(1-t)(1-\delta)}\,\pi^K_D$,

so that the redistributive effect of payments is an increasing function of the progressivity of payments on ATP and a decreasing function of the progressivity of deductions on pre-payment income.
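Eqn (12) can be checked in the same way as eqn (5). The sketch below (illustrative names again) reuses the helpers above and should return two equal numbers up to rounding.

```python
import numpy as np

def redistributive_parts(payments, income, deductions):
    """Check of eqn (12): pi_RS_T against its two-part decomposition."""
    x, T, D = (np.asarray(a, float) for a in (income, payments, deductions))
    y = x - D
    t, delta = T.mean() / x.mean(), D.mean() / x.mean()
    pi_R = concentration_index(T, x) - concentration_index(y, x)
    pi_KD = concentration_index(D, x) - concentration_index(x, x)
    lhs = concentration_index(x, x) - concentration_index(x - T, x)  # pi_RS_T
    rhs = t / (1 - t) * pi_R - t * delta / ((1 - t) * (1 - delta)) * pi_KD
    return lhs, rhs
```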
If ATP and fairness are defined along the lines proposed by WHO, and a system achieves these desiderata, payments for health care in that system will bring about an amount of income redistribution equal to $-\frac{t\delta}{(1-t)(1-\delta)}\,\pi^K_D$. This is positive-i.e., post-payment income inequality will be less than pre-payment income inequality-if deductions are income-inelastic. Thus pro-poor income redistribution in the move from pre-payment to post-payment income is compatible with equity in the sense defined by WHO. But, of course, such redistribution could be due also-at least in part-to progressivity of payments on ATP, which would violate WHO's definition of equity. Simply knowing how redistributive health care payments are on pre-payment income (i.e., the value of $\pi^{RS}_T$) does not allow one to distinguish between these two scenarios.
Fairness of out-of-pocket payments in Vietnam
In section 2.2, it was established that over the period 1993-98 in Vietnam out-of-pocket payments became less regressive (indeed became mildly progressive) and the redistributive effect became less pro-rich (indeed became mildly pro-poor). These changes might be interpreted as equity-enhancing changes. But the Pfähler-type decompositions using the WHO definitions of ATP and fairness tell a less optimistic story (see column [a] of Table 2).
Over the period 1993 to 1998, food spending became less concentrated among the better-off ($C_D$ fell). Looked at in terms of deductions and ATP, this means that poorer households had to shoulder a larger share of the burden of food expenses in 1998 than in 1993. Equity requires that this be borne in mind. Payments would need to have a less disequalizing (or more equalizing) effect on income to compensate for the shift in the distribution of food costs to the disadvantage of the poor. Thus the aforementioned evidence that out-of-pocket payments had a smaller pro-rich redistributive effect in 1998 than in 1993 does not necessarily mean that equity in the payments distribution increased. Some reduction in pro-rich redistributive effect would have been required simply to allow the poor to stand still-relatively speaking. To some degree, this imperative is reduced by the smaller share of food costs in 1998-reflected in the (slight) reduction of $\delta$ from 50.8% to 49.7%. Looking at $\pi^R_T$ and $\pi^{RS}_R$, we see that out-of-pocket payments became less regressive on ATP in 1998 compared to 1993, and that this reduced regressiveness of out-of-pocket payments on ATP was associated with less pro-rich income redistribution in 1998. But the changes were smaller than the changes vis-a-vis the pre-payment distribution.
Furthermore, as is to be expected given the income-inelasticity of the food spending distribution, out-of-pocket payments are more regressive and produce a larger pro-rich redistributive effect when assessed vis-a-vis the distribution of ATP than when assessed vis-a-vis the distribution of pre-payment income.
The upshot is that from the point of view of out-of-pocket payments, equity-defined a la WHO-improved between 1993 and 1998, but not by as much as is suggested by the progressivity and redistributive effect indices vis-a-vis pre-payment income. The reason is that over the period 1993-98 food spending became less concentrated among the better-off, so that although the distribution of pre-payment income became slightly more equal, the distribution of ATP became more unequal.
Some unresolved issues concerning fairness and ATP
The attraction of defining ATP and stipulating a target relationship between payments and ATP is that one ends up with a clear-cut answer to the question of whether a distribution of health care payments is equitable or not. The usefulness of adopting this approach is entirely contingent, however, on the acceptability of the value judgments made-that ATP can be defined as pre-payment income (or rather total household consumption) less food spending; and that equity requires that payments be proportional to ATP. Both are open to debate.
Should food deductions be flat rate?
The first is, in effect, the issue of how deductions, D(x), ought to be defined in order to move from pre-payment income to ATP. One obvious question is whether one ought to deduct actual food spending or a food allowance indicating the cost of reaching a target level of nutrient intake (say, 2100 calories a day). Some people, of course, are so poor they have too little income to meet even such basic requirements. In Vietnam, in 1993, for example, 23% of individuals had too little money to purchase enough food to reach 2100 calories a day. In such cases, it seems sensible to set ATP equal to zero, in just the same way as someone whose pre-tax income is lower than the tax allowance is deemed (in the absence of a negative income tax system) to have zero taxable income.[1] Deducting an allowance for food costs will clearly alter the average of ATP and its distribution, as well as the deduction rate $\delta$.
[1] Alternatively, the full cost of reaching 2100 calories could be deducted, leaving such individuals with a negative ATP. Proportionality in this case would require that health care payments be negative, which is clearly an unhelpful benchmark.
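The alternative definitions of D(x) explored in this section are easy to state in code. The sketch below (function and variable names are ours) implements the zero floor on ATP just discussed; the allowance values would be, for example, the costs of 2100 calories a day reported below.

```python
import numpy as np

def atp_actual_food(x, food_spending):
    """Column [a]: deduct the household's actual food spending."""
    return np.asarray(x, float) - np.asarray(food_spending, float)

def atp_food_allowance(x, allowance):
    """Column [b]: deduct a flat-rate food allowance,
    flooring ATP at zero for those below the allowance."""
    return np.maximum(np.asarray(x, float) - allowance, 0.0)

def atp_nonhealth_poverty_line(x, z_nonhealth):
    """Column [c]: deduct the poverty line net of its health component."""
    return np.maximum(np.asarray(x, float) - z_nonhealth, 0.0)
```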
Applying this idea to Vietnam in 1993 and 1998 produces the results indicated in column [b] of Table 2. The costs of reaching 2100 calories a day have been calculated to be 750 and 1287 thousand Dong respectively (current prices) (Glewwe, Gragnolati, and Zaman 2000). Column [a] for each year shows the effect of defining D(x) as the per capita food spending of the individual's household, while column [b] shows the effect of deducting a food allowance corresponding to 2100 calories but constraining ATP to be non-negative. Unsurprisingly, the second case produces a distribution of deductions that is less pro-rich than the first case (cf. the values of $C_D$). The value of $\delta$ (the average deduction rate) falls in the move from full deductibility to the food allowance. The element of progressivity of payments on pre-payment income attributable to the deductions is higher for case [a] than for case [b]. Unsurprisingly, because the progressivity of payments on pre-payment income remains the same, the regressiveness of payments on ATP rises. We conclude, therefore, that payments appear more regressive on ATP when the latter is defined as pre-payment income less a flat-rate food allowance than when it is defined as pre-payment income less actual food spending.
Should deductions reflect only food costs?
With respect to deductions, there is, of course, the issue of whether D(x) should reflect food costs only or whether it should reflect other costs that might be considered to be non-discretionary. The costs of shelter (e.g. rent), clothes, heating and energy are obvious examples. But what about the costs of, say, water, garbage disposal and education? Again, there is the issue of whether one should deduct actual expenses incurred or whether one should deduct an allowance. The latter approach is less straightforward than in the case of food, where it is relatively easy to agree on a target level of food intake (say, 2100 calories a day) and then compute the cost of reaching it. The obvious alternative is to adopt the national or international poverty line as the appropriate value for D(x). The difficulty with this is that it is intended to cover not just the costs of food and other key non-food items such as shelter, energy, clothing, and so on, but also the costs of health care. This is not a trivial issue in countries like Vietnam where around 5-6% of household consumption is devoted to out-of-pocket payments for health care. Clearly, one would need to adjust the national or international poverty line downwards to reflect this when coming up with a figure for D(x).
We have done this exercise for Vietnam for 1993 and 1998, using the national poverty lines computed by the World Bank and the Government of Vietnam (Glewwe, Gragnolati, and Zaman 2000). These were constructed by computing the annual cost of reaching 2100 calories per person per day (in current prices 750 and 1287 thousand Dong in 1993 and 1998 respectively), and then adding to this amount a sum to cover non-food consumption. In the case of 1993, the amount added was the average non-food spending of households in the third quintile (411 thousand Dong), this being the quintile whose average food intake came closest to 2100 calories per person per day. In the case of 1998, the figure of 411 thousand Dong was simply inflated by the value of the price index for non-food items with 1993 as the base year (1.225), giving a non-food element to the poverty line for 1998 of 503 thousand Dong. We then took out from the non-food elements of the 1993 and 1998 poverty lines amounts to cover the costs of health care. In the case of 1993, people in the third quintile averaged 70 thousand Dong (current prices) per person per year on out-of-pocket payments for health care. We then computed a Laspeyres price index for the health sector for Vietnam for 1998, using data for 1993 and 1998 on contacts per person per year and out-of-pocket payments per contact, broken down by provider type and by quintile of per capita consumption (World Bank et al. 2001). For all quintiles combined, this gave a figure for 1998 of 1.289. This compares to a figure for all non-food items of 1.225 and a figure for the overall CPI of around 1.430. Applying this index value to the health spending component of the poverty line for 1993 gives a figure for 1998 of 90 thousand Dong (=70x1.289). The non-health poverty lines for 1993 and 1998 were thus 1091 and 1700 respectively, which were then used as values for D(x). As in the case of the deductions for food costs, individuals with a negative ATP were assigned a zero ATP.
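Before turning to the results, note that the health-sector Laspeyres index just described weights current-year price relatives by base-year quantities. A minimal sketch under that assumption (arrays indexed by provider type; names ours):

```python
import numpy as np

def laspeyres_index(q_base, p_base, p_current):
    """Laspeyres price index: base-period quantities (contacts per person)
    valued at current prices vs. base prices (payments per contact)."""
    q = np.asarray(q_base, float)
    return ((q * np.asarray(p_current, float)).sum()
            / (q * np.asarray(p_base, float)).sum())
```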
The results of this exercise are shown in column [c] of Table 2. Evidently, deductions are less regressive on pre-payment income when defined in terms of an allowance for all goods and services (except medical care) than when defined in terms of simply an allowance for food ($\pi^K_D$ is less negative). However, since $\delta$ is much larger when the more generous deduction is used, the progressivity-enhancing effect of deductions is larger. Out-of-pocket payments emerge as more regressive on ATP when deductions cover non-food as well as food items, and more regressive than when deductions are set equal to actual food spending. However, the pattern across the two years is the same whichever of the three deductions is used-out-of-pocket payments became more regressive on ATP despite becoming less regressive (in fact becoming progressive having been regressive) on pre-payment income.
Should payments be proportional to ATP?
In principle, then, requiring that payments be proportional to ATP has the attraction of providing an answer to the question of how progressive payments ought to be on pre-payment income, or equivalently how much narrower or wider income inequalities ought to be post-payment than pre-payment. In practice, however, as has been seen, there is the problem that how one defines ATP-i.e., how one defines the "deductions" D(x)-appears to have an important influence on one's conclusions concerning the fairness of the distribution of health care payments and changes in equity.
Quite aside from this issue, there is the issue of whether policymakers everywhere would endorse the value judgment that health care payments ought to be proportional to ATP. Although the WHO claims that this value judgment seems to be the one that receives majority support in an opinion survey from a convenience sample (Murray et al. 2001), it is obvious that one might argue that-in much the same way as those with zero ATP are de facto exempt from contributing-ceilings or maximum contributions could be set at a certain level of ATP above which payments are not required to rise any further. Irrespective of the-inevitably arbitrary-choice of a target distribution of payments as a function of ATP, the framework presented in this section is helpful in unraveling the various factors that influence the difference between the actual distribution and the desired distribution.
Vertical vs. horizontal inequity
So far in the paper the focus has been on vertical issues-how people with different pre-payment incomes or different abilities to pay ought to pay for their health care relative to their income. In the case where payments are required to be proportional to ATP, measurement proceeds by searching for departures from proportionality in the vertical relationship between payments and ability to pay (as captured by $\pi^R_T$), or by comparing inequality in income after deductions and before health care payments with inequality in income after deductions and health care payments (as captured by $\pi^{RS}_R$). In the case where the requirement of proportionality to ATP is not assumed, measurement proceeds by searching for departures from proportionality in the vertical relationship between payments and pre-payment income (as captured by $\pi^K_T$), or by comparing inequality in pre-payment income with inequality in post-payment income (as captured by $\pi^{RS}_T$). In each case, the focus is on vertical differences, and, in the case of the ATP approach, on vertical equity.
There is another aspect of equity, namely horizontal equity-the issue of how far people with similar abilities to pay end up spending similar amounts on health care. In the context of health financing, and especially out-of-pocket payments, this is especially important, since the randomness of ill health makes it highly likely that people with similar incomes will end up paying very different amounts, with some paying nothing and others paying very large amounts. Indeed, it seems likely that these horizontal inequities-if that is what they are-may well dominate the vertical differences. This contrasts with, say, the case of the personal income tax, for which the techniques employed above were developed. There, it is differential treatment of people with different incomes that is likely to be more important than unequal treatment of people with similar incomes (Wagstaff, van Doorslaer, van der Burg et al. 1999).
Horizontal inequity matters for two reasons. First, it may give rise to people having different positions in the income distribution "before" and "after" health care payments. If everyone at a given income paid the same, people's rank in the pre-payment and post-payment distributions would be identical. If, on the other hand, people at a given income pay different amounts, some reranking will occur. This "reranking" came out in the Bank's Voices of the Poor exercise in Vietnam. In Lao Cai-in the mountainous north of the country-one 26-year-old man revealed how the hospital costs associated with his daughter's severe illness had resulted in him moving from being one of the richest in his community to being one of the poorest. Reranking matters in part because it might be considered unfair in its own right, but also because it violates the assumption of no reranking that underlies the framework above and the empirical results based upon it. But there is a second reason for wanting to get to grips empirically with horizontal inequity, which is that even if reranking is of no special ethical significance per se, horizontal inequity most certainly is. Furthermore, the causes of horizontal inequity and the policy responses to it are different from those relating to vertical differences. Muddling up vertical and horizontal inequities is unhelpful both for understanding the causes of inequity and for thinking about policies to reduce it. This section outlines a framework that allows one to distinguish empirically between the two and also allows the phenomenon of reranking to be incorporated and indeed quantified.
Decomposing redistributive effect
In eqn (2) above, we assumed away the possibility of reranking. If reranking occurs, redistributive effect needs to be measured as:

(13) $RE = G_x - G_{x-T}(p')$,

where $G_{x-T}(p')$ is the Gini coefficient for post-payment income and the p' in parentheses indicates the ranking in the post-payment distribution. RE is positive if the Lorenz curve for post-payment income lies above the Lorenz curve for pre-payment income, indicating that payments reduce income inequality. RE will coincide with $\pi^{RS}_T$ only if there is no reranking in the move from the pre-payment to the post-payment income distribution. RE has been shown by Aronson, Johnson and Lambert (AJL) (Aronson, Johnson, and Lambert 1994) to depend on four key factors and to be decomposable as follows:

(14) $RE = V - H - R$,

with

(15) $V = \frac{t}{1-t}\,\tilde{\pi}^K_T, \qquad H = \sum_x \alpha_x G_{F(x)}$,

and

(16) $R = G_{x-T} - C_{x-T}$.

In eqns (14) and (15), households are divided into groups of pre-payment equals, and redistributive effect is partitioned into three components: a vertical component, V, capturing the different payments made by the various groups of pre-payment equals; a horizontal inequity component, H, capturing the different payments made by households with similar pre-payment incomes; and a reranking component, R, capturing the movements of households up and down the income distribution in the transition from the pre-payment to the post-payment income distribution. V is measured by $\frac{t}{1-t}\tilde{\pi}^K_T$, where the Kakwani index of progressivity is computed using the average payments made by members of the household's pre-payment income group rather than each household's actual payments. V thus indicates the amount of income redistribution attributable to the fact that, on average, households at different points in the income distribution do or do not pay different amounts for their health care. H is classical horizontal inequity. Inequality in post-payment income is measured in each group of pre-payment equals via a Gini coefficient, $G_{F(x)}$. A weighted sum of these Gini coefficients is then computed, with the $\alpha_x$ as weights, defined as the product of the population share and post-payment income share of households with pre-payment income x. The final term R is measured by the difference between the Gini coefficient for x-T and the concentration index for x-T, where in the latter case households are ranked by pre-payment income.
In principle, reranking and horizontal inequity are distinct concepts. However, in practice they are hard to separate, not least because the most likely reason for reranking is, in fact, the existence of horizontal inequity. This is shown in Figure 2 in the case where payments are progressive on pre-payment income, x, and hence post-payment income, x-T, increases in pre-payment income but at a decreasing rate. The average post-payment income at any level of pre-payment income can be read off the function in Figure 2. There will, however, be variations around this mean. These variations are reflected in a "fan" emanating from the point on the post-payment income function corresponding to the pre-payment income level in question, branching out to the post-payment income axis. For example, a household with a pre-payment income of $1100 might pay $250 in health care payments, ending up in the post-payment distribution behind the average household with a pre-payment income of $1000, which spends only $100 on health care. Thus reranking is caused by horizontal inequity. Given this, it seems unwise to try to make too much of the distinction between R and H. This is reinforced by the fact that, although in the population at large there will be households with the same pre-payment income, in a household survey such instances are rare. In empirical work, it therefore becomes necessary to define equals by reference to bands of pre-payment income, within which, for the purpose of the exercise, households are deemed to be equal. The choice of bandwidth inevitably affects the computed value of H, but also affects the computed value of R. Specifically, it seems to be the case that as the bandwidth is narrowed, H falls and R rises, though their sum does not seem to change much. In what follows we emphasize the sum of H and R, rather than their individual values.
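A sketch of the AJL decomposition along these lines, with "equals" defined by pre-payment income bands and H recovered as a residual (as in the empirical work below), is given here; the banding, the function name, and the use of pandas are our own assumptions, reusing concentration_index() from the earlier sketch.

```python
import pandas as pd

def ajl_decomposition(x, pay, band_edges):
    """RE = V - H - R (eqn (14)) with equals defined by income bands."""
    df = pd.DataFrame({"x": x, "pay": pay})
    df["post"] = df["x"] - df["pay"]
    df["band"] = pd.cut(df["x"], band_edges)
    t = df["pay"].mean() / df["x"].mean()             # payment share

    RE = (concentration_index(df["x"], df["x"])
          - concentration_index(df["post"], df["post"]))          # eqn (13)
    # V: Kakwani index computed on band-average payments, eqn (15)
    pay_bar = df.groupby("band", observed=True)["pay"].transform("mean")
    V = t / (1 - t) * (concentration_index(pay_bar, df["x"])
                       - concentration_index(df["x"], df["x"]))
    # R: Gini of post-payment income minus its concentration index when
    # households are ranked by PRE-payment income, eqn (16)
    R = (concentration_index(df["post"], df["post"])
         - concentration_index(df["post"], df["x"]))
    H = V - R - RE                                     # residual, eqn (14)
    return RE, V, H, R
```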
The sources of redistributive effect of out-of-pocket payments in Vietnam
RE can be computed simply as the difference between $G_x$ and $G_{x-T}$. To compute $\tilde{\pi}^K_T$ (or more precisely the concentration index for out-of-pocket payments, $C_T$) and $C_{x-T}$, one has to decide on appropriate groups of pre-payment equals. In this illustration, pre-payment equals were defined by expressing pre-payment income as a multiple of the overall poverty lines for 1993 and 1998. Households below the poverty line were divided into eight groups, the first comprising households with a pre-payment income between 0% and 12.5% of the poverty line, the second comprising households with a pre-payment income between 12.5% and 25% of the poverty line, and so on. Households with a pre-payment income of between 100% and 200% of the poverty line were divided into just four groups, along similar lines, while those with pre-payment incomes in excess of 200% of the poverty line were divided into just three groups. To put this into perspective, nearly 60% of households fell below the poverty line in 1993, and nearly 40% did in 1998. With groups of pre-payment equals defined, it is straightforward to compute $C_T$ on the grouped data, and to form the ranking variable to compute $C_{x-T}$. Using the former and $G_x$, one can compute $\tilde{\pi}^K_T$, and using the latter and $G_{x-T}$ one can compute R. This leaves H, which can be computed as a residual. Table 3 shows the decomposition results of RE on pre-payment income for 1993 and 1998. In 1998, the redistributive effect of out-of-pocket payments was less than half of what it was in 1993. Although all four components-i.e., t, $\tilde{\pi}^K_T$, H and R-were reduced in absolute value, it is clear from the percentage distributions that most of the reduction is due to the reduced regressiveness of the out-of-pocket payments. Whereas in 1993 the vertical component V accounted for about 47% of total RE, its share of RE in 1998 was reduced to only 5.7%.
The AJL decomposition and the ATP approach: results for Vietnam
The AJL decomposition can also be applied to the ATP approach. The approach outlined in section 3 is useful if all deviation from proportionality of payments to ATP arises from vertical inequity. In this case, the redistributive effect index RE^RS and the Kakwani index π_K will convey the information required. But if there is horizontal inequity, RE^RS will reflect this as well as vertical inequity. By employing the AJL decomposition, one can quantify: (a) the extent to which people with different abilities to pay end up paying different proportions of their ATP toward health care (V); (b) the extent to which people with similar abilities to pay end up paying different proportions of their ATP toward health care (H); and (c) the extent to which people change positions in the income distribution as a result of health care payments (R).
We applied the AJL methodology to the ATP approach, using per capita pre-payment income (i.e., consumption) less actual food spending as the measure of ATP. Equals were defined in the same way as with pre-payment income, but now using multiples of the poverty line exclusive of its food component to generate the groups of ATP "equals". Table 4 shows the results of this exercise for 1993 and 1998. As in the case of pre-payment income, the total redistributive effect decreased between the two years. In contrast to the previous table, however, the percentage contribution to RE of the vertical component V increases from 42% in 1993 to 63% in 1998, despite the reduction in t from 12.6% to 10.7%. This is due to the increased regressiveness of out-of-pocket payments on ability to pay, as shown by the decrease in π_K.
Minimum standards and catastrophic health care costs
The egalitarian approach, through the measurement of redistributive effect, captures the share of pre-payment income being spent on health care (captured by t in eqn (3) or (15), for example), as well as how unequal this share is across the income distribution (captured by π_K in eqns (4) and (14)). But it does not respond to the concern that payments might be "too large". It is to this concern that the minimum standards approach responds. Two sub-strands of literature can be identified, both of which are built up around the notion that a focal variable ought not to exceed or fall short of a threshold. One sub-strand sets the threshold in terms of a proportion of income. The concern in this case is to ensure that households do not spend more than some prespecified fraction of their income on health care; spending in excess of this threshold is labeled "catastrophic". The second sub-strand sets the minimum in terms of the absolute level of income. The concern here is to ensure that spending on health care does not push households into poverty, or further into it if they are already there. We consider each in turn, beginning in this section with catastrophic expenses.
The ethical position underlying this sub-strand of literature is that no one ought to spend more than a given fraction (say z_cat) of their income on health care. A figure for z_cat is inevitably arbitrary, and it would clearly depend on whether income was defined simply in terms of pre-payment income, x, or in terms of some measure of ATP, y = x − D(x). If the latter, clearly one ought to consider the various issues discussed above concerning how D(x) is to be defined. If D(x) is to cover only food expenditures, should it cover actual expenses or should it be a flat-rate allowance? If the latter, what should be done with individuals whose pre-payment incomes fall short of the allowance? In this exercise, these last two strategies are problematic, since y could become zero or negative. In the case where y is zero, the ratio of health care spending to income is undefined, and individuals with negative values of y will end up with smaller (in numerical size) values of T/y than those with small health spending and/or large incomes.
Measuring the incidence and intensity of catastrophic health care costs
Suppose one has settled on whether x or y will be used, on the definition of D(x) in the event the latter is to be used, and on an approach to circumvent the problems noted above. Suppose too that a threshold z_cat has been agreed for T/x or T/y above which expenses are to be considered "catastrophic". The obvious summary measure of the extent to which a given sample of individuals has been exposed to catastrophic expenses (defined along these lines) would be the number (or fraction) of individuals whose health care costs as a proportion of income exceed the threshold. The horizontal axis in Figure 3 shows the cumulative share of the sample, ordered by the ratio T/x, beginning with individuals with the largest ratio. Reading off this parade at the threshold z_cat, one obtains the fraction H_cat of the sample whose expenditures as a proportion of their income exceed the threshold z_cat. This is the catastrophic payment headcount. Thus let O_i be the catastrophic 'overshoot', equal to T_i/x_i − z_cat (or T_i/y_i − z_cat) if T_i/x_i > z_cat and zero otherwise, and let E_i = 1 if O_i > 0 and zero otherwise. Then the catastrophic payment headcount is equal to:

$H_{cat} = \frac{1}{N}\sum_{i=1}^{N} E_i = \mu_E,$

where N is the sample size and μ_E is the mean of E_i.
The difficulty with this measure is that it fails to capture the amount by which individuals exceeding the threshold actually exceed it. This presumably matters. By analogy with the poverty literature, one could define not just a catastrophic payment headcount but also a measure analogous to the poverty gap, which we call the catastrophic payment gap (or excess). This captures the amount by which payments (as a proportion of income) exceed the threshold z_cat. We divide this through by the sample size to get the average excess G_cat. Thus we measure the intensity or severity of catastrophic payments by defining the average 'gap' (or excess) as

$G_{cat} = \frac{1}{N}\sum_{i=1}^{N} O_i = \mu_O,$

where μ_O is the mean of O_i. The mean positive 'gap' is:

$MPG_{cat} = \frac{\sum_{i} O_i}{\sum_{i} E_i} = \frac{\mu_O}{\mu_E}.$
We therefore have $G_{cat} = H_{cat} \times MPG_{cat}$. In other words, the overall mean catastrophic 'gap' equals the fraction with a positive gap times the mean positive gap.
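Since all three measures are simple sample means, a few lines of code suffice to make the definitions concrete. This is a minimal sketch, assuming arrays of payments T and incomes x (or ATP y) and a threshold z_cat; the function name is ours, not the paper's.

```python
import numpy as np

def catastrophic_measures(T, x, z_cat):
    """Headcount, mean gap, and mean positive gap of catastrophic payments."""
    share = np.asarray(T, float) / np.asarray(x, float)      # T/x (or T/y)
    overshoot = np.where(share > z_cat, share - z_cat, 0.0)  # O_i
    exceeds = (overshoot > 0).astype(float)                  # E_i
    H_cat = exceeds.mean()                                   # mu_E
    G_cat = overshoot.mean()                                 # mu_O
    MPG_cat = G_cat / H_cat if H_cat > 0 else 0.0            # mu_O / mu_E
    # identity from the text: G_cat == H_cat * MPG_cat
    return H_cat, G_cat, MPG_cat
```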
Incidence and intensity of catastrophic out-of-pocket payments in Vietnam
We first measured catastrophic payments using the ratio T/x (i.e., out-of-pocket payments as a fraction of pre-payment income), and set thresholds (i.e., z_cat) at 2.5%, 5%, 10%, and 15%. Table 5(a) presents these results. We then re-did the exercise with payments expressed as the ratio T/y (i.e., out-of-pocket payments as a fraction of ATP), where y was defined as pre-payment income less actual food spending. The ratio T/y thus gives the share of non-food consumption absorbed by out-of-pocket payments. In this second case, we used thresholds of 10%, 15%, 20%, 25%, 30% and 40%; the results are in Table 5(b).

The tables show that in 1993, for instance, as much as 38% of the sample recorded out-of-pocket payments in excess of 5% of their pre-payment income, and that 34% of the sample spent more than 15% of their non-food consumption on out-of-pocket expenditure. Inevitably, in both years, and for both income shares, both the proportion of the sample exceeding the threshold (H_cat) and the mean excess (G_cat) fall as the threshold (z_cat) is raised. More interesting is the fact that for both income shares and for all the thresholds in the range explored, both the proportion exceeding the threshold and the mean excess were lower in 1998 than in 1993. This suggests that, in general, the catastrophic character of out-of-pocket payments was reduced over the period in question. In Table 5(a), the mean positive gap MPG_cat decreased (slightly) for the first two thresholds, but increased (slightly) for the two highest thresholds. It is therefore clear that most of the decline in the mean overall gap G_cat is due to the decline in the headcount H_cat. In Table 5(b), the MPG_cat for ability to pay is always lower in 1998.
Measures reflecting that catastrophic costs matter more for the poor
There is a difficulty with the approach outlined above, namely that it is blind as to whether it is poor or better-off individuals who exceed the threshold. It seems likely that most societies will care more if it is an individual in the lowest decile whose spending (as a share of income) exceeds the threshold than if it is one in the top decile. One way of shedding light on this is to see how the proportions of those exceeding the threshold vary across the income distribution. This can be done formally using a concentration index for E_i, which we define as C_E. A positive value of this index will indicate a greater tendency for the better-off to exceed the payment threshold, whilst a negative value will indicate a greater tendency for the worse-off to exceed the threshold.
A difficulty is that the headcount, μ_E, and the concentration index, C_E, could move in different directions over time. Or the former might be higher in country A than country B, but the latter might be lower in country A than in B. In such circumstances, it would be useful to have an index trading off the two dimensions. We can do this by constructing a weighted version of the headcount that takes into account whether it is mostly poor people who exceed the threshold or better-off people. We do this by weighting the variable indicating whether the person has exceeded the threshold, E_i, by the individual's rank in the income distribution. Let r_i denote person i's absolute rank. This is equal to 1 for person 1, 2 for person 2, and N for person N. Then define

(20) $w_i = \frac{2\,(N - r_i + 1)}{N}.$

Thus w_i is equal to 2 for the most disadvantaged person, declines by 2/N for each one-person step up through the income distribution, and reaches 2/N for the least disadvantaged person. Thus the difference in w_i between the most disadvantaged person and the second most disadvantaged person is the same as the difference between the second most advantaged person and the most advantaged person. If we weight the E_i by the w_i, we get:

(21) $W_E = \frac{1}{N}\sum_{i=1}^{N} w_i E_i.$

We have the following result (the proof of which is in the Appendix): Result 1. Given the weighting used in (21), the index W_E can be written as:
(22) $W_E = \mu_E\,(1 - C_E).$
Thus we can modify the catastrophic payments headcount by weighting the dummy status indicator, E_i, by the person's rank in the income distribution, giving larger weights to poorer people. The weighting scheme chosen results in an attractive and simple summary measure: the catastrophic payment headcount multiplied by the complement of the concentration index. If those who exceed the threshold tend to be poor, the concentration index C_E will be negative, and this will raise W_E above μ_E. The catastrophic payment problem is then worse than it appears simply by looking at the fraction of the population exceeding the threshold, since that fraction overlooks the fact that it tends to be the poor who exceed the threshold. By contrast, if it is better-off individuals who tend to exceed the threshold, C_E will be positive, and μ_E will overstate the problem of catastrophic payments as measured by W_E.
We can apply the same logic to the catastrophic payment excess. We define a concentration index for the overshoot variable, O_i, which we denote by C_O. Then we can define an analogue of W_E, which can be shown to be equal to:

(23) $W_O = \mu_O\,(1 - C_O).$
A tendency for large excesses to be concentrated among poorer individuals results in a negative value of C_O, which will raise W_O above μ_O: the "excess payment problem" is worse than it appears simply by looking at the mean catastrophic payment excess, since this overlooks the fact that the large catastrophic payments are concentrated among the worse off. By contrast, if it is the better-off individuals who have the largest excesses, C_O will be positive, and μ_O will overstate the severity of the catastrophic payment problem as measured by W_O.
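The following sketch shows both routes to the rank-weighted indices: the direct weighted mean of eqn (21) and the product forms of eqns (22) and (23). Incomes x supply the ranking; with the fractional-rank convention used here the two routes agree up to a small-sample term of order 1/N.

```python
import numpy as np

def rank_weighted_index(v, x):
    """Weighted mean of v with w_i = 2(N - r_i + 1)/N, where r_i is the rank
    by income x (r = 1 for the most disadvantaged person)."""
    v, x = np.asarray(v, float), np.asarray(x, float)
    N = len(v)
    r = np.empty(N)
    r[np.argsort(x)] = np.arange(1, N + 1)
    w = 2.0 * (N - r + 1.0) / N
    return (w * v).mean()

def concentration(v, x):
    """Concentration index of v with units ranked by income x."""
    v = np.asarray(v, float)[np.argsort(x)]
    N = len(v)
    frac = (np.arange(1, N + 1) - 0.5) / N
    return 2.0 * np.cov(v, frac, bias=True)[0, 1] / v.mean()

# Headcount version: W_E = rank_weighted_index(E, x), which by Result 1 equals
# E.mean() * (1 - concentration(E, x)) up to an O(1/N) term.
# Excess version:    W_O = rank_weighted_index(O, x) ~= O.mean() * (1 - C_O).
```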
The poor and catastrophic out-of-pocket payments in Vietnam
Table 5(a) shows that at the lower thresholds, the incidence of "catastrophic" health costs is more concentrated among the poor in both years, though more so in 1998 than in 1993. By contrast, at the higher thresholds the incidence of "catastrophic" health costs is more concentrated among the rich in both years, and more so in 1998 than in 1993. The better-off tend to overshoot the threshold by a larger amount in both years, whatever the threshold, and for each threshold there is more concentration of "overshooting" among the better-off in 1998 than in 1993. This, coupled with the results mentioned above, indicates that whilst at low thresholds it is the poor who are more likely to exceed them, they do not spend so far above the threshold as do the better-off. Since the concentration indices C_O are all positive, the index W_O is smaller than the mean catastrophic excess, μ_O. Catastrophic costs are thus less of a "problem" in both 1993 and 1998 than they would have been if the large "catastrophes" had been concentrated among the poor.
The story is somewhat different in terms of ability to pay (or non-food consumption). First, Table 5(b) shows that the incidence of "catastrophe" is always more concentrated among the poor, in both years and for all thresholds. Another difference with respect to the same exercise based on pre-payment income is that the magnitude of the "catastrophic overshoot" of ability to pay is also more concentrated among the poor, though much more so in 1993 than in 1998. Only at higher thresholds in 1998 does it become more concentrated among the rich. Because most concentration indices are negative, the rank-weighted indices tend to be higher than the unweighted headcount-based measures. In general, both the x-based and the y-based approaches give very similar results in terms of the rank-weighted welfare measures: when people's location in the income ranking is taken into account in either the incidence (W_E) or the intensity (W_O), the measures decrease with rising thresholds, but the index values are always higher in 1993 than in 1998. In other words, the catastrophic out-of-pocket expenditure "problem" unequivocally lessened over the period in question.
Minimum standards and impoverishment
There is still a difficulty with the "catastrophic" payment approach, namely that it is blind as to how far "catastrophic" payments cause hardship. It seems likely most societies will be more concerned about someone exceeding the threshold by, say, five percentage points if their income is $0.75 a day than if it is $30 a day. An alternative perspective is that of impoverishment, the core idea being that no one ought to be pushed into poverty, or further into poverty, because of health care expenses. This position is evident in the discussions in the World Bank's 2000 World Development Report (World Bank 2000) and in its Voices of the Poor consultative exercise (Narayan et al. 2000). In a sense, this approach gets to the heart of the concerns over health care payments: that health care utilization is a response to an unforeseen and unsolicited "shock" and can be sufficiently costly to represent a threat to a household's ability to purchase other goods and services that may, like health care, make a difference to its members' ability to survive and flourish as human beings.

Figure 4 provides a simple framework for examining the impact of out-of-pocket payments on the two basic measures of poverty: the headcount and the poverty gap. It also allows us to relate progressivity and redistributive effect to poverty impact. The figure is a variant on Pen's parade. The two parades plot income (before and after out-of-pocket payments) along the y-axis against the cumulative percentage of individuals, ranked by pre-payment income, along the x-axis. Reading off each parade at the poverty line gives the fraction of people living below the poverty line, while the area between the poverty line and each parade beneath it gives the poverty gap. It is assumed in Figure 4 that the poverty line is the same for post-payment income as for pre-payment income; this is an issue we return to in a moment.
Measuring the impoverishing effects of health care costs
Formally, the relevant concepts and measures can be defined as follows. Let z^pre be the pre-payment poverty line (which may be different from the post-payment poverty line for reasons discussed below) and x_i be individual i's pre-payment income. Then define P_i^pre = 1 if x_i < z^pre and zero otherwise. The pre-payment poverty headcount is equal to:

$H^{pre} = \frac{1}{N}\sum_{i=1}^{N} P_i^{pre},$

where N is the sample size. Denote by g_i^pre the pre-payment poverty gap, which is equal to z^pre − x_i if x_i < z^pre and zero otherwise. The average pre-payment poverty gap is defined as:

$G^{pre} = \frac{1}{N}\sum_{i=1}^{N} g_i^{pre}.$

The post-payment measures are defined analogously, using post-payment income x_i − T_i and the post-payment poverty line.
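A minimal sketch of these measures, and of the pre-/post-payment comparison used in the tables below, assuming income and payment arrays and poverty lines z_pre and z_post (equal for the food line, with z_post < z_pre for the broader line):

```python
import numpy as np

def poverty_measures(income, z):
    """Headcount and normalized average poverty gap at poverty line z."""
    income = np.asarray(income, float)
    poor = income < z
    headcount = poor.mean()
    gap = np.where(poor, z - income, 0.0)
    normalized_gap = gap.mean() / z   # gap divided through by the poverty line
    return headcount, normalized_gap

def poverty_impact(x, T, z_pre, z_post):
    """Change in headcount and normalized gap caused by payments T."""
    H_pre, G_pre = poverty_measures(x, z_pre)
    H_post, G_post = poverty_measures(np.asarray(x) - np.asarray(T), z_post)
    return H_post - H_pre, G_post - G_pre
```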
Impoverishment, progressivity and redistributive effect-the links
What determines the poverty impact of out-of-pocket payments? And what are the links between poverty impact, on the one hand, and progressivity and redistributive effect, on the other? In this sub-section we present some results for the case where the poverty line remains the same before and after health care payments.
Intuitively, one would expect poverty impact to be linked to progressivity. This is indeed the case. Figure 5 compares three post-payment income distributions corresponding to three different payment structures, each with the same value of t: one proportional on pre-payment income, one progressive, and one regressive. In all three cases, and for all income levels, post-payment income is less than pre-payment income, and therefore poverty (however measured) is higher after out-of-pocket payments than before. However, the three payment structures will, in general, give rise to different poverty impacts. At a certain income level (the break-even point) the three structures give rise to the same post-payment income. Up to this income level, post-payment income is highest under the progressive payment structure and lowest under the regressive structure. Thus if the poverty line is below this break-even income level, the poverty impact of out-of-pocket payments is smallest under the progressive payment structure and greatest under the regressive payment structure. Inevitably, beyond the break-even income level, post-payment income is highest under the regressive structure and lowest under the progressive structure. Thus if the poverty line is above the break-even level, the poverty impact of out-of-pocket payments is greatest under the progressive structure and smallest under the regressive structure. In general, then, provided the poverty line is not too high (i.e., not higher than the break-even income level), the poverty impact of out-of-pocket payments will be greatest if out-of-pocket payments are regressive and smallest if they are progressive.
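The break-even argument is easy to check numerically. In the sketch below, the three payment schedules are invented functional forms (powers of income, rescaled so that each absorbs the same overall share t); nothing here comes from the Vietnam data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.7, size=100_000)   # pre-payment incomes
t = 0.05                                               # common payment share

def rescale(raw):
    """Force a payment schedule to absorb exactly the share t of total income."""
    return raw * t * x.sum() / raw.sum()

T_prop = rescale(x)           # proportional: payment share constant in x
T_prog = rescale(x ** 1.5)    # progressive: payment share rising in x
T_regr = rescale(x ** 0.5)    # regressive: payment share falling in x

z = np.quantile(x, 0.30)      # a poverty line below the break-even income
for name, T in [("proportional", T_prop), ("progressive", T_prog),
                ("regressive", T_regr)]:
    print(f"{name:>12}: post-payment headcount = {((x - T) < z).mean():.3f}")
```

On draws like this one, the regressive schedule produces the largest post-payment headcount and the progressive schedule the smallest, as the argument above predicts; placing z far enough above the break-even income reverses the ordering.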
Like redistributive effect, poverty impact depends not just on progressivity but also on the share of income absorbed by health care payments. Figure 6 shows the effect of raising the value of t, holding the index of progressivity constant. The effect is to push the post-payment Pen parade downwards by the same percentage of pre-payment income at each income rank. Thus, like redistributive effect, the poverty impact is larger, for a given value of Kakwani's progressivity index, the larger is t.
Given the various influences on poverty impact discussed above, it ought to be clear that looking at progressivity alone might give a misleading picture of poverty impact. Figure 7 shows two alternative health care payment structures, one progressive, the other regressive. In the progressive structure, health care payments absorb a fairly high proportion of pre-payment income, while in the regressive structure they absorb, on average, only a very small fraction of pre-payment income. There is, as before, a break-even income level, which, given the differences in t and the progressivity index in this example, occurs at a relatively low income rank (a little over 30%). Below this break-even level, post-payment income is higher under the progressive structure: the percentage of income absorbed by health care payments is high on average, but the structure is progressive. Above the break-even level, post-payment income is higher under the regressive structure: the structure is regressive, but the percentage of income absorbed by payments is small on average. If the poverty line is below the break-even level, the poverty impact is greater under the regressive payment structure, but if the poverty line is above the break-even level, the poverty impact is greater under the progressive structure. Thus, if a progressive structure absorbs a large proportion of pre-payment income, it may well give rise to a larger poverty impact than a regressive structure absorbing a small proportion of pre-payment income.
How do out-of-pocket payments add to poverty in Vietnam?
There are two obvious candidates for the poverty line that emerge from our earlier discussion of the deductions D(x). The first is a food poverty line giving the cost of reaching 2100 calories a day. This is often termed an extreme poverty line. Clearly, this is applicable whether income is pre-payment or post-payment. In each case, one is asking whether the person's pre- or post-payment income is sufficient for them to purchase enough food to produce 2100 calories per day. Clearly, some individuals may cross such a poverty line as the result of spending on health care, and some may sink further below it. By comparing the headcounts and poverty gaps before and after health care spending, one can get a sense of its impoverishing effects, whether in terms of additions to the number of people classified as extremely poor or in terms of deepening poverty among the extreme poor.
The second obvious poverty line is the amount used above in the more generous deduction for food and non-food items. The difficulty here is that the poverty line for pre-payment income ought to include an element for health spending, whilst the poverty line for post-payment income ought not. As in computing the deduction D(x), one needs to extract an amount from the poverty line corresponding to health spending to arrive at the post-payment poverty line z^post. This means that whilst some people may not be poor before health spending and be poor after it, there will be some who are marginally poor before health spending but not poor after it (they spend nothing on health care, or they spend appreciably less than the health spending component of the pre-payment poverty line). Thus, whereas in the case where the extreme poverty line is used, poverty will necessarily be higher "after" health spending than "before", in the case where the poverty line covers food and non-food items, poverty may in fact be higher pre-payment than post-payment.
In applying these methods to the data on out-of-pocket payments in Vietnam, we employed a food (extreme) poverty line and a broader-based poverty line, using the amounts used in the deductions in the fairness analysis above. In the case of the food poverty line, the same amounts were used for the pre-payment and the post-payment lines: 750 and 1287 thousand Dong for 1993 and 1998, respectively. In the case of the broader poverty line, a lower line was set for post-payment income, reflecting the fact that health care payments have to be met from pre-payment income but have already been met at the post-payment stage. The pre-payment and post-payment poverty lines for 1993 were set at 1160 and 1091 thousand Dong, respectively, while the corresponding lines for 1998 were set at 1790 and 1700. Figure 8 shows Pen's parade for households (individuals are used in the analysis that follows) for pre-payment income and extreme poverty in 1998. Overlaid on the chart are the out-of-pocket payments of each household. In some cases, households are clearly pushed further into extreme poverty by out-of-pocket payments, whilst in others they are pushed below the extreme poverty line having started out above it "before" out-of-pocket payments.
Table 6(a) shows that in the case of the food poverty line, out-of-pocket payments increase the headcount ratio by 4.4 percentage points in 1993 and by 3.4 percentage points in 1998. The poverty gap comparisons across years are most meaningful when normalized poverty gaps are used (i.e., poverty gaps divided through by the poverty line). Out-of-pocket payments increase the normalized gap by only 1.4 percentage points in 1993 and by only 0.8 percentage points in 1998. In both years, around three quarters of the addition to the poverty gap came from previously poor people being further impoverished by out-of-pocket payments, and only one quarter was attributable to previously non-poor people being pushed into extreme poverty as a result of out-of-pocket payments.
From Table 6(b) it is clear that out-of-pocket payments have a smaller impact on the headcount in the case of the broader-based poverty line. This reflects the lower poverty line for post-payment income. Indeed, there is no assurance, as indicated above, that the impact of out-of-pocket payments on the headcount will be positive in this case. In the event, out-of-pocket payments increase the headcount ratio, but by only 0.4 percentage points in 1993 and 0.5 percentage points in 1998. These low increases reflect the fact that the percentages of the sample becoming poor through out-of-pocket payments (1.9% in 1993 and 2.3% in 1998) are almost matched by the percentages of persons who were among the pre-payment poor but not among the post-payment poor (1.5% of the sample in 1993 and 1.7% in 1998). The need for the normalized poverty gap is, of course, even greater in this case than in the case of the food poverty line, given that the poverty line differs pre-payment and post-payment, as well as across years. In 1993, the normalized poverty gap is 0.4 percentage points higher post-payment, while in 1998 out-of-pocket payments increase the normalized gap by 0.2 percentage points.
The impoverishing effects of hospital vs. other health costs in Vietnam
The impoverishment measurement methodology can be used to quantify the different poverty impacts of hospital and other health spending. In the 1998 Vietnam data, we separated hospital expenses (defined as costs associated with inpatient care over the previous 12 months) from all other health care costs over the previous 12 months. On average, the former account for around 20% of the total. Table 7 shows the results of an analysis of the poverty impacts of these two categories of expense, using the extreme food-based poverty line, in order to explore which of the two types is the main source of impoverishment. Looking at hospital costs, the increase in the headcount (PI^H) is a mere 0.5 percentage points, while the value of PI^H associated with non-hospital expenses is 2.9 percentage points. The values of the impact on the mean poverty gap (PI^G) are 1.07 and 8.54, respectively. Clearly, and perhaps in contrast to prior expectations, non-hospital expenditure has a larger poverty impact in Vietnam than hospital expenditure. What is striking for hospital costs, however, is that although most of the rise in the poverty gap is still due to poor people getting poorer, this element is proportionally smaller than in the case of non-hospital expenses. In other words, the share of the rise in the poverty gap accounted for by deepening poverty among the pre-payment poor is smaller in the case of hospital costs than in the case of non-hospital costs.
Summary and conclusions
As noted in the Introduction, much has been written recently about equity or fairness in payments for health care, "catastrophic" health care payments, and the impoverishing effects of health care outlays. The aim of this paper is to clarify the meaning of these terms, to show how each might be measured, and to compare the different measures. We illustrate each using household data on annual out-of-pocket expenditures on health care taken from the 1992-93 and 1997-98 Vietnam Living Standards Surveys (VLSS).
We distinguish between two strands in the literature: approaches inspired by egalitarian concepts of equity, and approaches focusing on "minimum standards". Underlying the egalitarian approach is the view that payments should be linked directly and continuously to ability to pay (ATP) rather than to usage of health services. The minimum standards approach has an element of this idea, but focuses on the extent to which payments exceed a "catastrophe" threshold, or force households below a poverty line, or further below it if already there.
We label the first of the egalitarian approaches we consider the "agnostic" approach, since the linkage of payments to ATP is simply measured by the degree of progressivity of such payments on pre-payment income and by the degree of income redistribution they generate. In Vietnam in 1993, out-of-pocket payments were regressive on pre-payment income and widened the income distribution (i.e., were associated with a pro-rich redistributive effect). By contrast, in 1998, out-of-pocket payments were mildly progressive and associated with a small amount of pro-poor redistributive effect.
The "agnostic" approach does not tell us how equitable payments are-only how progressive they are on pre-payment income. The second egalitarian approach we consider-proposed recently by the World Health Organization and labeled in the paper as the ATP approach-suggests a target distribution for out-of-pocket payments and hence allows one to quantify inequity. The equity "yardstick" proposed by WHO is that payments should be proportional to ATP, and one aspect of inequity to be measured is the extent to which payments deviate from proportionality with respect to ATP. This can measured by examining the progressivity of payments on ATP (a zero value of the progressivity index of payments on ATP, ;, being the equity goal) or by examining the redistributive effect of payments with respect to ATP (a zero value of the index of redistributive effect of payments with respect to ATP, 7TRS , being the equity goal). The two conditions are equivalent, since proportionality with respect to ATP leaves the ATP distribution intact. ATP can be thought of as pre-payment income less a deduction for expenses deemed necessary to achieve a minimum subsistence level of consumption. Setting up the analysis along these lines allows us to obtain some useful decomposition results from the tax literature. These shed light on the relationship between the progressivity and redistributive effect of payments on pre-payment income and on ATP. For instance, eqn (13) in the paper shows that the degree of redistributive effect of payments with respect to pre-payment income depends on the payments share, t, the progressivity of payments on ATP, <lR, the deductions share, A, and the progressivity of deductions on pre-payment income, ZD. If payments are proportional to ATP (IC' = O ), dedutios onprepayent ncoe ~D Ifpyet r rprinlt T R~ ) there will still be some redistributive effect if the deductions are non-proportional to prepayment income. If, for example, deductions are income-inelastic (making up a higher share of pre-payment income for the poor than the better-off), equity---defined in terms of payments being proportional to ATP-will result in some pro-poor income redistribution.
In the empirical illustration of the ATP approach, we show how the progressivity of payments on ATP and on pre-payment income varies with the definition of the deductions. We defined deductions first as actual food expenditure (à la WHO), second as a flat-rate poverty line deduction for food only, and third as a flat-rate poverty line deduction for all goods and services other than health care. The results showed that, irrespective of the deduction definition used, equity, as measured by (percentage changes in) π_K on ATP, improved between 1993 and 1998, but the improvement was greater in the case of the flat-rate food poverty line and overall poverty line deductions than in the case of actual food expenditure.
We then consider a third approach within the egalitarian tradition that further broadens the scope of the analysis by not focusing exclusively on vertical differences but incorporating horizontal differences as well. The previous two approaches do not adequately capture this, as they implicitly assume that there is no reranking when going from the pre-payment to the post-payment income distribution, and that all of the redistributive effect is due to progressivity. In practice, people do change ranking as a result of payments, and different people at the same pre-payment income end up paying dramatically different amounts of their income toward health care. As the tax literature shows, the total redistributive effect in such cases (RE) ought to be computed as the difference between a vertical component (V), attributable to the degree to which, on average, payments are progressive, and the sum of a horizontal inequity component (H) and a reranking component (R). This decomposition can be applied to the agnostic approach or to the ATP approach. One implication for the latter is that payments could be proportional to ATP on average (V = 0), and yet payments could produce redistributive effect because H and R are non-zero (households at a given level of ATP pay different amounts on health care). The ATP proportionality yardstick could thus be re-interpreted to mean that V, H and R all be zero, or that RE (= V − H − R) is zero. The latter would allow for the possibility that a positive V, due to payments being progressive on ATP, is exactly offset by the (pro-rich) redistribution induced by horizontal inequity (the sum H + R is exactly equal to V).
In the case of Vietnam, the total redistributive effect of out-of-pocket payments with respect to both pre-payment income and ATP is negative in both years, but RE fell (in absolute size) between 1993 and 1998. In the case of RE with respect to pre-payment income, no equity interpretation can be given to the reduction of RE, whereas in the case of RE with respect to ATP, the reduction can be interpreted as moving closer toward an equitable distribution which leaves ATP unchanged. The reasons for the reduction (in absolute size) of RE are different in the two cases. In the first (RE with respect to pre-payment income), payments became less regressive in Vietnam over the period 1993-98 and absorbed a smaller share of pre-payment income. V thus unambiguously fell in absolute size (i.e., became less negative). H and R also fell, but by much less. Most of the reduction in RE with respect to pre-payment income was thus due to reduced vertical income redistribution. Indeed, in 1998 V accounted for only 6% of the total value of RE with respect to pre-payment income compared to 47% in 1993. In the case of RE with respect to ATP, out-of-pocket payments became more regressive, but because the share of ATP absorbed by them fell substantially, V once again fell in absolute size, though only by a small amount. By contrast, H and R fell quite considerably. Most of the reduction in RE with respect to ATP was thus due to reduced horizontal inequity (and consequent reranking) rather than to the reduction in the absolute size of V. Indeed, the share of RE accounted for by vertical redistribution actually increased in this case.
We then turn in the paper to minimum standards (or threshold) approaches. In the first, the threshold is in terms of payments, and set as a proportion of pre-payment income. In the second, the threshold is set in terms of post-payment income, in terms of a poverty line. Payments resulting in people crossing the first threshold are classified as "catastrophic", while payments resulting in people crossing the second are classified as "impoverishing". For both approaches, we define indices which can be used to measure both the incidence and intensity of the catastrophic or impoverishing impact. For the catastrophic impact measure, we also show how it can be made sensitive to the location of its occurrence in the income distribution: "catastrophic" payments presumably matter more for poor households than for better-off ones.
In general, using the minimum standards "yardstick", things appear to have improved in Vietnam over the period considered. Both the incidence and the intensity of "catastrophic" payments fell, whether defined in terms of pre-payment income or ATP. The incidence and intensity also became less concentrated among the poor. Furthermore, the incidence and intensity of the poverty impact of out-of-pocket payments were both much lower in 1998 than in 1993. We also show how the methods can be used to see to what extent the poverty impact is due to poor people getting poorer or to previously non-poor people falling into poverty, and which types of out-of-pocket expenditure can be held responsible for most of the impact. We found that in the case of Vietnam most of the poverty impact is due to the poor getting even poorer, and to non-hospital care outlays rather than payments for hospital care.
"year": 2001,
"sha1": "b9dfc814f948d0ed19269ca7e6102d1e8c9fe3fd",
"oa_license": "CCBY",
"oa_url": "https://openknowledge.worldbank.org/bitstream/10986/19429/1/multi0page.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "b9dfc814f948d0ed19269ca7e6102d1e8c9fe3fd",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Economics"
]
} |
Investigating episodic accretion in a very low-mass young stellar object
Very low-mass Class I protostars have been investigated very little thus far. Variability of these young stellar objects (YSOs), and whether or not they are capable of strong episodic accretion, is also left relatively unstudied. We investigate accretion variability in IRS54, a Class I very low-mass protostar with a mass of M$_{\star}$ ~ 0.1 - 0.2 M$_{\odot}$. We obtained spectroscopic and photometric data with VLT/ISAAC and VLT/SINFONI in the near-infrared ($J$, $H$, and $K$ bands) across four epochs (2005, 2010, 2013, and 2014). We used accretion-tracing lines (Pa$\beta$ and Br$\gamma$) and outflow-tracing lines (H$_2$ and [FeII]) to examine physical properties and kinematics of the object. A large increase in luminosity was found between the 2005 and 2013 epochs of more than 1 magnitude in the $K$ band, followed in 2014 by a steep decrease. Consistently, the mass accretion rate ($\dot{M}_{acc}$) rose by an order of magnitude, from ~ 10$^{-8}$ M$_{\odot}$ yr$^{-1}$ to ~ $10^{-7}$ M$_{\odot}$ yr$^{-1}$, between the two early epochs. The visual extinction ($A_V$) also increased, from ~ 15 mag in 2005 to ~ 24 mag in 2013. This rise in $A_V$ in tandem with the increase in $\dot{M}_{acc}$ is explained by the lifting of a large amount of dust from the disc of IRS54, following the augmented accretion and ejection activity in the YSO, which intersects our line of sight due to the almost edge-on geometry of the disc. Because of the strength and timescales involved in this dramatic increase, this event is believed to have been an accretion burst possibly similar to the bursts of EXor-type objects. IRS54 is the lowest mass Class I source observed to have an accretion burst of this type, and therefore potentially one of the lowest mass EXor-type objects known so far.
Introduction
The young stellar object (YSO) phase represents a very important stage in the life of a star and influences its subsequent evolution. YSOs can be divided into four classes (Class 0, I, II, and III), where Class I to Class III are defined by their spectral index (α) measured from the near- to mid-infrared (NIR to MIR) portion of the spectrum (Lada 1987). Class 0 stars are normally observed only at millimetre (mm) and radio wavelengths and represent the earliest phase, when over 50% of the mass is still contained in an envelope surrounding the protostellar core. Although Class I YSOs are deeply embedded, they are nevertheless observable in the NIR. Since they are still strongly accreting and generating powerful outflows, it is possible to study both accretion and ejection processes at this relatively early stage through IR spectroscopy and imaging using state-of-the-art ground-based telescopes.
Young stars have been known to exhibit episodic variability in their accretion and ejection over the course of their evolution (see e.g. Audard et al. 2014, and references therein). It is important to note that in this case an increase in accretion is usually associated with an increase in luminosity, as more material is accreted, producing strong shocks onto the stellar photosphere and additional radiation. Two evident forms that this variability can take are FU Orionis-type outbursts (FUors, named after the prototype FU Ori) and EXor outbursts (named after EX Lupi), which were first discovered in the context of optical observations (Herbig 1966, 1977, 1989) and have since had their definition broadened to include more embedded types of young stars (e.g. Connelley & Greene 2010).
FUors are YSOs that exhibit accretion bursts of several (3-4) orders of magnitude, reaching Ṁ_acc ∼ 10⁻⁴ M⊙ yr⁻¹ for a relatively short timescale (∼10² years), and might accrete up to ∼30-40% of their final mass during these bursts over the course of their formation (Fischer et al. 2019). It is believed that these kinds of dramatic bursts preferentially occur during the early stages of star formation, when mass is still falling onto the disc from an envelope (e.g. Vorobyov & Basu 2015), even though the phenomenon was first discovered in pre-main sequence (PMS) stars.
EXor bursts are phenomena that occur over shorter timescales (∼1-2 years) and are less violent (Ṁ_acc increases of 1-2 orders of magnitude, typically up to 10⁻⁷-10⁻⁶ M⊙ yr⁻¹) than their FUor counterparts (Audard et al. 2014). The brightness of these objects can increase by a few magnitudes over mere months, according to photometric observations (e.g. Audard et al. 2010). Their frequency is also higher than that of FUors, with bursts occurring potentially only a few years apart (Herbig 2008). As with FUors, in the quiescent state most EXors are optically observable classical T Tauri stars. However, there is evidence that earlier stage protostars also exhibit episodic bursts (see, e.g. Audard et al. 2014, and references therein), and it has been found that in Class I YSOs this eruptive variability is at least an order of magnitude more common than in Class II YSOs (Contreras Peña et al. 2017). Certainly the increased use of IR observations has helped to shed light on this phenomenon at earlier phases of stellar evolution.
Here, we investigate the variability of a single object (IRS 54) over 9 years using NIR spectroscopic and photometric data. IRS 54 (YLW52) is located in the Ophiuchus star-forming region at a distance of ∼137 pc (Sullivan et al. 2019). It is a Class I very low-mass star (VLMS) (M⋆ ∼ 0.1-0.2 M⊙) of estimated spectral type M (Garcia Lopez et al. 2013, hereafter GL13), with a bolometric luminosity of L_bol = 0.78 L⊙ (van Kempen et al. 2009). Observations of this YSO have revealed an accretion disc and an H₂ molecular jet (Khanzadyan et al. 2004; GL13), typical of protostars at an early evolutionary phase (see, e.g. Lee 2020, and references therein). IRS 54 is in fact one of the lowest luminosity sources where an H₂ outflow has been spatially resolved (GL13). Moreover, it is an ideal candidate for studying variability due to its edge-on disc geometry, which allows us to view the red- and blueshifted components of its outflow, and because multi-epoch spectra and imaging are available spanning almost a decade.
Observations and data reduction
The Class I protostar IRS 54 was observed over four epochs (2005, 2010, 2013, and 2014) in the NIR, as reported in Table 1. Epochs 2005 and 2013 were obtained with the Very Large Telescope (VLT) at the European Southern Observatory (ESO) Paranal Observatory in Chile, using the Infrared Spectrometer and Array Camera (ISAAC; Moorwood et al. 1998). ISAAC employed medium spectral resolution (R ∼ 10000, see Table 1), a slit width of 0.3″, and a slit length of 120″, with a pixel scale of 146 milli-arcseconds (mas). The K band data in 2005 cover a larger wavelength range than in 2013, because two contiguous spectral segments were acquired in 2005. The seeing values for each individual night are included in Table 1. To correct for the atmospheric response, telluric standard stars were also observed (see Column 6 in Table 1).
The 2014 data were acquired over two separate nights with the VLT using the Spectrograph for Integral Field Observations in the Near Infrared (SINFONI; Eisenhauer et al. 2003), an integral field unit (IFU). SINFONI observations in the H and K bands had a pixel scale of 100 mas, with a corresponding field of view of 3″ × 3″, and a spectral resolution of ∼4000. The seeing measurements for each night are included in Table 1. As with the ISAAC data, telluric standard B-type stars were observed to correct for atmospheric effects (see Table 1).
The data reduction was completed with the GASGANO data file organiser (maintained by ESO; https://www.eso.org/sci/software/gasgano), which was used to run the standard SINFONI pipeline recipes. These apply dark and bad pixel masks, flat-field correction, optical depth correction, and a wavelength calibration, using either OH lines (in the case of the H band) or arc lamps (in the case of the K band, where there were not enough strong OH lines), to the data cubes. A spectrum was then extracted from the telluric cube using IRAF (Image Reduction and Analysis Facility). The region to extract (about the central region of the cube) was determined using the CASA (Common Astronomy Software Applications) Viewer. Hydrogen-recombination lines were manually removed from the spectrum of the telluric standard star before this spectrum was used to correct for telluric absorption. The H band reduction of the 22 May 2014 SINFONI data required a further manual sky subtraction because OH line residuals were present in the datacube. This was done by selecting a region of sky in the field of view with little to no emission from the source and subtracting it from the science cube. The resulting spectra were extracted on source.
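For concreteness, the residual sky subtraction described above amounts to something like the following numpy sketch, where the spaxel window standing in for the emission-free sky region is an assumption (in practice the region was chosen by inspecting the cube):

```python
import numpy as np

def subtract_residual_sky(cube, sky_region=(slice(0, 5), slice(0, 5))):
    """Subtract a median sky spectrum, built from an emission-free spaxel
    window, from every spaxel of a (wavelength, y, x) datacube."""
    sky_spectrum = np.nanmedian(cube[:, sky_region[0], sky_region[1]],
                                axis=(1, 2))
    return cube - sky_spectrum[:, None, None]
```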
ISAAC spectroscopic data were reduced in the standard way using IRAF. Wavelength calibration relied on the OH atmospheric lines in each frame. Spatial distortion and curvature caused by the long slit were corrected using the STARTRACE calibration file. An average wavelength accuracy of 2 Å was achieved. As in the case of the SINFONI data, hydrogen-recombination lines were removed from the telluric standard spectra before telluric corrections were applied.
The photometric data obtained with ISAAC (epochs 2005 and 2013) were reduced with IRAF. Flat-fielding of the raw data, sky subtraction, and cosmic ray corrections were all performed. Approximately five nearby stars in the field of view were used to flux calibrate the final science images using their known 2MASS catalogue values. However, the J band flux calibration was completed using the J band acquisition image, as photometric science images were not available.
When calculating the line velocities, the spectra were corrected to the parent cloud velocity of ∼3.5 km s⁻¹ (Wouterloot et al. 2005; André et al. 2007).
The 2010 VLT/SINFONI archival data were taken from GL13.
Morphology
Detected features in IRS 54 include a disc, jet, and illuminated outflow cavity walls (see Figure B.2 of GL13). The geometry of the system is such that the disc is seen roughly edge-on (GL13). This geometry poses challenges from an observational perspective, specifically in viewing the inner disc where most of the accretion activity takes place. Nevertheless, the edge-on disc configuration of IRS 54 also provides good conditions in which to trace its bipolar jet back to the source (see below). Figure 1 shows continuum-subtracted images of the [Fe ii] 1.644 µm emission: channels were averaged for the blueshifted lobe in Fig. 1a and, for the redshifted lobe, from 1.6440 µm (11 km s⁻¹) to 1.6443 µm (66 km s⁻¹) in Fig. 1b. This emission traces the jet of the YSO and is extended with respect to the source position, which is indicated with the black and white contours in Fig. 1a and Fig. 1b, respectively. It is spatially asymmetric about the central source, with much stronger blueshifted than redshifted emission. In summary, the IRS 54 jet predominantly emits [Fe ii] in the blueshifted lobe.
In contrast, most of the H₂ emission comes from the redshifted lobe: it traces not only the bright redshifted jet but also what appear to be cavity walls that straddle the source. Figure 2 shows the red- and blueshifted continuum-subtracted images of IRS 54 in the H₂ 1-0 S(1) emission line in the K band. In Fig. 2a, four spectral channels were averaged from 2.12087 µm (−125 km s⁻¹) to 2.12160 µm (−22 km s⁻¹), and in Fig. 2b, four spectral channels were averaged from 2.12185 µm (30 km s⁻¹) to 2.12258 µm (134 km s⁻¹). The blue contours represent the location of the central source and its continuum emission. This H₂ emission traces a different spatial component of the jet than the [Fe ii] emission: the redshifted component of the jet primarily radiates in H₂ at 2.122 µm. The molecular jet was already observed to be asymmetric by GL13, with a redshifted molecular jet component and also possibly a blueshifted atomic jet component. Here, we observe this asymmetry as well, including the atomic component. Our observations therefore adhere to the morphology sketch presented by GL13 (their Figure B.2).
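The channel maps described above can be reproduced schematically as follows. The sketch assumes a continuum-subtracted cube with axes (wavelength, y, x) and a wavelength grid already corrected to the cloud velocity; the rest wavelength in the usage comment is the vacuum value assumed here for H₂ 1-0 S(1).

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def channel_map(cube, wave_um, rest_um, v_min_kms, v_max_kms):
    """Average the channels of a continuum-subtracted (wavelength, y, x) cube
    whose radial velocities fall between v_min and v_max."""
    v = C_KMS * (wave_um - rest_um) / rest_um   # velocity of each channel
    sel = (v >= v_min_kms) & (v <= v_max_kms)
    return cube[sel].mean(axis=0)

# e.g. the redshifted H2 1-0 S(1) map: channel_map(cube, wave, 2.12183, 30, 134)
```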
Spectroscopy and IFU on source
Figure 3 compares the spectra of IRS 54 across the four epochs (2005, 2010, 2013, and 2014); in the J band there are two overlapping spectra, from 2005 (blue) and 2013 (orange). It is apparent from Fig. 3 that not only the flux intensities of both line and continuum have changed from one epoch to another, but also the shape of each spectral energy distribution (SED). Between 2005 (blue), 2010 (red), and 2013 (orange) the flux increased and the shape of the SED went from being approximately flat (especially in the K band) to having a much steeper slope. The SED of 2014 (green) receded to a flux below that of 2010 (red), becoming less steep in the K band. Because 2010 data were only available in the K band, it is impossible to say definitively whether this was the case in the J and H bands as well. Emission lines at different epochs have been identified and labelled in Fig. 3; Table 2 provides a list of the main lines detected along with their full width at half maximum (FWHM), fluxes, radial velocities, and full width at zero intensity (FWZI). These quantities are analysed further in the coming sections to derive visual extinction (A_V) and mass accretion rates at different epochs.
The spectra from IRS 54 display multiple hydrogen-recombination lines, the brightest of which are the Brγ and Paβ emission lines in the K and J band, respectively. These lines are primarily accretion signatures (e.g. Muzerolle et al. 1998). Forbidden iron ([Fe ii] 1.257 µm and 1.644 µm) and molecular hydrogen (H₂ 2.122 µm) emission lines, which trace the jet, are also visible. The [Fe ii] emission is also useful for understanding physical properties of the surroundings of the YSO, such as the amount of foreground extinction in the observations. Also present are the R(0-15) and P(1-9) rotational lines (between 2.31 and 2.37 µm) of the v = 2-0 CO band head, which trace relatively mild temperatures (a few hundred Kelvin). These lines are seen in absorption in the 2010 and 2014 data (see Fig. 4). Many of these line measurements are below 3σ; however, the signal-to-noise ratio is higher in the 2014 spectrum, making these absorption features easier to identify in this epoch. The CO lines are also present in the 2010 epoch, though only barely visible in absorption. Notably, a star of spectral type M would have CO photospheric absorption lines, including the high rotational lines (i.e. those that actually pile up, producing the band heads), which typically trace gas at a few thousand Kelvin. The high-J lines are not seen here. Moreover, such a young source as IRS 54 should present very high veiling, and thus photospheric lines should not be detected. Therefore, these absorption features most likely originate from the outer disc or from the envelope, seen against much hotter inner disc gas. These features are observed in emission during outbursts of EXors (Audard et al. 2014).
Variability in visual extinction
[Fe ii] transitions can be used to determine the visual extinction (A_V); however, uncertainties remain in the radiative transition probabilities used in the calculation (Giannini et al. 2015). The ratio of two bright NIR lines, 1.644/1.257 µm, is useful because they originate from the same upper level and are optically thin. The line ratio 1.644/1.320 µm can similarly be used to calculate A_V; however, the signal-to-noise ratio of the 1.320 µm line in our observations is below 3σ. Because these transitions originate from the same upper level, their theoretical intensity ratio depends not on the physical conditions in the emission region, but on the frequencies and transition probabilities. The observed ratio is:

$\frac{F^{obs}_{1.644}}{F^{obs}_{1.257}} = \frac{F^{0}_{1.644}}{F^{0}_{1.257}}\,10^{0.4\,(A_{1.257}-A_{1.644})}.$

The following equation, along with an extinction law (namely, Rieke & Lebofsky 1985), then allows for the calculation of A_V:

$A_{1.257}-A_{1.644} = 2.5\,\log_{10}\!\left[\frac{(F_{1.644}/F_{1.257})^{obs}}{(F_{1.644}/F_{1.257})^{0}}\right],$

where A_λ is the extinction at a specific wavelength λ.
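As an illustration, the A_V estimate reduces to a few lines once an intrinsic line ratio and extinction-law coefficients are adopted. The numbers below are assumptions for the sketch: an intrinsic 1.257/1.644 ratio of ∼1.36 (one common choice of transition probabilities; see Giannini et al. 2015 for the spread between atomic data sets), and A_λ/A_V coefficients taken from the Rieke & Lebofsky (1985) law near the two wavelengths.

```python
import numpy as np

def a_v_from_feii(f_1644_obs, f_1257_obs, intrinsic_ratio=0.74,
                  k_1257=0.282, k_1644=0.175):
    """Visual extinction from the observed [Fe II] 1.644/1.257 flux ratio.

    intrinsic_ratio: assumed intrinsic F(1.644)/F(1.257), i.e. 1/1.36 ~ 0.74.
    k_1257, k_1644: approximate A_lambda/A_V from Rieke & Lebofsky (1985).
    """
    ratio_obs = f_1644_obs / f_1257_obs
    # extinction difference A_1.257 - A_1.644, in magnitudes
    delta_a = 2.5 * np.log10(ratio_obs / intrinsic_ratio)
    return delta_a / (k_1257 - k_1644)
```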
To study if and how the visual extinction changes with time, line ratios were measured at different epochs. Visual extinction (A_V) was calculated for the ISAAC data (2005 and 2013; see Table 3). These values are similar to those found in other studies towards Class I protostars (Davis et al. 2011). It is important to note that these values represent lower limits for the extinction on source, because the [Fe ii] lines originate in the jet.

Notes (to the tabulated Ṁ_acc values). The errors propagated for Ṁ_acc for both Paβ and Brγ are underestimates, as the errors present in the conversion from line flux to accretion luminosity (the a and b values found in Alcalá et al. 2017) were omitted in order to compare the values found in the different epochs (which would all be affected in the same way by this conversion). This remains valid when comparing the values of Ṁ_acc found from the same line over two different periods. However, in order to compare values of Ṁ_acc found from two different lines, these errors on a and b would need to be taken into consideration, and the result would be a much larger error bar for the Ṁ_acc values that would bring the measurements from Brγ and Paβ into agreement. Here, the former is preferred.
Accretion-tracing lines
The Brγ and Paβ lines are understood to primarily trace the accretion activity of the young star, rather than outflow activity further out (e.g. Muzerolle et al. 1998; Calvet et al. 2004; Alcalá et al. 2017). An empirical correlation exists between the accretion luminosity (L_acc) and the luminosity of these hydrogen-recombination lines (L_line). As expected of lines tracing the same processes, both follow similar trends of increasing flux in the accretion burst between 2005 and 2013. In 2014, a decrease is seen in the Brγ line (Fig. 5). The values of the fluxes of these lines can be found in Table 2. Figure 5a shows the Brγ line at the four epochs; given the different spectral resolutions of ISAAC and SINFONI, the ISAAC spectra were re-sampled to match the lower resolution of the SINFONI data for comparison purposes. Subsequently, this line was found to have an average FWZI of ∼765 km s⁻¹. The intensity of the Brγ line flux increased by about a factor of five from 2005 (blue) to 2013 (orange), and then in 2014 (green) dropped to an intensity of only ∼40% of the 2013 peak (orange). As a tracer of accretion processes, this decrease in Brγ emission suggests that accretion decreased dramatically between the 2013 and 2014 epochs, a much sharper change than the peak increase from 2005 to 2013. The integrated flux and radial velocity of the Brγ emission across all four epochs can be found in Table 2.
Outflow-tracing lines
Jets provide an environment where shocks can break up dust grains, releasing, for example, refractory elements like Fe into the gas phase (see, e.g. Nisini et al. 2002; Nisini 2008). Two [Fe ii] lines in particular (1.644 µm and 1.257 µm) are understood to be tracing the outflow activity in IRS 54 (Connelley & Greene 2014). They are consistent in their trends of flux increase and eventual decrease, as for the Brγ and Paβ lines, as seen in Fig. 6. This trend suggests that the YSO ejection activity close to the source follows the same path as that of accretion.
These forbidden emission lines can exhibit multiple and often complex velocity components (Davis et al. 2001). The high-velocity component (HVC) is generally associated with the jet at higher velocities on larger scales, while the low-velocity component (LVC) originates from a more compact and dense region at the base of the jet (see, e.g. Garcia Lopez et al. 2009). Both [Fe ii] lines clearly show a HVC and a LVC, which are both blueshifted. A redshifted component of the HVC is also visible in the 1.257 µm line at ∼+100 km s⁻¹. In the 1.644 µm line, the emission at ∼+100 km s⁻¹ could potentially be the redshifted component of its HVC, but it is strongly blended with the LVC. Our difficulty in separating the different velocity components is not unexpected given the (almost edge-on) geometry of the disc.
In addition to atomic emission, molecular hydrogen (H_2) emission is also detected. This traces dense molecular gas of relatively low excitation (n_H2 ≥ 10^5 cm^-3, T ∼ 2000 K; Caratti o Garatti et al. 2006). The brightest transition detected is the 1-0 S(1) line at 2.122 µm (other H_2 lines are also present in the data, but their signal-to-noise ratios are much lower). Figure 6c shows the line profile at different epochs. Notably, between 2013 and 2014 the intensity dropped to ∼20% of the peak, below even that of the earliest (2005) epoch.
Photometry
In order to put our observations in perspective and investigate the variability of IRS 54 across a broader time frame and at different wavelengths, we combined our SINFONI and ISAAC photometry with archival and literature photometric data for IRS 54 obtained by 2MASS (2 Micron All Sky Survey), SQIID (Simultaneous Quad-Color Infrared Imaging Device) at Kitt Peak National Observatory, DENIS (Deep Near Infrared Survey), NSFCAM (NASA), the Anglo-Australian Telescope (AAT), and the Wide-field Infrared Survey Explorer (WISE). These data can be seen in Fig. 7 and Table 4, where their respective sources are cited. The WISE (MIR) data were used to assess how the object has varied at these longer wavelengths, where extinction is less influential. The J, H, and K bands all show a similar trend in luminosity variability, while the MIR observations (bands W1 and W2 at 3.4 and 4.6 µm, respectively) show the brightness peaking in 2010. Their steep rise suggests that the maximum occurred around the 2010 or 2011 epochs rather than in 2013. However, due to the gaps in photometry between 2010 and 2013, it is impossible to determine exactly when the maximum of each light curve takes place. As monitoring data were available from NEOWISE between 2014 and 2019, it can be seen from the photometry in 2014 that the W1 and W2 magnitudes dropped with respect to 2010, after which there is a smooth secondary maximum followed by an erratic dimming of the source.
Multi-epoch archival data in the J, H, and K bands show a large variability of up to two magnitudes during the decade preceding our 2005 observations. The observed variability in IRS 54 therefore seems to be episodic rather than a single event that we happened to witness with our observations. 2MASS NIR archival data indicate that in 1999 the object was ∼1 mag brighter than the peak seen in this study. However, it is worth noting the differing spatial resolution and the likely contamination from the surrounding nebulosity in the 2MASS data. Nevertheless, as there are also DENIS data (in the J and K bands) from the 1999 epoch that agree with the 2MASS data points, we can trust that IRS 54 brightened around 1999 and then returned to a more quiescent state.
Accretion variability
The mass accretion rate (Ṁ_acc) can be derived from measuring the release of accretion energy (L_acc) in the form of UV continuum and line emission. The relation between L_acc and Ṁ_acc is expressed in Equation 3 (Hartmann et al. 1998):

Ṁ_acc = (1 − R_*/R_in)^-1 L_acc R_* / (G M_*) ≈ 1.25 L_acc R_* / (G M_*),     (3)

where R_* is the radius of the YSO and R_in is the inner (truncation) radius. We adopted the values M_* ∼ 0.15 M_⊙ and R_* ∼ 2 R_⊙ (GL13). The truncation radius is assumed to be R_in = 5 R_*, which gives the factor of 1.25.
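As a worked illustration of Equation 3, the following Python sketch (ours) converts an accretion luminosity into a mass accretion rate using the stellar parameters adopted above; the input luminosity is a placeholder chosen to give a rate of the order of the 2005 value.

# cgs constants
G = 6.674e-8       # cm^3 g^-1 s^-2
M_SUN = 1.989e33   # g
R_SUN = 6.957e10   # cm
L_SUN = 3.828e33   # erg s^-1
YEAR = 3.156e7     # s

def mdot_acc(L_acc_Lsun, M_star=0.15, R_star=2.0, R_in_over_Rstar=5.0):
    # Mdot = (1 - R*/R_in)^-1 * L_acc R* / (G M*), returned in M_sun/yr
    factor = 1.0 / (1.0 - 1.0 / R_in_over_Rstar)   # = 1.25 for R_in = 5 R*
    mdot_cgs = factor * (L_acc_Lsun * L_SUN) * (R_star * R_SUN) \
        / (G * M_star * M_SUN)                     # g s^-1
    return mdot_cgs * YEAR / M_SUN

print(f"{mdot_acc(0.03):.1e}")  # ~1.6e-8 M_sun/yr for L_acc ~ 0.03 L_sun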
To calculate L_acc, we used the empirical relation between line luminosity and accretion luminosity given by Alcalá et al. (2017) for the hydrogen-recombination lines Paβ (1.282 µm) and Brγ (2.166 µm). The observed line flux is reddened by the dust in the YSO system. The degree to which this reddening occurs can be expressed as

F_λ^0 = F_obs × 10^(0.4 A_λ),

where F_λ^0 is the line flux emitted from the source corrected for extinction, F_obs is the measured line flux (affected by extinction),
and A_λ is the extinction measured at a specific wavelength. It is especially important to take this reddening into account when studying these early stages of star formation, as protostars are deeply embedded in their parent cloud and the value of A_λ is not only non-negligible but can be quite large. With the extinction-corrected flux and assuming d = 137 ± 5 pc (Sullivan et al. 2019), we derive the line luminosities and use the relation

log(L_acc/L_⊙) = a log(L_line/L_⊙) + b

to derive L_acc, where a and b are the parameters derived in Alcalá et al. (2017) and depend on the particular line being measured: the Brγ line has a = 1.19 ± 0.10 and b = 4.02 ± 0.51, and the Paβ line has a = 1.06 ± 0.07 and b = 2.76 ± 0.34. The derived Ṁ_acc values can be found in Table 3. We average the results from the two lines to obtain Ṁ_acc = (1.7 ± 0.5) × 10^-8 M_⊙ yr^-1 in 2005 and (2.6 ± 0.5) × 10^-7 M_⊙ yr^-1 in 2013. This shows that the mass accretion rate increased by one order of magnitude between 2005 and 2013. Notably, A_V increases in tandem with Ṁ_acc, although an increase in extinction is normally associated with a decrease in flux. In the case of IRS 54, however, the accretion burst is so strong that the flux increase is apparent even with the extinction dampening it.
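The flux-to-accretion-rate chain (dereddening, conversion to line luminosity at d = 137 pc, and the Alcalá et al. 2017 relation) can be sketched in Python as follows; the function names are ours and the inputs are placeholders.

import numpy as np

PC = 3.086e18      # cm
L_SUN = 3.828e33   # erg s^-1

# (a, b) coefficients of Alcala et al. (2017) for each line
COEFFS = {"BrG": (1.19, 4.02), "PaB": (1.06, 2.76)}

def L_acc_from_line(F_obs, A_lam, line, d_pc=137.0):
    # Deredden the observed flux, convert to line luminosity,
    # then apply log(Lacc/Lsun) = a*log(Lline/Lsun) + b.
    F0 = F_obs * 10 ** (0.4 * A_lam)
    L_line = 4.0 * np.pi * (d_pc * PC) ** 2 * F0 / L_SUN
    a, b = COEFFS[line]
    return 10 ** (a * np.log10(L_line) + b)   # in L_sun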
To determine A_V, we used the [Fe ii] line ratio, which originates from the jet rather than directly from the accretion region, where one would expect the H I lines to originate. Since the jet extends away from the star, the extinction along it can only decrease with distance from the central source. However, as we extracted spectra on source, the extinction measured applies to a part of the jet that is very close to the central object (within 1″-1.2″, i.e. within ∼150 au), and is therefore a reasonable estimate. Even so, our values of A_V should be taken as lower limits. Extinction dampens the apparent accretion luminosity, and therefore the apparent Ṁ_acc. As a lower limit for A_V has been calculated, the actual Ṁ_acc may be higher than reported. This is explored further in Sect. 4.
For the 2010 archival data, GL13 estimated the extinction on source using a different method from ours, exploiting the measured versus expected ratios of H_2 lines. This leads to an estimate of A_V of ∼30 mag. Using this value, GL13 estimated Ṁ_acc ∼ 3 × 10^-7 M_⊙ yr^-1 from the relation between L_line and L_acc derived by Calvet et al. (2004). In order to compare the 2010 Ṁ_acc with the values found in the other epochs (2005 and 2013), we recalculated it using the same Alcalá et al. (2017) relation between L_line and L_acc used throughout this study. The result is Ṁ_acc = 4.3 × 10^-7 M_⊙ yr^-1, which is slightly higher than the value found by
GL13. This result further strengthens the idea that the burst maximum was closer to the 2010-2011 epochs than to the 2013 one.
Accretion and extinction variability
From our observations, it is clear that IRS 54 underwent significant changes in luminosity, accretion, and extinction during the period from 2005 to 2014. We interpret this sharp increase in flux as an accretion burst that peaked between 2010 and 2013.
The Ṁ_acc between 2005 and 2013 (Table 3) increases by a factor of ∼20, while A_V increases by nine magnitudes (corresponding to a change in flux attenuation of a factor of ∼4000 in the V band and ∼2.4 in the K band). While both of these are significant changes, it is clear that the accretion has the larger effect on the SED of IRS 54 over this time period, because we see an overall increase in luminosity, especially in the K band, where the continuum flux increases by a factor of approximately six. The photometry also reflects this increase in flux during the burst (Fig. 7), supporting the idea of an accretion burst. The MIR data show a large increase of over a magnitude in 2010, followed by a decrease and subsequent small fluctuations from 2014 to 2019. The MIR shows a sharper increase in flux in 2010 than seen in the NIR, likely because extinction affects these longer wavelengths to a lesser extent. To gauge the effect of visual extinction at a given wavelength, the ratio A_V/A_λ can be used. In the case of the MIR, A_V/A_L is ∼17 and A_V/A_M is ∼43 (Rieke & Lebofsky 1985). Namely, A_V = 15 mag and A_V = 24 mag (the values found in this study) translate to 0.88 mag and 1.41 mag in the L band, and to only 0.35 mag and 0.56 mag in the M band, respectively.
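These band-to-band conversions are simple arithmetic on the tabulated ratios; the following Python snippet reproduces the quoted numbers.

# A_lam = A_V / (A_V/A_lam), with the Rieke & Lebofsky (1985) ratios
for A_V in (15.0, 24.0):
    print(f"A_V={A_V:.0f}: A_L={A_V/17:.2f} mag, A_M={A_V/43:.2f} mag")
# -> 0.88/0.35 mag for A_V=15 and 1.41/0.56 mag for A_V=24

# a 9 mag rise in A_V corresponds to a V-band attenuation factor of
print(10 ** (0.4 * 9.0))   # ~4000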
As the extinction was seen to increase between 2005 and 2013, it is important to note that the spectra shown in Fig. 3 are reddened by extinction. A_V was thus calculated in the data wherever appropriate emission lines were available for the analysis (see Sect. 3.3). Between 2005 and 2013, we find a large increase in A_V (from ∼15 mag to ∼24 mag). As the line and continuum fluxes also increase over this time frame, the change cannot be accounted for solely by variable extinction. All of the line-flux increases that we measure, especially in the K band, are in fact more pronounced in the de-reddened spectra (see Figs. A.1 and A.2). The de-reddened spectra indicate that the 2010 epoch was close to the peak of the burst, as its flux is similar to that of 2013 in the K band when both are corrected for extinction. However, it is important to remember that the A_V in 2010 was estimated in GL13 using H_2 lines rather than the [Fe ii] lines used for 2005 and 2013, and we must therefore be cautious when comparing these values.
Both extinction and accretion can affect the shape of the SED of a protostar, but in different ways. An increase in accretion would result in a more pronounced increase in luminosity at shorter wavelengths. For example, in the NIR, it would increase the flux in the J band more than in the K band, effectively flattening the SED. Alternatively, an increase in extinction would result in a steeper slope of the SED in this wavelength range, decreasing the observed flux more at the shorter wavelengths (J band) than at the longer wavelengths (K band). Our results indicate that the change in SED shape observed is not due solely to one or the other of these phenomena, but to a combination of the two. We see an increase in steepness in the slope with increased extinction, but also an increase of flux in the H and especially the K bands. This combination of both accretion and extinction is representative of the complex processes at work and how they are related to one another in IRS 54. The combined effect of these two processes is quite unique, because it tends to flatten the light curves more at shorter wavelengths. This can be seen qualitatively in Fig. 7, where the J band light curve is much flatter with respect to the K, W1, or W2 bands.
A possible explanation for this tandem increase of both extinction and accretion is that the increased accretion and ejection lifts a large amount of dust from the disc, which crosses the line of sight and therefore produces more extinction. The edge-on geometry of the system supports this interpretation, as any dust lifted as a result of an accretion or ejection burst would easily intersect the line of sight between the observer and the source. Therefore, this system demonstrates an accretion or ejection burst activity that also increases the visual extinction along the line of sight. A similar increase of extinction was observed for RW Aur, whose photometric and polarimetric variability were explained by the presence of dust in the disc wind (Dodin et al. 2019; Koutoulaki et al. 2019).
The mass accretion rates derived in this study are likely lower limits, as the method used to determine A_V utilised outflow-tracing lines, which originate further from the source, where the extinction is lower. As can be seen in Table 3, the values of Ṁ_acc derived from the two different lines are slightly different. Incorporating a higher A_V value into the calculation of Ṁ_acc would yield a higher mass accretion rate. The accuracy of our A_V estimates is thus investigated further by plotting the Ṁ_acc derived from the Paβ and Brγ lines, and the difference in accretion luminosities measured from the two lines (ΔL_acc), as a function of A_V (Fig. 8). The disagreement between the two estimates is strongest in the low-extinction limit, and they diverge for A_V values larger than ∼20 mag. In the 2013 case, the difference between the Ṁ_acc values calculated from either line is largest at A_V ∼25 mag, which is where our measurement of A_V ∼24 mag resides. However, this strengthens our assumption that this is a lower limit for the extinction, as values approaching 30 mag provide better agreement. The red star in Fig. 8 marks the value of A_V that minimises the disparity between the Ṁ_acc values derived from the Brγ and Paβ measurements: A_V ∼ 18.5 mag in 2005 and ∼29.5 mag in 2013. The latter is most likely close to the actual extinction toward the source in 2013. These A_V values correspond to Ṁ_acc ∼ 3.1 × 10^-8 M_⊙ yr^-1 in 2005 and Ṁ_acc ∼ 7.3 × 10^-7 M_⊙ yr^-1 in 2013, respectively.
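The scan over A_V that minimises the disagreement between the two tracers can be sketched as follows, reusing mdot_acc() and L_acc_from_line() from the sketches above; the extinction-law ratios A_λ/A_V at the two line wavelengths are assumed, approximate values, not numbers taken from this paper.

import numpy as np

RATIO = {"PaB": 0.27, "BrG": 0.12}   # assumed A_lam/A_V at 1.282/2.166 micron

def best_AV(F_pab, F_brg, AV_grid=np.linspace(0.0, 40.0, 401)):
    # Return the A_V minimising |Mdot(PaB) - Mdot(BrG)| for observed fluxes
    diffs = [abs(mdot_acc(L_acc_from_line(F_pab, AV * RATIO["PaB"], "PaB"))
                 - mdot_acc(L_acc_from_line(F_brg, AV * RATIO["BrG"], "BrG")))
             for AV in AV_grid]
    return AV_grid[int(np.argmin(diffs))]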
Jet variability and asymmetry
Examining the H 2 line emission maps in Fig. B.1, the 2014 epoch exhibits lower luminosity in line emission than that of the 2010 epoch. We sought to investigate whether the luminosity in the region of the jet was also changing. In principle, the jet would not be expected to change instantaneously with a change in accretion. Although accretion and ejection are linked, the observed jet emission originates from further out in the system and consequently takes longer to reflect an increase in accretion. Therefore, between 2010 and 2014, where accretion was observed to change, the jet would not necessarily change accordingly. To determine this, spectra from multiple regions of the same size across the SINFONI image were extracted in 2010 and 2014 and compared. Unexpectedly, we found the flux of the regions extracted along the jet to vary between the two epochs. A possible explanation is that these regions are contaminated by scattered light, which does reflect the changes in accretion luminosity. The cavity walls surrounding the central source are also illuminated by this scattered light and their flux also increases during the burst. It is clear that the jet of IRS 54 has both an atomic and a molecular component, as seen in Figs. 1 and 2 where the H 2 line emission is predominantly redshifted and the [Fe ii] line emission is predominantly blueshifted. This implies different excitation conditions and possibly different velocities of the jet material ejected in the two lobes. This asymmetry could be due in part to the inhomogeneity of the interstellar medium (ISM) in the region, or is potentially an effect of a misalignment in the magnetic fields of the protostar, but investigating this issue is beyond the scope of this study.
It is also interesting to note that during the 2014 dimming, the flux from the jet-tracing lines (H_2 and [Fe ii]) dropped below that of 2005. In contrast, the flux from the accretion-tracing lines (Paβ, Br12, and Brγ) in 2014 dropped only to a level between those of 2005 and 2010, which reflects how the continuum changed during this period across all bands (J, H, and K). This difference in how the fluxes changed in 2014 is most obvious in Fig. 6b, where the Br12 line is adjacent to the [Fe ii] 1.644 µm emission line. This implies that the processes of accretion and ejection did not behave synchronously following the burst, as we would expect considering that they originate from different regions.
A new EXor object?
The increase of mass accretion rate seen in IRS 54 is consistent with that observed in EXor objects. In addition, EXor burst timescales and frequencies are also in line with our observations of IRS 54. As our observations are at NIR wavelengths, the observed increase in luminosity of IRS 54 is lower than what would be observed for typical, much less embedded EXors, whose bursts are usually observed at optical wavelengths, where the amplitude is expected to be larger. Considering this, and the fact that IRS 54 is a very low-mass star, its luminosity increase is likely on par with typical EXor-type bursts. The brightness of IRS 54 increases by more than one magnitude in the K band (and varies by more than two magnitudes over 20 years in the J, H, and K bands), and Ṁ_acc increases by at least an order of magnitude.
We also note a similar behaviour of the EX Lup and IRS 54 light curves in the MIR in terms of strength, duration, and shape (see Figure 1 of Ábrahám et al. 2019, and our Fig. 7). As for IRS 54, the EX Lup W1 and W2 light curves (during the 2011 burst) have a similar shape, showing a steep rise at the beginning and a smoother, flickering decline lasting a few years, as indeed seen in IRS 54 (see top panels of Fig. 7). In both objects the MIR brightness increases by a couple of magnitudes.
Both the duration (a few years) and the intensity of the burst hint at an EXor-type event, although some of the typical disc spectral features, such as the CO band-head lines in emission and the Na lines, are missing. This might be due to the geometry of IRS 54, which prevents us from seeing the inner disc, where these signatures originate. On the other hand, the hydrogen lines detected in IRS 54 that trace accretion are likely seen in scattered light. Indeed, the low-J lines of the v = 2-0 CO band head are seen in absorption, suggesting cold gas (with a temperature of a few hundred kelvin), likely originating in the outer disc atmosphere, absorbing emission coming from a hotter (inner) region emitting CO. This may hint at CO band heads in emission, typical of EXor bursts, being present in the inner gaseous disc of IRS 54. While it is not possible at this time to definitively establish whether or not IRS 54 is an EXor-type object, the issue is worthy of further investigation, as this source would be the first very low-mass protostar in which an EXor-type burst is observed.
Conclusions
In the course of this study, data obtained with ISAAC and SINFONI over four epochs (2005, 2010, 2013, and 2014) were reduced and analysed to assess the variability of IRS 54. This was done over the wavelength range 1.24 µm to 2.45 µm (J, H, and K bands). Significant changes in flux were found between the epochs, reflecting an increase between 2005 and 2013 and a drop in 2014. The light curves in the W1 and W2 bands show a similar behaviour, with a steep rise in 2010 and possibly a secondary maximum after 2014. This increase in luminosity is accompanied by a burst in the mass accretion rate, Ṁ_acc, which increases from ∼1.7 × 10^-8 M_⊙ yr^-1 in 2005 to ∼2.6 × 10^-7 M_⊙ yr^-1 in 2013. This burst is consistent with the photometric data gathered during these same epochs. Going back to archival data from 1999, the photometry shows that this protostar went through a previous change in luminosity of approximately two magnitudes in the J, H, and K bands between 1999 and 2005. These two large changes in flux suggest that these bursts may be episodic.
Specific emission lines were analysed to trace this variability and to calculate Ṁ_acc and A_V, which were found to increase in tandem with one another. Maps of the jet-tracing emission (H_2 and [Fe ii]) were also generated to examine how the emission varies spatially around the central star of the YSO. The [Fe ii] emission was found to come predominantly from the blueshifted component of the jet, while the H_2 emission, though primarily originating from the redshifted component of the jet, also illuminates the cavity walls in scattered light. This asymmetry is notable and may help in understanding the inner mechanisms at work in the YSO. Two possible causes for this asymmetry in the jet emission are an inhomogeneous interstellar medium in the region and misaligned magnetic fields in the protostar.
Examining the SED of each epoch and its variability, we deduce that the changes it exhibits reflect a combination of increases in both accretion and extinction. A possible explanation for this tandem increase of the two parameters is that the enhanced accretion and ejection activity during the burst lifts material into the line of sight and obscures the YSO.
The timescales of the burst seen in IRS 54 are reminiscent of an EXor-type object, and its increase in luminosity is exceptionally large over a short period of time, especially considering that it is a VLMS. Further investigation of IRS 54 as a potential EXor-type object is warranted: if confirmed, IRS 54 would be the lowest-mass Class I source observed to undergo this type of violent burst in accretion and ejection.
"year": 2020,
"sha1": "2564d5fbfee4798a41b4a20971dbd7f2cf689b39",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2020/11/aa38897-20.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4d8566efc910db6e4576d2b156253a4fda38a0e3",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
The six operations for sheaves on Artin stacks I: Finite Coefficients
In this paper we develop a theory of Grothendieck's six operations for lisse-étale constructible sheaves on Artin stacks which are locally of finite type over a suitable regular base of dimension at most 1.
Introduction
We denote by Λ a Gorenstein local ring of dimension 0 and characteristic l. Let S be an affine regular, noetherian scheme of dimension ≤ 1 and assume l is invertible on S. We assume that all S-schemes of finite type X satisfy cd l (X) < ∞ (see 1.0.1 for more discussion of this). For an algebraic stack X locally of finite type over S and * ∈ {+, −, b, ∅, [a, b]} we write D * c (X ) for the full subcategory of the derived category D * (X ) of complexes of Λ-modules on the lisse-étale site of X with constructible cohomology sheaves.
In this paper we develop a theory of Grothendieck's six operations for lisse-étale constructible sheaves on Artin stacks locally of finite type over S. In forthcoming papers, we will also develop a theory of adic sheaves and perverse sheaves for Artin stacks. In addition to being of basic foundational interest, we hope that the development of these six operations for stacks will have a number of applications. Already the work done in this paper (and the forthcoming ones) provides the necessary tools needed in several papers on the geometric Langlands program (e.g. [17], [15], [11]). We hope that it will also shed further light on the Lefschetz trace formula for stacks proven by Behrend ([6]), and on versions of such a formula for stacks not necessarily of finite type. We should also remark that recent work of Toën should provide another approach to defining the six operations for stacks, and in fact should generalize to a theory for n-stacks. The main tool is to define Rf_!, f^!, even for unbounded constructible complexes, by duality.
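For concreteness, the duality definitions alluded to here can be sketched as follows (a minimal sketch in LaTeX, using the dualizing functor D_X(−) = Rhom(−, Ω_X) introduced later in the paper; the exact numbering of these definitions in the source is not reproduced):

\[
Rf_! := D_{\mathcal{Y}} \circ Rf_* \circ D_{\mathcal{X}}, \qquad f^! := D_{\mathcal{X}} \circ f^* \circ D_{\mathcal{Y}},
\]

so that all four shriek-type operations on unbounded constructible complexes are reduced to f^*, Rf_*, and biduality.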
One of the key points is that, as observed by Laumon, the dualizing complex is a local object of the derived category and hence has to exist for stacks by glueing (see 2.3.3). Notice that this formalism applies to non-separated schemes, giving a theory of cohomology with compact supports in this case. Previously, Laumon and Moret-Bailly constructed the truncations of dualizing complexes for Bernstein-Lunts stacks (see [16]). Our construction reduces to theirs in this case. Another approach, using a dual version of cohomological descent, has been suggested by Gabber but seems to be technically much more complicated. Remark 1.0.1. The cohomological dimension hypothesis on schemes of finite type over S holds, for instance, if S is the spectrum of a finite field or of a separably closed field. In dimension 1, it holds, for instance, for the spectrum of a complete discrete valuation ring with residue field either finite or separably closed, or if S is a smooth curve over C or F_q (cf. [4], exp. X and [21]). In these situations, cd_l(X) is bounded by a function of the dimension dim(X). Notice that, as pointed out by Illusie, recent results of Gabber enable one to dramatically weaken the hypothesis on S. Unfortunately, no written version of these results seems to be available at this time.
1.1. Conventions. Recall that for any ring O of a topos, the category of complexes of O-modules has enough K-injective (or homotopically injective) complexes. Recall that a complex I is K-injective if for any acyclic complex of O-modules A, the complex Hom(A, I) is acyclic (see [22]).
For instance, any injective resolution in the sense of Cartan-Eilenberg of a bounded below complex is K-injective. This result is due, at least for sheaves on a topological space, to [22], and it enables one to extend the formalism of direct images and Rhom to unbounded complexes. The result in fact holds for any Grothendieck category ([20]). Notice also that the category of O-modules has enough K-flat objects, enabling one to define ⊗^L for unbounded objects ([22]).
All the stacks we will consider will be locally of finite type over S. As in [16], lemme 12.1.2, the lisse-étale topos X_lis-ét can be defined using the site Lisse-Et(X) whose objects are S-morphisms u : U → X where U is an algebraic space which is separated and of finite type over S. (We will often write f^*, f_!, f_*, f^! for Lf^*, Rf_!, Rf_*, Rf^!.) The topology is generated by the pretopology whose covering families are finite families (U_i, u_i) → (U, u) such that U_i → U is surjective and étale (use the comparison theorem [2], III.4.1, remembering that X is locally of finite type over S). Notice that products over X are representable in Lisse-Et(X), simply because the diagonal morphism X → X ×_S X is representable by definition ([16]).
If C is a complex of sheaves and d a locally constant integer-valued function, C(d) is the Tate twist and C[d] the shifted complex. We denote C(d)[2d] by C⟨d⟩. Let Ω = Λ⟨dim(S)⟩ be the dualizing complex of S ([9], "Dualité"). (iii) For every n the map I_n → I_{n-1} is surjective with kernel K_n a bounded below complex of flasque O-modules.
(iv) For any pair of integers n and i the sequence (2.1.0.1) 0 → K_n^i → I_n^i → I_{n-1}^i → 0 is split.
Remark 2.1.1. In fact ( [22], 3.7) shows that we can choose I n and K n to be complexes of injective O-modules (in which case (iv) follows from (iii)). However, for technical reasons it is sometimes useful to know that one can work just with flasque sheaves.
We make the following finiteness assumption, which is the analog of the hypotheses of [22]: for every U of finite type over S and every F ∈ C, one has H^n(U, F) = H^n(U_ét, F_U), which is zero for n bigger than a constant depending only on U (and not on F). Therefore, one can take the trivial covering in this case. We could also take O = O_X and C to be the class of quasi-coherent sheaves.
With hypothesis 2.1.2, one has the following criterion for f being a quasi-isomorphism (cf. [22], 3.13). To see that H j (M) → H j (I) is surjective, let U ∈ S be an object and γ ∈ Γ(U, I j ) an element with dγ = 0 defining a class in H j (I)(U). Since I = lim ← − I n the class γ is given by a compatible collection of sections γ n ∈ Γ(U, I j n ) with dγ n = 0. Let (U = {U i → U}, n 0 ) be the data provided by 2.1.2. Let N be an integer greater than n 0 − j. For m > N and U i ∈ U the sequence is exact. Indeed K m is a bounded below complex with H j (K m ) ∈ C for every j and H j (K m ) = 0 for j ≥ −m + 2. It follows that H j (U i , K m ) = 0 for j ≥ n 0 − m + 2.
Since the maps Γ(U i , I r m ) → Γ(U i , I r m−1 ) are also surjective for all m and r, it follows from ( [22], 0.11) applied to the system is an isomorphism.
Then since the map H j (M) → H j (I m ) is an isomorphism it follows that for every i the restriction of γ to U i is in the image of H j (M)(U i ).
Next consider a fibred topos T → D with corresponding total topos T_• ([3], VI.7). We call T_• a D-simplicial topos. Concretely, this means that for each i ∈ D the fiber T_i is a topos and that any δ ∈ Hom_D(i, j) comes together with a morphism of topos δ : T_i → T_j such that δ^-1 is the inverse image functor of the fibred structure. The objects of the total topos are simply collections (F_i ∈ E_i)_{i∈D} together with functorial transition morphisms δ^-1 F_j → F_i for any δ ∈ Hom_D(i, j). We assume furthermore that T_• is ringed by O_• and that for any δ the corresponding morphism of topos is flat. Let C_• be a full subcategory of the category of O_•-modules on a ringed D-simplicial topos. (S ii) The morphism f is induced by a compatible collection of quasi-isomorphisms f_n. (S iii) For every n the map I_n → I_{n-1} is surjective with kernel K_n a bounded below complex of injective O-modules.
(S iv) For any pair of integers n and i the sequence with the obvious transition morphisms. It is exact by the flatness of the morphisms δ. It follows that e * i takes injectives to injectives and commutes with direct limits. We can therefore apply 2.1.4 to e * i M → e * i I to deduce that this map is a quasi-isomorphism. In what follows we call a K-injective resolution f : M → I obtained from data (i)-(iv) as above a Spaltenstein resolution.
The main technical lemma is the following. (2) There exists i 0 such that R i ǫ * H n (C) = 0 for any n and any i > i 0 .
Proof: By 2.1.9 and assumption (1), there exists a Spaltenstein resolution f : C → I of C.
Let J n := ǫ * I n and D n := ǫ * K n . Since the sequences 2.1.8.1 are split, the sequences The exact sequence 2.1.8.1 and property (S ii) defines a distinguished triangle showing that K n is quasi-isomorphic to H −n (C)[n]. Because K n is a bounded below complex of injectives, one gets Rǫ * H −n (C)[n] = ǫ * K n and accordingly By assumption (2), we have therefore By ( [22], 0.11) this implies that is an isomorphism for j ≥ −n + i 0 . But, by adjunction, ǫ * commutes with projective limit. In particular, one has lim ← − J n = ǫ * I, and by (S i) and (S ii) Rǫ * C = ǫ * I and Rǫ * τ ≥−n C = ǫ * J n .
Thus for any n such that j ≥ −n + i 0 one has Let C be a full subcategory of the category of Ψ-modules, and assume that C is closed under kernels, cokernels and extensions (one says that C is a Serre subcategory). Let D(S) denote the derived category of Ψ-modules, and let D C (S) ⊂ D(S) be the full subcategory consisting of complexes whose cohomology sheaves are in C . Let C • denote the essential image of C under the functor ǫ * : We assume the following condition holds: obtain by applying ǫ * ǫ * a commutative diagram with exact rows It follows that α is an isomorphism. Furthermore, since C is closed under extensions we have There exists a unique ϕ ∈ Hom(F 1 , F 2 ) such that f = ǫ * ϕ.
Because ǫ * is exact, it maps the kernel and cokernel of ϕ, which are objects of C , to the kernel and cokernel of f respectively. Therefore, the latter are objects of C • .
Let D(T • ) denote the derived category of O • -modules, and let D C• (T • ) ⊂ D(T • ) denote the full subcategory of complexes whose cohomology sheaves are in C • .
Since ǫ is a flat morphism, we obtain a morphism of triangulated categories (the fact that these categories are triangulated comes precisely from the fact that both C and C • are Serre categories [12]).
In particular, we get by induction R j ǫ * M • ∈ C . Thus Rǫ * defines a functor To prove 2.2.3 it suffices to show that for M • ∈ D C• (T • ) and F ∈ D C (S) the adjunction maps are isomorphisms. For this note that for any integers j and n there are commutative diagrams By the observation at the begining of the proof, there exists an integer n so that the vertical arrows in the above diagrams are isomorphisms. This reduces the proof 2.2.3 to the case of a bounded below complex. In this case one reduces by devissage to the case when M • ∈ C • and F ∈ C in which case the result holds by assumption.
The Theorem applies in particular to the following examples.
Example 2.2.4. Let S be an algebraic space and X • → S a flat hypercover by algebraic spaces.
We then obtain an augmented simplicial topos ǫ : (X •,ét , O X •,ét ) → (Sé t , Oé t ). Note that this augmentation is flat. Let C denote the category of quasi-coherent sheaves on Sé t . Then the category C • is the category of cartesian sheaves of O X •,ét -modules whose restriction to each X n is quasi-coherent. Let D qcoh (X • ) denote the full subcategory of the derived category of O X•,ét -modules whose cohomology sheaves are quasi-coherent, and let D qcoh (S) denote the full subcategory of the derived category of O Sé t -modules whose cohomology sheaves are quasicoherent. Theorem 2.2.3 then shows that the pullback functor is an equivalence of triangulated categories with quasi-inverse Rǫ * .
Example 2.2.5. Let X be an algebraic stack and let U • → X be a smooth hypercover by algebraic spaces. Let D(X ) denote the derived category of sheaves of O X lis-ét -modules in the topos X lis-ét , and let D qcoh (X ) ⊂ D(X ) be the full subcategory of complexes with quasicoherent cohomology sheaves.
Let U + • denote the strictly simplicial algebraic space obtained from U • by forgetting the degeneracies. Since the Lisse-Étale topos is functorial with respect to smooth morphisms, we therefore obtain a strictly simplicial topos U •lis-ét and a flat morphism of ringed topos On the other hand, there is also a natural morphism of ringed topos with π * and π * both exact functors. Let D qcoh (U •ét ) denote the full subcategory of the derived category of O U •ét -modules consisting of complexes whose cohomology sheaves are quasicoherent (i.e. cartesian and restrict to a quasi-coherent sheaf on each U nét ). Then π induces an equivalence of triangulated categories D qcoh (U •ét ) ≃ D qcoh (U •lis-ét ). Putting it all together we obtain an equivalence of triangulated categories D qcoh (X lis-ét ) ≃ D qcoh (U •ét ). Let ∆ denote the strictly simplicial category of finite ordered sets with injective order preserving maps, and let ∆ + ⊂ ∆ denote the full subcategory of nonempty finite ordered sets.
Let T be a topos and U · → e a strictly simplicial hypercovering of the initial object e ∈ T.
For [n] ∈ ∆ write U n for the localized topos T| Un where by definition we set U ∅ = T. Then we obtain a strictly simplicial topos U · with an augmentation π : U · → T.
Let Λ be a sheaf of rings in T and write also Λ for the induced sheaf of rings in U · so that π is a morphism of ringed topos.
Let C · denote a full substack of the fibered and cofibered category over ∆ [n] → (category of sheaves of Λ-modules in U n ) such that each C n is a Serre subcategory of the category of Λ-modules in U n . For any [n] we can then form the derived category D C (U n , Λ) of complexes of Λ-modules whose cohomology sheaves are in C n . The categories D C (U n , Λ) form a fibered and cofibered category over ∆.
We make the following assumptions on C : the topos U n is equivalent to the topos associated to a site S n such that for any object V ∈ S n there exists an integer n 0 and a covering {V j → V} in S n such that for any F ∈ C n we have H n (V j , F) = 0 for all n ≥ n 0 .
(ii) The natural functor is an equivalence of categories.
Remark 2.3.2. The case we have in mind is when T is the lisse-étale topos of an algebraic stack X locally of finite type over an affine regular, noetherian scheme of dimension ≤ 1, U · is given by a hypercovering of X by schemes, Λ is a Gorenstein local ring of dimension 0 and characteristic l invertible on X , and C is the category of constructible Λ-modules. In this case the category D c (X lis-et , Λ) is compactly generated. Indeed a set of generators is given by sheaves j ! Λ[i] for i ∈ Z and j : U → X an object of the lisse-étale site of X .
There is also a natural functor The uniqueness is the easy part: The existence part is more delicate. Let A denote the fibered and cofibered category over ∆ whose fiber over [n] ∈ ∆ is the category of Λ-modules in U n . For a morphism α : F ∈ A (n) and G ∈ A (m) we have We write A + for the restriction of A to ∆ + .
Define a new category tot(A + ) as follows: • The objects of tot(A + ) are collections of objects (A n ) n≥0 with A n ∈ A (n).
• For two objects (A n ) and (B n ) we define where the product is taken over all morphisms in ∆ + .
• If f = (f α ) ∈ Hom((A n ), (B n )) and g = (g α ) ∈ Hom((B n ), (C n )) are two morphisms then the composite is defined to be the collection of morphisms whose α component is where the sum is taken over all factorizations of α.
The category tot(A + ) is an additive category.
Let (K, d) be a complex in tot(A + ) so for every degree n we are given a family of objects or equivalently d(α) is a map K n,p → K m,p+n−m+1 . In particular, d(id [n] ) defines a map K n,m → K n,m+1 and as explained in [7, 3.2.8] this map makes K n, * a complex. Furthermore for any α the map d(α) defines an α-map of complexes K n, * → K m, * of degree n − m + 1. The collection of complexes K n, * can also be defined as follows. For an integer p let L p K denote the subcomplex with (L p K) n,m equal to 0 if n < p and K n,m otherwise. where d ′′ denote the differential (−1) n d(id [n] ). Note that the functor (K, d) → K n, * commutes with the formation of cones and with shifting of degrees.
As explained in [7, 3. Let K(tot(A + )) denote the category whose objects are complexes in tot(A + ) and whose morphisms are homotopy classes of morphisms of complexes. The category K(tot(A + )) is a triangulated category. Let L ⊂ K(tot(A + )) denote the full subcategory of objects K for which each K n, * is acyclic for all n. The category L is a localizing subcategory of K(tot(A + )) in the sense of [8, 1.3] and hence the localized category D(tot(A + )) exists. The category D(tot(A + )) is obtained from K(tot(A + )) by inverting quasi-isomorphisms. Recall that an object K ∈ K(tot(A + )) is called L-local if for any object X ∈ L we have Hom K(tot(A + )) (X, K) = 0. Note that the functor K → K n, * descends to a functor We define D + (tot(A + )) ⊂ D(tot(A + )) to be the full subcategory of objects K for which there exists an integer N such that H j (K n, * ) = 0 for all n and all j ≤ N.
Recall [8, 4.3] that a localization for an object K ∈ K(tot(A + )) is a morphism K → I with I an L-local object such that for any L-local object Z the natural map is an isomorphism.
Lemma 2.3.5. A morphism K → I is a localization if and only if I is L-local and for every n
the map K n, * → I n, * is a quasi-isomorphism.
Proof: By [8, 2.9] the morphism 2.3.4.1 can be identified with the natural map If K → I is a localization it follows that this map is a bijection for every L-local Z. By Yoneda's lemma applied to the full subcategory of D(tot(A + )) of objects which can be represented by L-local objects, it follows that this holds if and only if K → I induces an isomorphism in D(tot(A + )) which is the assertion of the lemma.
Then K is L-local.
Proof: Let X ∈ L be an object. We have to show that any morphism f : X → K in C(tot(A + )) is homotopic to zero. Such a homotopy h is given by a collection of maps h(α) such that We usually write just h for h(id [n] ).
We construct these maps h(α) by induction on b(α) − s(α). For s(α) = b(α) we choose the h(α) to be any homotopies between the maps f (id [n] ) and the zero maps.
For the inductive step, it suffices to show that commutes with the differentials d, where Σ ′ α=βγ denotes the sum over all possible factorizations with β and γ not equal to the identity maps. For then Ψ(α) is homotopic to zero and we can take h(α) to be a homotopy between Ψ(α) and 0. Define where Σ ′ α=ǫργ denotes the sum over all possible factorizations with ǫ, ρ, and γ not equal to the identity maps.
We can now prove 2.3.6. We compute This completes the proof of 2.3.6. Let be the functor sending a complex K to the object of C(tot(A + )) with ǫ * K n, * = K with maps Proof: We apply the adjoint functor theorem [18, 4.1]. By our assumptions the category D(A (∅)) is compactly generated. Therefore it suffices to show that ǫ * commutes with coproducts (direct sums) which is immediate.
More concretely, the functor Rǫ * can be computed as follows. If K is L-local and there exists an integer N such that for every n we have K n,m = 0 for m < N, then Rǫ * K is represented by the complex with (ǫ * K) p = ⊕ n+m=p ǫ n * K n,m with differential given by d(α). This follows from Yoneda's lemma and the observation that for any F ∈ D(A (∅)) we have Proof: Represent F by a complex of injectives. Then ǫ * F is L-local by 2.3.6. The result then follows from cohomological descent.
Proof: For any integer s and system (K n, * , d(α)) defining an object of C(tot(A + )) we obtain a new object by (τ ≤s K n, * , d(α)) since for any α which is not the identity morphism the map d(α) has degree ≤ 0. We therefore obtain a functor τ ≤s : C(tot(A + )) → C(tot(A + )) which takes quasi-isomorphisms to quasi-isomorphisms and hence descends to a functor Furthermore, there is a natural morphism of functors τ ≤s → τ ≤s+1 and we have K ≃ hocolim τ ≤s K.
Note that the functor ǫ * commutes with homotopy colimits since it commutes with direct sums.
If we show the proposition for the τ ≤s K then we see that the natural map ǫ * (hocolimRǫ * τ ≤s K) ≃ hocolimǫ * Rǫ * τ ≤s K → hocolimτ ≤s K ≃ K is an isomorphism. In particular K is in the essential image of ǫ * . Write K = ǫ * F. Then by It therefore suffices to prove the proposition for K bounded above. Considering the distinguished triangles associated to the truncations τ ≤s K we further reduce to the case when K is concentrated in just a single degree. In this case, K is obtained by pullback from an object of A (∅) and the proposition again follows from 2.3.10.
For an object K ∈ K(tot(A + )), we define τ ≥s K to be the cone of the natural map τ ≤s−1 K →
K.
Observe that the category K(tot(A + )) has products and therefore also homotopy limits.
Let K ∈ K C (tot(A + )) be an object. By 2.3.11, for each s we can find a bounded below complex of injectives I s ∈ C(A (∅)) and a quasi-isomorphism σ s : τ ≥s K → ǫ * I s . Since ǫ * I s is L-local and Proof: It suffices to show that for all n the map K n, * → holimǫ * n I s is a quasi-isomorphism, where ǫ n : U n → T is the projection. Let S n be a site inducing U n as in 2.3.1. We show that for any integer i the map of presheaves on the subcategory of S n satisfying the finiteness is an isomorphism. For this note that for every s there is a distinguished triangle and hence by the assumption 2.3.1 (i) the map is an isomorphism for s < i − n 0 . Since each ǫ * n I s is a complex of injectives, the complex s ǫ * n I s is also a complex of injectives. Therefore It follows that there is a canonical long exact sequence From this and the fact that the maps 2.3.12.1 are isomorphisms for s sufficiently big it follows that the cohomology group H i (V, holimǫ * n I s ) is isomorphic to H i (V, K n, * ) via the canonical map. Passing to the associated sheaves we obtain the proposition. Corollary 2.3.13. Every object K ∈ D C (tot(A + )) is in the essential image of the functor Proof: Since ǫ * also commutes with products and hence also homotopy limits we find that K ≃ ǫ * (holim I s ) in D C (tot(A + )) (note that H i (holimI s ) is in C since this can be checked after applying ǫ * ).
Proof: Represent each K n by a homotopically injective complex (denoted by the same letter) in C(U n , Λ) for every n. For each morphism ∂ i : [n] → [n + 1] (the unique morphism whose image does not contain i) choose a ∂ i -map of complexes ∂ * i : K n → K n+1 inducing the given map in D(U n+1 , Λ) by the strictly simplicial structure. The proof then proceeds by the same argument used to prove [7, 3.2.9].
Combining this with 2.3.13 we obtain 2.3.3.
Dualizing complex
3.1. Dualizing complexes on algebraic spaces. Let W be an algebraic space and w : W → S a separated morphism of finite type. We will define Ω_w by glueing as follows. By the comparison lemma ([2], III.4.1), the étale topos W_ét can be defined using the site Étale(W) whose objects are étale morphisms A : U → W where a : U → S is affine of finite type. The localized topos W_ét|U coincides with U_ét.
Unless otherwise explicitly stated, we will ring the various étale or lisse-étale topos which appear by the constant Gorenstein ring Λ of dimension 0 of the introduction.
Notice that this is not true for the corresponding lisse-étale topos. This fact will cause some difficulties below. Let Ω denote the dualizing complex of S, and let α : U → S denote the structural morphism. We define Ω_A := α^!Ω, which is the (relative) dualizing complex of U, and therefore one gets biduality ([9], «Th. finitude»).
We want to apply the glueing theorem 2.3.3. Let us therefore consider a diagram with a commutative triangle and A, B ∈ Étale(W).
Proof: Let W̃ = U ×_W V: it is an affine scheme, of finite type over S, and étale over both U and V. In fact, we have a cartesian diagram where ∆ is a closed immersion (W/S being separated), showing that W̃ = U ×_W V is a closed subscheme of U ×_S V, which is affine. Looking at the graph diagram with cartesian square, we get that a, b are étale and separated like A, B. One deduces a commutative diagram. We claim that the two pullbacks of the dualizing data agree on W̃: indeed, a and b being smooth of relative dimension 0, one has a^*Ω_A = Ω_W̃ and analogously b^*Ω_B = Ω_W̃. Pulling back by s gives the result.
Therefore (Ω_A)_{A∈Étale(W)} defines locally an object Ω_w of D(W) with vanishing negative Ext's (recall that w : W → S is the structural morphism). By 2.3.3, we get a global object Ω_w ∈ D(W_ét). We need functoriality for smooth morphisms.
If f : W_1 → W_2 is a smooth morphism of relative dimension d between algebraic spaces separated and of finite type over S with dualizing complexes Ω_1, Ω_2, then f^*Ω_2⟨d⟩ = Ω_1. Proof: Start with U_2 → W_2 étale and surjective with U_2 affine, say. Then W̃_1 = W_1 ×_{W_2} U_2 is an algebraic space separated and of finite type over S. Let U_1 → W̃_1 be a surjective étale morphism with U_1 affine, and let g : U_1 → U_2 be the composition. It is a smooth morphism of relative dimension d between affine schemes of finite type, from which follows the formula g^!(−) = g^*(−)⟨d⟩. Therefore, the pull-backs of L_1 = Ω_1⟨−d⟩ and f^*Ω_2 to U_1 are the same, namely Ω_{U_1}⟨−d⟩. One deduces that these complexes coincide on the covering sieve W̃_{1,ét}|U_1 and therefore coincide by 2.3.4 (because the relevant negative Ext's vanish). 3.2. Étale dualizing data. Let X → S be an algebraic S-stack locally of finite type. Let A : U → X be in Lisse-Et(X) and α : U → S the composition U → X → S. We define K_A := Ω_U⟨−d_A⟩, where d_A is the relative dimension of A (which is locally constant). Up to shift and Tate twist, K_A is the (relative) dualizing complex of U, and therefore one gets biduality. We need again a functoriality property of K_A. Let us consider a diagram with a 2-commutative triangle and A, B ∈ Lisse-Et(X).
Proof: Let W = U × X V which is an algebraic space. One has a commutative diagram In particular, a, b are smooth and separated like A, B. One deduces a commutative diagram where w denotes the structural morphism W → S.
and analogously Pulling back by s gives the result.
Remark 3.2.2.
Because all S-schemes of finite type satisfy cd Λ (X) < ∞, we know that K X is not only of finite quasi-injective dimension but of finite injective dimension ( [5], I.1.5). By construction this implies that K A is of finite injective dimension for A as above.
3.3. Lisse-étale dualizing data. In order to define Ω_X ∈ D(X_lis-ét) by glueing, we need to compare the étale and lisse-étale sites: the inclusion of Étale(U) into Lisse-Et(X)_|U induces a continuous morphism of sites. Since finite inverse limits exist in Étale(U) and this morphism of sites preserves such limits, it defines by ([2], 4.9.2) a morphism of topos (we abuse notation slightly and omit the dependence on A from the notation) ǫ : X_lis-ét|U → U_ét.
3.3.1. Let us describe the morphism ǫ more explicitly. Let Lisse-Et(X)_|U denote the category of morphisms V → U in Lisse-Et(X). The category Lisse-Et(X)_|U has a Grothendieck topology induced by the topology on Lisse-Et(X), and the resulting topos is canonically isomorphic to the localized topos X_lis-ét|U. Note that there is a natural inclusion Lisse-Et(U) ֒→ Lisse-Et(X)_|U, but this is not an equivalence of categories, since for an object (V → U) ∈ Lisse-Et(X)_|U the morphism V → U need not be smooth. It follows that an object of X_lis-ét|U is equivalent to the data of a sheaf for every U-scheme of finite type V → U, together with transition morphisms satisfying the usual compatibility with compositions. Viewing X_lis-ét|U in this way, the functor ǫ_* is exact, and accordingly H^*(U, F) = H^*(U_ét, F_U) for any sheaf of Λ-modules on X.
where X_lis-ét|U → X_lis-ét|V is the localization morphism ([2], IV.5.5.2), which we still denote by f, slightly abusively. For a sheaf F ∈ V_ét, the pullback f^-1ǫ^-1F is the sheaf corresponding to the system described above. By the preceding discussion, it follows that the family (κ_A) defines locally an object of D(X_lis-ét).
3.4.
Glueing the local dualizing data. Let A ∈ Lisse-Et(X) and ǫ : X_lis-ét|U → U_ét be as above. Proof: Since ǫ_* is exact and for any sheaf F ∈ U_ét one has F = ǫ_*ǫ^*F, the adjunction map F → Rǫ_*ǫ^*F is an isomorphism for any F ∈ D(U_ét). By trivial duality, one gets the desired formula; taking H^i RΓ gives (i).
The discussion above shows that we can apply 2.3.3 to (κ A ) to get It is well defined up to unique isomorphism.
The independence of the presentation is straightforward and is left to the reader: Lemma 3.4.4. Let p_i : X_i → X, i = 1, 2, be two presentations as above. There exists a canonical isomorphism Ω_X(p_1) ≅ Ω_X(p_2). Definition 3.4.5. The dualizing complex of X is the "essential" value Ω_X ∈ D^b(X_lis-ét) of Ω_X(p), where p runs over presentations of X. It is well defined up to canonical functorial isomorphism and is characterized by Ω_X|U = ǫ^*K_A for any A : U → X in Lisse-Et(X).
3.5.
Biduality. For A, B any complexes of abelian sheaves on some topos, there is a biduality morphism A → Rhom(Rhom(A, B), B). In general, it is certainly not an isomorphism.
However, for A ∈ D_c(U_ét) the biduality morphism A → Rhom(Rhom(A, K_U), K_U) is an isomorphism (where K_U is, up to shift and twist, the dualizing complex of U_ét).
Proof: If A is moreover bounded, this is the usual theorem of [9]. Let us denote by τ_n the two-sided truncation functor τ_{≥−n}τ_{≤n}.
We know that K_U is a dualizing complex ([5], exp. I) and is of finite injective dimension. We will be interested in a commutative diagram. Lemma 3.5.2. Let F ∈ D_c(X_lis-ét) and let F_U ∈ D_c(U_ét) be the object obtained by restriction.
Proof: Let us prove (i). By 3.2.1, one has f^*K_A = K_B; therefore one has a natural morphism. To prove that it is an isomorphism, consider first the case when f is smooth. Because both K_A and K_B are of finite injective dimension (3.2.2), one can assume that F is bounded, and then reduce to the case where F is the constant sheaf, where the claim is obviously true (or use [5], I.7.2). Therefore the result holds when f is smooth.
From the case of a smooth morphism, one reduces the proof in general to the case when X is a scheme. Let F_X ∈ D_c(X_ét) denote the complex obtained by restricting F. By the smooth case already considered, we have the corresponding identification. For (ii), one can also assume F bounded and use [5], I.7.1.
Proof: By definition of constructibility, the H^i(F) are cartesian sheaves. In other words, ǫ^* being exact, the adjunction morphism is an isomorphism. We therefore get a morphism, and the lemma follows from 3.5.2.
Remark 3.5.5. It seems over-optimistic to think that Ω X would be of finite injective dimension even if X is a scheme.
Proof: Immediate consequence of 3.5.2 and 3.5.3.
The functor D_X is an involution. More precisely, the morphism ι : Id → D_X ∘ D_X induced by 3.5.0.1 is an isomorphism.
Proof: We have to prove that the cone C of the biduality morphism is zero in the derived category. Let X be an S-stack locally of finite type. As in any topos, one can define the internal hom Rhom_{X_lis-ét}(F, G) for any F ∈ D^−(X) and G ∈ D^+(X).
Lemma 4.1.1. Let F ∈ D_c^−(X) and G ∈ D_c^+(X), and let j be an integer. Then the restriction of the sheaf H^j(Rhom_{X_lis-ét}(F, G)) to the étale topos of any object U ∈ Lisse-Et(X) is H^j(Rhom_{U_ét}(F_U, G_U)), where F_U and G_U denote the restrictions to U_ét.
Proof: The sheaf H j (Rhom X lis-ét (F, G)) is the sheaf associated to the presheaf which to any smooth affine X -scheme U associates Ext j X lis-ét|U (F, G), where X lis-ét|U denotes the localized topos. Let ǫ : X lis-ét|U → Ué t be the morphism of topos induced by the inclusion ofÉtale(U) into Lisse-Et(X ) |U . Then since F and G have constructible cohomology, the natural maps ǫ * ǫ * F → F and ǫ * ǫ * G → G are isomorphisms in D(X lis-ét|U ). By the projection formula it follows that Sheafifying this isomorphism we obtain the isomorphism in the lemma. Proof: By the previous lemma and the constructibility of the cohomology sheaves of F and G, it suffices to prove the following statement: Let f : V → U be a smooth morphism of schemes of finite type over S, and let F ∈ D − c (Ué t ) and G ∈ D + c (Ué t ). Then the natural map is an isomorphism as we saw in the proof of 3.5.2 (see [5], I.7.2). Proposition 4.1.3. Let X/S be an S-scheme locally of finite type and X → X be a smooth surjection. Let X • → X be the resulting strictly simplicial space. Then for F ∈ D − c (X lis-ét ) and G ∈ D + c (X lis-ét ) there is a canonical isomorphism In particular, Proof: Let X lis-ét|X• denote the strictly simplicial localized topos and consider the morphisms of topos Let Fé t := ǫ * π * F and Gé t := ǫ * π * G. Since F, G ∈ D c (X lis-ét ), the natural maps F ≃ Rπ * ǫ * Fé t and G ≃ Rπ * ǫ * G are isomorphisms (2.2.3). Using the projection formula we then obtain
4.2.
The functor f^*. The lisse-étale site is not functorial (cf. [6], 5.3.12): a morphism of stacks does not in general induce a morphism between the corresponding lisse-étale topos. In [19], a functor f^* is constructed on D_c^+ using cohomological descent. Using the results of 2.2.3, which imply that we have cohomological descent also for unbounded complexes, the construction of [19] can be used to define f^* on the whole category D_c.
Let us review the construction here. Let f : X → Y be a morphism of algebraic S-stacks locally of finite type. Choose a commutative diagram in which the horizontal arrows are presentations, inducing a commutative diagram of strict simplicial spaces and hence a diagram of topos. By 2.2.6 the horizontal morphisms induce equivalences of topos. We define the functor f^* : D_c(Y_lis-ét) → D_c(X_lis-ét) to be the composite, where f_•^* denotes the derived pullback functor induced by the morphism of topos f_• : X_{•,ét} → Y_{•,ét}. Note that f^* takes distinguished triangles to distinguished triangles, since this is true for f_•^* (we write f_* for Rf_*).
Proof: By 4.1.3 and [19], the result follows from the usual adjunction formula.
Proof:
Let Ω X be the dualizing complex of X .
One has the projection formula
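The displayed formula has been lost in this copy; it is presumably the standard projection formula (a hedged reconstruction, consistent with the proof below, which reduces it to the simplicial étale level):

\[
Rf_!(A) \otimes^{L} B \;\simeq\; Rf_!\left(A \otimes^{L} f^{*}B\right).
\]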
Proof: Notice that the left-hand side is well defined by 4.4.1. Using a suitable presentation, one is reduced to the obvious formula for a morphism f_• of strictly simplicial étale topos.
is an isomorphism.
Proof: Using 3.4.1, one is reduced to the usual statement for étale sheaves on algebraic spaces. Because, in this case, both Ω_Y and f^*Ω_Y are of finite injective dimension, one can assume that A is bounded, or even a sheaf. The assertion is well known in this case (by dévissage, one reduces to A = Λ_Y, in which case the assertion is trivial, cf. [5], exp. I). Taking H^0, one obtains that j_* is the right adjoint of j^!, proving the lemma, because j^! = j^* is the right adjoint of j_!. Here RΓ_X is the total derived functor of the left exact functor H^0_X. Lemma 4.6.1. One has Ω_X = i^*RΓ_X(Ω_Y).
Proof: If i is a closed immersion of schemes (or algebraic spaces), one has a canonical (and functorial) isomorphism, simply because i^*H^0_X is the right adjoint of i_*. If K denotes either of the two objects in the equality to be proven, one therefore has Ext^i(K, K) = 0 for i < 0. Therefore, these isomorphisms glue (use theorem 3.2.2 of [7] as before).
for all A ∈ D(X), B ∈ D(Y). Moreover, one has i_! = i_*, and it has a right adjoint, the functor of sections with support on X.
Proof: If A, B are sheaves, one has the usual adjunction formula. Because i_* is exact, its right adjoint sends homotopically injective complexes to homotopically injective complexes. The derived version follows, and one therefore gets the stated identifications; the last formula follows by adjunction.
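The elided adjunction here is presumably the usual one for a closed immersion i : X → Y (a hedged reconstruction, not a quotation from the source):

\[
Rhom_{\mathcal{Y}}(i_*A, B) \;\simeq\; i_*\,Rhom_{\mathcal{X}}(A, i^!B), \qquad i^! = i^* R\Gamma_{\mathcal{X}},
\]

whose global sections give Hom_{D(Y)}(i_*A, B) ≅ Hom_{D(X)}(A, i^!B).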
4.8.
Computation of Rf_! via hypercovers. Let Y be an S-scheme of finite type and f : X → Y a morphism of finite type from an algebraic stack X. Let X_• → X be a smooth hypercover by algebraic spaces, and for each n let d_n denote the locally constant function on X_n giving its relative dimension over X. By construction, the restriction of the dualizing complex Ω_X to each X_{n,ét} is canonically isomorphic to the dualizing complex K_{X_n} = Ω_{X_n}⟨−d_n⟩ of X_n. Let K_{X_•} denote the restriction of Ω_X to X_{•,ét}.
Let L ∈ D − c (X ), and let L| X• denote the restriction of L to X •,ét . Then D X (L)| X• is isomorphic to D X• (L| X• ) := Rhom X •,ét (L| X• , K X• ). In particular, the restriction of Rf ! L to Yé t is canonically isomorphic to (4.8.0.1) where f • : Xé t → Yé t denotes the morphism of topos induced by f .
Let Y •,ét denote the simplicial topos obtained by viewing Y as a constant simplicial scheme.
Let ǫ : Y •,ét → Y ét denote the canonical morphism of topos, and let f̃ : X •,ét → Y •,ét be the morphism of topos induced by f . We have f • = ǫ ◦ f̃ . As in [19], 2.7, it follows that there is a canonical spectral sequence. On the other hand, we have a chain of identifications, where the second isomorphism is by biduality 3.5.7. Combining all this we obtain Proposition 4.8.1. There is a canonical spectral sequence. Example 4.8.2. Let k be an algebraically closed field and G a finite group. We can then compute H * c (BG, Λ) as follows. We first compute Rhom(RΓ ! (BG, Λ), Λ). Let Spec(k) → BG be the surjection corresponding to the trivial G-torsor, and let X • → BG be the 0-coskeleton.
Note that each X n is isomorphic to G n and in particular is a discrete collection of points. Therefore Rf n! Λ ≃ Hom(G n , Λ). From this it follows that Rhom(RΓ ! (BG, Λ), Λ) is represented by the standard cochain complex computing the group cohomology of Λ, and hence RΓ ! (BG, Λ) is the dual of this complex. In particular, this can be nonzero in infinitely many negative degrees.
For example, take G = Z/ℓ for some prime ℓ and Λ = Z/ℓ, since in this case the group cohomology is nonzero in every degree n ≥ 0. From this we obtain a long exact sequence, and from this sequence one deduces that H 0 c (P, Λ) ≃ Λ, H 2 c (P, Λ) ≃ Λ(1), and all other cohomology groups vanish. In particular, the cohomology of P is isomorphic to the cohomology of P 1 .
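For the reader's convenience, here is the standard group-cohomology input behind this example; it is a classical fact supplied here, not recovered from the garbled source:

\[
H^{n}\big(\mathbf{Z}/\ell,\;\mathbf{Z}/\ell\big)\;\simeq\;\mathbf{Z}/\ell
\quad\text{for all } n\ge 0,
\qquad\text{whence}\qquad
H^{n}_{c}\big(B(\mathbf{Z}/\ell),\,\mathbf{Z}/\ell\big)\;\simeq\;\mathbf{Z}/\ell
\quad\text{for all } n\le 0,
\]

since RΓ ! (BG, Λ) is the Λ-linear dual of the cochain complex computing H * (G, Λ).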
Therefore, for any A ∈ D c (Y ), one has the distinguished triangle (4.5.3), which by duality gives a second distinguished triangle. Recall (4.6.2) the formula i ! = RH 0 X . The usual purity theorem for S-schemes gives an isomorphism, which by adjunction for i * gives a map, and this in turn gives by composition a morphism; the latter is the usual morphism for a closed immersion of schemes. This morphism is compatible with the duality in an obvious sense. The usual purity theorem then gives the proposition, at least for A ∈ D + c (Y ). By duality, one gets the proposition for A ∈ D − c (Y ), and therefore for A ∈ D c (Y ) using the distinguished triangle τ ≤0 A → A → τ >0 A.
Base change
We start with a cartesian diagram of stacks, and we would like to prove a natural base change isomorphism. Though technically not needed, before proving the general base change theorem we first consider some simpler cases where one can prove a dual version:
Smooth base change.
In this subsection we prove the base change isomorphism in the case when p (and hence also π) is smooth.
Proof: Because the relative dimensions of p and π are the same, by 4.5.2 one reduces the formula 5.0.1.3 to its dual statement. By adjunction, one has a morphism p * Rf * → Rφ * π * , which we claim is an isomorphism (for complexes bounded below this follows immediately from the smooth base change theorem). To prove that this map is an isomorphism, we consider first the case when Y ′ is an algebraic space and show that our morphism restricts to an isomorphism on Y ′ ét . Since p is representable, X ′ represents a sheaf on X lis-ét . Let X lis-ét|X ′ denote the localized topos, w : X lis-ét|X ′ → Y ′ ét the projection, and let A ∈ D c (X ) be a complex. Let X → X be a smooth surjection with X a scheme, and let X • → X denote the associated simplicial space. Let X ′ • denote the base change of X • to Y ′ . Then X ′ • defines a hypercover of the initial object in the topos X lis-ét|X ′ , and hence we have an equivalence of topos X lis-ét|X ′ ≃ X lis-ét|X ′ • . Let w • : X lis-ét|X ′ • → Y ′ ét be the projection. Since the restriction functor from X lis-ét to X lis-ét|X ′ • takes homotopically injective complexes to homotopically injective complexes (it has an exact left adjoint), and since w • factors through the étale topos of X ′ • , we see that Rφ * (A| X ′ lis-ét ) is isomorphic to Rφ • * (A| X ′ •ét ). We leave it to the reader to check that the resulting isomorphism agrees with the morphism defined above. This proves the case when p is representable.
For the general case, let Y ′ → Y ′ denote a smooth surjection with Y ′ a scheme, so that we have a commutative diagram with cartesian squares. Let A ∈ D c (X ). To prove that the morphism p * Rf * A → Rφ * π * A is an isomorphism, it suffices to show that the induced morphism q * p * Rf * A → q * Rφ * π * A is an isomorphism. By the representable case, q * Rφ * π * A ≃ Rg * σ * π * A, so it suffices to prove that the resulting morphism is an isomorphism. By construction, this map is equal to the base change morphism for the second diagram, and hence it is an isomorphism by the representable case.
Proof:
The key point is the following lemma.
Proof: Using 2.3.4 and smooth base change, it suffices to construct a functorial morphism in the case of schemes, and to show that Ext i (Rf * Ω X , Ω Y ) = 0 for i < 0. Now if X and Y are schemes, we have Ω X = f ! Ω Y , so we obtain, by adjunction and the fact that Rf ! = Rf * , a morphism Rf * Ω X → Ω Y . For the computation of the Ext's, note the displayed identification. We define a map Rf * • D X → D Y • f * by taking the composite. To verify that this map is an isomorphism we may work locally on Y . This reduces the proof to the case when X and Y are algebraic spaces, in which case the result is standard.
One then has the displayed identity (projection formula 4.4.2 for f ). But we have trivially the base change for p, and therefore one obtains the stated isomorphism; applying p * gives the base change isomorphism. Start with an injective complex A on X . Because R 0 f * A i is flasque, it is Γ Y ′ -acyclic. On the other hand, π ! A can be computed by the complex H 0 X ′ (A i ), which is a flasque complex (formal, or [3], V.4.11). Therefore, the direct image by φ is just R 0 φ * H 0 X ′ (A i ). One is reduced to the displayed formula.
Base change by a universal homeomorphism.
If p is a universal homeomorphism, then p ! = p * and π ! = π * . Thus in this case 5.0.1.3 is equivalent to an isomorphism p * Rf * → Rφ * π * . We define such a morphism by taking the usual base change morphism (adjunction).
Let A ∈ D c (X ). Using a hypercover of X as in 5.1, one sees that to prove that the map p * Rf * A → Rφ * π * A is an isomorphism it suffices to consider the case when X is a scheme.
Furthermore, by the smooth base change formula already shown, it suffices to prove that this map is an isomorphism after making a smooth base change Y → Y . We may therefore assume that Y is also a scheme, in which case the result follows from the corresponding classical result for the étale topology (see [1], IV.4.10).
5.5.
Base change morphism in general. Before defining the base change morphism we need a general construction of strictly simplicial schemes and algebraic spaces.
Fix an algebraic stack X . In the following construction all schemes and morphisms are assumed over X (so in particular products are taken over X ).
Let X • be a strictly simplicial scheme, [n] ∈ ∆ + an object, and a : V → X n a surjective morphism. We then construct a strictly simplicial scheme M(X • , a) (sometimes written M X (X • , a) if we want to make clear the reference to X ) with a morphism M(X • , a) → X • such that the following hold: (i) For i < n the morphism M(X • , a) i → X i is an isomorphism.
(ii) M(X • , a) n is equal to V with the projection to X n given by a.
The construction of M(X • , a) is a standard application of the skeleton and coskeleton functors ( [3], exp. Vbis). Let us review some of this because the standard references deal only with simplicial spaces whereas we consider strictly simplicial spaces.
To construct M(X • , a), let ∆ + n ⊂ ∆ + denote the full subcategory whose objects are the finite sets with cardinality ≤ n. Denote by Sch (∆ + n ) opp the category of functors from (∆ + n ) opp to schemes (so Sch (∆ + ) opp is the category of strictly simplicial schemes). Restriction from (∆ + ) opp to (∆ + n ) opp defines a functor (the n-skeleton functor) (5.5.0.1) sq n : Sch (∆ + ) opp → Sch (∆ + n ) opp , which has a right adjoint, called the n-th coskeleton functor. For X • ∈ Sch (∆ + n ) opp , the coskeleton cosq n X in degree i is the limit taken over the category of morphisms [k] → [i] in ∆ + with k ≤ n.
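The displayed formula for the coskeleton was lost in extraction; from the sentence above it can be reconstructed as the following limit (5.5.0.3):

\[
(\operatorname{cosq}_{n}X)_{i}\;=\;\varprojlim_{\substack{[k]\to[i]\ \text{in}\ \Delta^{+}\\ k\le n}} X_{k}.
\]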
Note in particular that for i ≤ n we have (cosq n X) i = X i , since the category of morphisms [k] → [i] has the initial object id : [i] → [i]. Lemma 5.5.1. For any X • ∈ Sch (∆ + n ) opp and i > n, the canonical morphism is an isomorphism.
Proof: Using the formula 5.5.0.3 the morphism can be identified with the natural map which is clearly an isomorphism.
Lemma 5.5.2. The functors sq n and cosq n commute with fiber products.
Proof: The functor sq n commutes with fiber products by construction, and the functor cosq n commutes with fiber products by adjunction.
To construct M(X • , a), we first construct an object M ′ (X • , a) ∈ Sch (∆ + n ) opp . The restriction of M ′ (X • , a) to (∆ + n−1 ) opp will be equal to sq n−1 X, and M ′ (X • , a) n is defined to be V. For 0 ≤ j ≤ n define δ j : M ′ (X • , a) n → M ′ (X • , a) n−1 = X n−1 to be the composite, where δ j,X denotes the map obtained from the strictly simplicial structure on X • . There is an obvious morphism M ′ (X • , a) → sq n (X • ), inducing cosq n (M ′ (X • , a)) → cosq n sq n X • .
We then define M(X • , a) = X • × cosq n sq n X• cosq n M ′ (X • , a), where the map X • → cosq n sq n X • is the adjunction morphism. The map M(X • , a) → X • is defined to be the projection onto the first factor. The properties (i) and (ii) follow immediately from the construction.
Proposition 5.5.3. Let X be an algebraic stack and X • → X a hypercover by schemes. Let n be a natural number and a : V → X n a surjection. Then M X (X • , a) → X is also a hypercover.
If X • is a smooth hypercover and a is smooth and surjective, then M X (X • , a) is also a smooth hypercover.
Proof: By definition of a hypercover, we must verify that for all i the canonical map is surjective. Note that this is immediate for i ≤ n. For i > n we compute using the displayed chain of isomorphisms; here the second isomorphism holds because sq n and cosq n commute with products, and the third isomorphism is by 5.5.1. Hence it suffices to show that the natural map is surjective, which is true since X • is a hypercover. This also proves that if X • is a smooth hypercover and a is smooth, then M X (X • , a) is a smooth hypercover.
Proposition 5.5.4. Given f : X → Y , there exists a commutative diagram (5.5.4.1) in which p and q are smooth hypercovers and each morphism f n : X n → Y n is a closed immersion.
Proof: We construct inductively hypercovers X (n) • → X and Y (n) • → Y and a commutative diagram over f . We further arrange that the following hold: (i) for i < n the maps X (n) i agree with the previously constructed ones. This suffices, for we can then take X • to be the common value. For the base case n = 0, choose any 2-commutative diagram with p i and q i smooth, surjective, and of finite type, and with X i and Y i affine schemes. Then f̃ i is also of finite type, so there exists a closed immersion X i ֒→ A r i Y i , for some integer r, over X i → Y i . Replacing Y i by A r i Y i , we may assume that f̃ i is a closed immersion. We then obtain X (n) n with a and b smooth and surjective, and j a closed immersion, and we define X (n) • accordingly. Remark 5.5.5. The same argument used in the proof shows that for any commutative diagram where p and q are smooth hypercovers, there exists a morphism of simplicial schemes g : X • → Y • over f • , with each g n : X n → Y n an immersion, such that X • (resp. Y • ) is a hypercover of X (resp. Y ). In other words, the category of diagrams 5.5.4.1 is connected.
Let f : X → Y be a morphism of algebraic stacks over S. For F ∈ D − c (X ) we can compute Rf ! F as follows. Let Y • → Y be a smooth hypercover, and let π : X Y• → X be the base change of X to Y • . Let f • : X Y• → Y • be the projection. Let ω X Y• denote the pullback of the dualizing sheaf Ω X to X Y• , and let D X Y• denote the functor Rhom(−, ω X Y• ). Similarly let ω Y• denote the pullback of Ω Y to Y • , and let D Y• denote Rhom(−, ω Y• ).
If d n (resp. d ′ n ) denotes the relative dimension of Y n over Y (resp. Y ′ n over Y ′ ), then d n (resp. d ′ n ) is also equal to the relative dimension of X Yn over X (resp. X ′ Y ′ n over X ′ ). From 4.5.2 it follows that the restriction of ω X Y• to X Yn is canonically isomorphic to Ω X Yn ⟨−d n ⟩.
Similarly, the restriction of ω Y• to Y n is canonically isomorphic to Ω Yn ⟨−d n ⟩. For F ∈ D c (X ), we can then consider D Y• Rf • * D X Y• (π * F). The sheaf D X Y• (π * F) is just the restriction of D X (F) to X Y• . It follows from this that Rf • * D X Y• (π * F) is equal to the restriction of Rf * D X (F) to Y • , and this in turn implies that D Y• Rf • * D X Y• (π * F) is isomorphic to the restriction of Rf ! F to Y •,ét . From this we conclude that Rf ! F is the sheaf obtained from D Y• Rf • * D X Y• (π * F) and the equivalence of categories. Theorem 5.5.6. Let the displayed square be a cartesian square of stacks over S. Then there is a natural isomorphism of functors. Proof: By 5.5.4, there exists a commutative diagram where p and q are smooth hypercovers and j is a closed immersion.
Then there is a cartesian diagram (5.5.6.4), where i and j are closed immersions. Let F denote the resulting functor. Proposition 5.5.8. There is an isomorphism of functors F ≃ Rg ′ * .
The following Lemma therefore shows that there is a canonical morphism A → F ′ (A) functorial in A.
Lemma 5.5.9. For all s ∈ Z there is a canonical isomorphism; in particular, it induces a canonical morphism. Proof: It suffices to construct such a canonical isomorphism over each Y ′ n . Let d n (resp. d ′ n ) denote the relative dimension of Y n (resp. Y ′ n ) over Y (resp. Y ′ ). Note that d n (resp. d ′ n ) is also equal to the relative dimension of X Yn (resp. X ′ Y ′ n ) over X (resp. X ′ ). As mentioned above, we therefore have the corresponding identifications, and from these, together with an elementary manipulation using the identity (4.6.2), one sees that 5.5.9.1 is equal to the composite in question. The last morphism of that composite is induced by the adjunction g ′ * Rg ′ * B → B. This map is functorial in A, so by Yoneda's Lemma we get a canonical morphism Rg ′ * B → F (B). To prove 5.5.8 we show that this map is an isomorphism for all B ∈ D c (X ′ Y ′ • ). For this we can restrict the map to any X ′ Y ′ n . Noting that the shifts and Tate twists cancel as in 5.5.9.1, we obtain the required identification. We leave to the reader the task of verifying that this isomorphism agrees with the map obtained by restriction from the morphism F (B) → Rg ′ * B constructed above, thereby completing the proof of 5.5.8.
By construction this morphism is compatible with smooth base change on Y and Y ′ . It follows that in order to verify that 5.5.9.3 is an isomorphism it suffices to consider the case when Y ′ and Y are schemes. Furthermore, by construction, if X • → X is a smooth hypercover and X ′ • its base change to Y ′ , then the base change arrow 5.5.9.3 is compatible with the spectral sequences 4.8.1. It follows that to verify that 5.5.9.3 is an isomorphism it suffices to consider the case of schemes, which is [4], XVII, 5.2.6. Finally, the independence of the choices follows by a standard argument from 5.5.5. This completes the proof of 5.5.6.
5.6. Equivalence of different definitions of the base change morphism. In this subsection we show that the base change morphism defined in the previous subsection agrees with the morphisms defined earlier for smooth morphisms, immersions, and universal homeomorphisms.
5.6.1. The case when ρ is smooth. Choose a diagram as in 5.5.6.3, and let d denote the locally constant function on Y ′ which is the relative dimension of ρ. For any morphism Z → Y ′ we also write d for the pullback of the function d to Z . Note the identity 5.6.1.1. Proof: Consider the natural map (5.6.2.1); we claim that this map is an isomorphism. This can be verified over each X ′ Y ′ n . Let π n : X Yn → X (resp. π ′ n : X ′ Y ′ n → X ′ ) be the projection. By the equivalence of triangulated categories D c (X Y• ) ≃ D c (X ), there exists an object A ′ ∈ D c (X ) such that the restriction of A to X Yn is isomorphic to π * n A ′ . The morphism 5.6.2.1 is then identified with the isomorphism Rhom(ρ * A ′ , ρ * Ω X ) (4.5.1) ≃ Rhom(π ′ * n ρ * A ′ , π * n ρ * Ω X ) (4.5.1). The same argument proves the analogous statement. For any A ∈ D c (Y • ), let α A denote the resulting isomorphism; it induces for every E ∈ D c (X Y• ) a morphism δ E . The map α A is a special case of a more general class of morphisms defined for A, M ∈ D c (Y • ): here the second morphism is given by 5.5.7 and the third morphism by the adjunction property of ⊗.
Lemma 5.6.3. For any A ∈ D c (Y • ), the map α A is equal to the composite described above. Proof: This follows from the definitions. One then verifies that the corresponding diagram commutes.
Proof: Consider the diagram where, to ease the notation, we write simply [−, −] for Rhom(−, −). An elementary verification shows that each of the small inner diagrams commutes, and hence the big outer rectangle also commutes. Applying j * , we obtain the lemma.
where the third morphism is provided by 5.5.7. As above, one verifies that for A, B, M ∈ D c (Y • ) the corresponding diagram commutes.
From this it follows that if ϕ A : A → F ′ (A) denotes the morphism constructed in the proof of 5.5.8, then the corresponding diagram commutes. Note that the base change morphism in the above diagram is an isomorphism; this can be verified over each X ′ Y ′ n . Here the functor D X Yn i * D X ′ Y ′ n is, up to shift and Tate twist, isomorphic to i ! = i * . The base change morphism is therefore induced by the resulting isomorphism. By the definition of the morphism in 5.5.8, this implies the stated compatibility for any A ∈ D c (X ′ ). The claim follows by combining the commutativity of this diagram with the commutativity of a second diagram (verification left to the reader).
5.6.5. The case when ρ is a universal homeomorphism. The same argument used in the previous section shows the agreement of the base change morphism in 5.5 with the base change morphism in 5.4. Indeed, the only property of smooth morphisms used in the previous section is that the dualizing sheaves can be described as in 5.6.1.1; this also holds when ρ is a universal homeomorphism (with d = 0).
5.6.6. The case when ρ is an immersion. With notation as in 5.3, note first that to prove that the two base change morphisms agree, it suffices to show that they agree on sheaves of the form π * A with A ∈ D c (X ′ ); indeed, for any B ∈ D c (X ), either base change isomorphism factors accordingly. In order to prove that the two base change morphisms agree, it is useful to first give an alternate description of the morphism defined in 5.3.
Proof: Chasing through the definitions, this amounts to the commutativity of the following diagram; we leave this verification to the reader.
Using this alternate description of the base change morphism in 5.3, we can prove the equivalence with that given in 5.5. By a standard reduction it suffices to consider the case of a closed immersion. So fix the diagram 5.5.6.1 with ρ a closed immersion, and choose a diagram as in 5.5.6.3. Since ρ is a closed immersion we may without loss of generality assume that 5.5.6.3 is cartesian. Note that for any [n] ∈ ∆, the restriction of j ! (resp. i ! ) to a functor D(Y n ) → D(Y ′ n ) (resp. D(X Yn ) → D(X ′ Y ′ n )) agrees with the usual extraordinary inverse image. This follows for example from the explicit description of these functors in the proof of 5.6.8.
Lemma 5.6.9. There are canonical isomorphisms ω Y ′ • ≃ j ! ω Y• and ω X ′ Y ′ • ≃ i ! ω X Y• . Proof: By the glueing lemma 2.3.3, it suffices to construct an isomorphism over each Y ′ n (resp. X ′ Y ′ n ). Let d denote the relative dimension of Y n over Y . Then d is also equal to the relative dimension of Y ′ n over Y ′ , the relative dimension of X Yn over X , and the relative dimension of X ′ Y ′ n over X ′ . We therefore have the required identifications. Lemma 5.6.10. For any A ∈ D(Y ′ • ) and B ∈ D(Y • ) (resp. C ∈ D(X ′ Y ′ • ) and E ∈ D(X Y• )) we have j * Rhom(A, j ! B) ≃ Rhom(j * A, B) and i * Rhom(C, i ! E) ≃ Rhom(i * C, E).
That this map is an isomorphism can be verified after restricting to each Y n in which case it follows from the theory for schemes [4], XVIII, 3.1.10. The same argument gives the second isomorphism in the Lemma.
Corollary 5.6.11. For any A, and for B ∈ D(X ′ Y ′ • ), let β B denote the displayed isomorphism, define γ ′ B to be the second isomorphism, and let the map to B be the isomorphism obtained by adjunction.
Following the same outline used in 5.6.1 (replacing the α's, β's, and γ's by the above-defined morphisms), one sees that the morphism 5.5.9.2 in the case of a closed immersion is given by the composite just described. From this it follows that the sequence of morphisms in 5.5.9.3 is identified, via cohomological descent, with the sequence of morphisms 5.6.7.1, and hence the two base change morphisms are the same.
Lemma 5.7.1. There is a natural isomorphism. Proof: By ([5], III.1.7.6) there is, for any smooth morphisms U i → Y i (i = 1, 2) with U i a scheme, a canonical isomorphism. Furthermore, this isomorphism is functorial with respect to morphisms V i → U i . It follows that the sheaf K Y 1 L ⊗ S K Y 2 also satisfies the Ext-condition (2.3.3), and hence to give an isomorphism as in the Lemma it suffices to give an isomorphism in the derived category of U 1 × S U 2 for all smooth morphisms U i → Y i . This we get by tensoring the two evaluation morphisms. For the definition and standard properties of homotopy colimits we refer to [8]. Because the diagonal is cofinal in N × N, the lemma follows.
Proposition 5.7.4. For L i ∈ D − c (Y i ) (i = 1, 2), there is a canonical isomorphism. Proof: By 5.7.1 and 5.7.2 there is a canonical morphism (note that here we also use that K Y i has finite injective dimension). To verify that this map is an isomorphism, it suffices to show that for every j ∈ Z the induced map on cohomology is an isomorphism. Because L ⊗ commutes with homotopy colimits (5.7.3), we deduce from D(A) = hocolim D(τ ≥m A) (use 5.7.3) that to prove this we may replace L i by τ ≥m L i for m sufficiently negative, and therefore it suffices to consider the case when L i ∈ D b c (Y i ). Furthermore, we may work locally in the smooth topology on Y 1 and Y 2 , and therefore it suffices to consider the case when the stacks Y i are schemes. In this case the result is [4], XVII, 5.4.3. Now consider morphisms of S-stacks f i : X i → Y i (i = 1, 2), and let f : X := X 1 × S X 2 → Y := Y 1 × S Y 2 be the morphism obtained by taking fiber products. Let L i ∈ D − c (X i ). Proof: We define the morphism 5.7.5.1 as the composite described above. That this map is an isomorphism follows from a standard reduction to the case of schemes using hypercovers of X i , biduality, and the spectral sequences 4.8.1. | 2014-10-01T00:00:00.000Z | 2005-12-05T00:00:00.000 | {
"year": 2005,
"sha1": "e0f74444c890dfa99589e3bbf99c295ec2fe4ad6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math/0512097",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "23637d9e774867e074db48c314e269dbbdb24690",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
253471798 | pes2o/s2orc | v3-fos-license | Unveiling the Effect of CaF2 on the Microstructure and Transport Properties of Phosphosilicate Systems
As an effective flux, CaF2 is beneficial in improving the fluidity of slag in the steel-making process, which is crucial for dephosphorization. To reveal the existence form and functional mechanism of CaF2 in phosphosilicate systems, the microstructures and transport properties of CaO-SiO2-CaF2-P2O5 quaternary slag systems are investigated by molecular dynamics simulations (MD) combined with experiments. The results demonstrate that the Si-O coordination number does not vary significantly with the increasing CaF2 content, but the P-O coordination number dramatically decreases. CaF2 has a minor effect on the single [SiO4] but makes the structure of the silicate system simple. On the contrary, F− ions could reduce the stability of P-O bonds and promoted the transformation of [PO4] to [PO3F], which is beneficial for making the P element-enriched phosphate network structure more aggregated. However, the introduction of CaF2 does not alter the tetrahedral character of the original fundamental structural unit. In addition, the results of the investigation of the transport properties show that the self-diffusion coefficients of each ion are positively correlated with CaF2 content and arranged in the order of F− > Ca2+ > O2− ≈ P5+ > Si4+. Due to CaF2 reducing the degree of polymerization of the whole melts, the viscosity decreases from 0.39 to 0.13 Pa·s as the CaF2 content increases from 0% to 20%. Moreover, the viscosity of the melt shows an excellent linear dependence on the structural parameters.
Introduction
The physical and chemical properties of slag are crucial for mass transfer and chemical reactions between liquid steel and slag. It is well-known that the physical and chemical properties of slag are determined by its structural characteristics [1,2], and it is of extraordinary interest to study the structural information of slag to understand its performance.
A large number of experimental approaches have been applied to study the structural information of slag, mainly including nuclear magnetic resonance, X-ray diffraction, neutron diffraction, and Raman spectroscopy [3][4][5]. These methods help one to effectively understand the microstructure and unique properties of slag, which represents a significant breakthrough in this research direction. In recent years, with the rapid development of computer technology, a large number of simulation techniques have gradually attracted the attention of scholars. In particular, MD simulations are expected to provide an effective way to understand the slag structure from a microscopic point of view, owing to their advantages; specifically, they are not limited by experimental conditions such as high temperature and pressure. At present, a large number of scholars have used the molecular dynamics (MD) simulation method to study the microstructure and properties of metallurgical slag and have achieved remarkable results [6][7][8].
At present, the structural information of binary and ternary silicate and aluminate melts has been extensively and carefully studied. Phosphate melts, in contrast, have not been studied systematically, owing to the complexity and diversity of their structure. However, phosphorus is one of the harmful elements in steel; an excessive amount of phosphorus is detrimental to its quality and properties. Therefore, dephosphorization is one of the key tasks in steel-making. In addition, phosphorus removal relies on the reaction between steel and slag, so a comprehensive study of the microstructure and transport properties of dephosphorization slag can help clarify the underlying causes of changes in its macroscopic properties. In previous studies, Diao et al. [9,10] studied the microstructure of the ternary CaO-SiO 2 -P 2 O 5 slag system through MD simulation, and the results showed that silicon and phosphorus mainly form tetrahedral structures. Additionally, the concentration of free oxygen decreases significantly with increasing P 2 O 5 content, and the degree of polymerization of the melt increases. Fan et al. [11] reported the existence forms of Si and P in CaO-SiO 2 -P 2 O 5 melt under high basicity; the results showed that both Si and P tend to form complex anions, with P ions more inclined to form a single tetrahedral structure. Moreover, Jiang et al. [12] used MD simulation to study the structure and properties of the molten CaO-SiO 2 -P 2 O 5 -FeO slag system and concluded that the degree of polymerization of the system decreases with increasing basicity.
In our previous studies, a relatively deep understanding of the structural properties of binary phosphate systems was obtained [1]. In actual production, additional components are used to adjust the comprehensive physicochemical properties of the slag to meet the requirements of the metallurgical production process. CaF 2 is widely used as a conventional flux to reduce the viscosity of slag and improve its fluidity. At present, some scholars have carried out studies on the structural properties of CaF 2 -containing glasses. For example, Kansal et al. [13] studied the effect of the CaO/MgO ratio on the structure and thermal properties of CaO-MgO-SiO 2 -P 2 O 5 -CaF 2 glasses and found that CaF 2 always tends to combine with [PO 4 ] to form composite structures. Pedone et al. [14] investigated the influence of halides on the structure of phosphosilicate bioactive glasses by MD simulation; the results show that, in mixed fluoride/chloride-containing glasses, fluorine tends to surround phosphate, whereas chloride moves toward the silicate network. Furthermore, and interestingly, according to relevant reports, F − can also directly participate in the dephosphorization reaction, thus directly affecting the structure of dephosphorization slag [15,16]. However, no reports have been found regarding the existence form of CaF 2 in dephosphorized slag and its effects on structure and properties.
Therefore, to shed light on the existence form and functional mechanism of CaF 2 in dephosphorized slag, in this paper we focus on the quaternary slag system CaO-SiO 2 -CaF 2 -P 2 O 5 and perform a comprehensive analysis of its microstructure using molecular dynamics simulations. With the (xCaO)/(xSiO 2 ) ratio and the P 2 O 5 content of the slag held constant, the influence of CaF 2 on the microstructure and transport performance of the slag system at high temperature is investigated in combination with experiments. The results of this investigation can provide valuable information for understanding the microstructure of dephosphorized slag and clarify the intrinsic link between melt flow properties and the evolution of structural units.
Interatomic Potential
For molecular dynamics simulation, selecting an appropriate potential function and the corresponding parameters is the basis of an accurate calculation. All molecular dynamics simulations in this study were carried out using the Born-Mayer-Huggins (BMH) model [1,7,[9][10][11]], and the potential can be expressed as Equation (1), where Z i and Z j are the effective charges of ions i and j, respectively, e represents the charge of a single electron, ε 0 represents the vacuum permittivity, r ij is the distance between atoms i and j, and A ij , B ij , and C ij are the adjustable parameters of the BMH potential. The three terms on the right-hand side of the formula represent the Coulomb interaction, the short-range repulsion, and the van der Waals interaction, respectively. The interaction potential parameters between the selected particles are listed in Table 1 [11,17,18].
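As a minimal illustration of Equation (1), the following Python sketch evaluates the BMH pair potential. The exact functional form of the short-range term (here A ij exp(−r ij /B ij )) is an assumption, since the displayed equation did not survive extraction; the parameter names mirror those in the text.

import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge e, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def bmh_potential(r_ij, z_i, z_j, a_ij, b_ij, c_ij):
    """Born-Mayer-Huggins pair potential of Eq. (1), r_ij in metres.

    The three terms are the Coulomb interaction, the short-range Born
    repulsion (assumed exponential form) and the van der Waals
    attraction, as described in the text.
    """
    coulomb = z_i * z_j * E_CHARGE**2 / (4.0 * np.pi * EPS0 * r_ij)
    repulsion = a_ij * np.exp(-r_ij / b_ij)
    dispersion = -c_ij / r_ij**6
    return coulomb + repulsion + dispersion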
Simulation Approach
The target sample compositions were chosen within the fully molten composition range at the corresponding temperatures. The CaO-SiO 2 -CaF 2 -P 2 O 5 slag samples were divided into five groups, where the first group, with 0% CaF 2 , served as a reference for the other four groups. As Figure 1 shows, the chemical composition of each group in this study was determined according to the liquid-phase range of the CaO-SiO 2 -CaF 2 -P 2 O 5 quaternary slag system at 1600 • C obtained with the FactSage 8.0 thermodynamic calculation software. The number of atoms of each species was then calculated from the mole fractions of each group. The density of each group at 1600 • C was calculated using the empirical formula reported in the relevant literature [19]. The chemical composition, atomic numbers, and other information about each group of samples are listed in Table 2.
The computational methods used and the choice of parameters are critical factors in achieving efficient and accurate simulations. In this study, about 6000 atoms were randomly placed in a model box. Since the number of simulated atoms is always finite, periodic boundary conditions were imposed on all faces of the model box to obtain an infinite system of atoms without boundaries; the results obtained with periodic boundary conditions are sufficient to reflect the actual situation. All MD simulations used the canonical ensemble (NVT), which means that the calculations were performed in a system with a constant number of atoms (N), sample volume (V), and temperature (T). The Ewald summation method was used for the long-range Coulomb force, and the equations of motion of the atoms were integrated with a leapfrog scheme using a 1 fs timestep. The potential cutoff radius was set to 10 Å in the calculation of the repulsive force. The total length of each simulation was 60 ps, equivalent to 60,000 steps. After the beginning of the simulation, the initial temperature was set at 5000 K (4727 • C) for 15,000 timesteps to agitate the atoms and eliminate the influence of the initial distribution. Secondly, the temperature was cooled down to 1873 K (1600 • C) over 30,000 timesteps. Subsequently, the system was relaxed at 1600 • C for another 15,000 timesteps in the equilibrium calculation. The temperature, volume, and enthalpy remained nearly constant over these 15,000 timesteps, demonstrating that the system had reached equilibrium.
Statistics of Structural Information
The radial distribution function (RDF) is commonly used to investigate the short-range order of melts. Equation (2) gives the mathematical expression of the RDF [20], where V is the volume of the MD-simulated cell, N is the number of particles, and n ij is the average number of j atoms surrounding an i atom within the distance range r ± ∆r/2. The abscissas of the first peak and the first trough of the RDF curve represent the average bond length and the cutoff radius of the corresponding atom pair, respectively. In addition, integration of the corresponding partial RDF of the particles generates the average CN function, which represents the number of j atoms around an i atom within the cutoff radius. The CN function is expressed as Equation (3). Finally, as far as the concentration of oxygen species and the distribution of structural units, Q n , are concerned, this structural information was counted with a MATLAB program, based on the spatial atomic coordinates derived from the MD simulation.
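To make Equations (2) and (3) concrete, the following Python sketch computes a single-frame partial RDF and the corresponding coordination number for a cubic periodic box. The normalization shown is one standard choice and is stated here as an assumption, since the displayed equations themselves did not survive extraction.

import numpy as np

def partial_rdf(pos_i, pos_j, box_length, r_max, dr):
    """Single-frame partial RDF g_ij(r) of Eq. (2), cubic periodic box."""
    nbins = int(round(r_max / dr))
    delta = pos_i[:, None, :] - pos_j[None, :, :]
    delta -= box_length * np.round(delta / box_length)   # minimum image
    dist = np.sqrt((delta ** 2).sum(axis=-1)).ravel()
    dist = dist[(dist > 1e-9) & (dist < r_max)]          # drop self-pairs
    counts, edges = np.histogram(dist, bins=nbins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])                   # bin centres
    rho_j = len(pos_j) / box_length ** 3                 # density of species j
    return r, counts / (len(pos_i) * rho_j * 4.0 * np.pi * r ** 2 * dr)

def coordination_number(r, g, rho_j, r_cut):
    """CN_ij of Eq. (3): integrate 4*pi*rho_j*r^2*g(r) up to the cutoff
    radius (the first minimum of the RDF)."""
    m = r <= r_cut
    return np.trapz(4.0 * np.pi * rho_j * r[m] ** 2 * g[m], r[m])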
Viscosity Calculation
Viscosity is one of the most significant physical parameters of slag; the viscosity of a melt reflects its degree of polymerization. Through statistical analysis of the atomic trajectories from the MD simulation, the mean square displacement (MSD) function of Equation (4) is obtained, where N is the number of particles, r i (t) represents the position coordinates of atom i at time t, and the angle brackets denote a statistical average over many function values. The self-diffusion coefficient can be obtained from the MSD via the Einstein relation, D = (1/6) lim t→∞ d⟨MSD⟩/dt, as shown in Equation (5) [21]. Then, the shear viscosity of the melt can be obtained by combining the self-diffusion coefficient, D, with the Stokes-Einstein equation, Equation (6) [22,23], where K B is the Boltzmann constant, 1.38 × 10 −23 J/K, T is the system temperature, and λ is the particle transition step size, commonly taken as λ = 2R O = 2.8 Å [24][25][26]. Based on the above calculation method, the partial transport properties of the melts can be obtained, and the relationship between structural information and performance can be established.
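The following Python sketch shows how Equations (5) and (6) are typically applied to MD output. The Stokes-Einstein form η = K B T/(λD) is inferred from the description of λ in the text and should be treated as an assumption.

import numpy as np

K_B = 1.380649e-23     # Boltzmann constant, J/K
LAM = 2.8e-10          # step size lambda = 2*R_O = 2.8 angstrom, in m

def self_diffusion(time_s, msd_m2):
    """Eq. (5): D is one sixth of the long-time slope of the MSD curve;
    here the second half of the trajectory is fitted as the linear regime."""
    half = len(time_s) // 2
    slope, _ = np.polyfit(time_s[half:], msd_m2[half:], 1)
    return slope / 6.0                      # m^2/s

def shear_viscosity(diffusion, temperature):
    """Eq. (6), assumed Stokes-Einstein form: eta = K_B*T / (lambda*D)."""
    return K_B * temperature / (LAM * diffusion)   # Pa*s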
Experimental Method
Based on the mole fractions of each sample in Table 2, the composition of the experimental slag was obtained by mass conversion; the results are shown in Table 3. The reagents used in our experiments were all obtained from a specialist chemical reagent company in Chongqing, China. The purity of the reagents (CaO, SiO 2 , CaF 2 , and P 2 O 5 ) was above 99.5 wt.%. The weighed sample powder was well mixed and placed in a graphite crucible before the viscosity was measured. The viscosity was measured using the rotating cylinder method. The viscometer was calibrated at room temperature using an oil of known viscosity prior to the experiment. Approximately 250 g of each slag sample was placed into a graphite crucible to melt, with an average heating rate of 5 • C/min. Since P 2 O 5 has a low boiling point, it is prone to volatilization and produces white smoke at high temperatures. Consequently, the other three components were first added into the crucible and heated to 1500 • C for 20 min. After the sample was fully melted, P 2 O 5 was added, and the crucible was covered to prevent volatilization. The crucible was then opened after 5 min; if there was no obvious white smoke, it was reheated to 1600 • C and held for 20 min to homogenize the chemical composition. Finally, the viscosity of each sample was determined from an average of 60 consecutive measurements.
Local Structural Characteristics
The local structural information of melts can be obtained preliminarily from the RDF and CN. Taking G2 as an example, Figure 2a,b show the RDF and CN distributions in the CaO-SiO 2 -CaF 2 -P 2 O 5 system at 1600 • C with a CaF 2 content of 5%. From the RDF curves, the average bond length of each atom pair in the melt can be deduced. As can be seen from Figure 2a, the average bond lengths of Ca-O, Si-O, P-O, and Ca-F were 2.31, 1.62, 1.50, and 2.30 Å, respectively. The results are in good agreement with previous MD simulations and experiments [9][10][11]27]. Table 4 shows the variation of the bond lengths of the various atom pairs from G1 to G5. In general, a strong and sharp peak in the RDF curve indicates a stable, well-defined bond; similarly, for CN curves, a broad flat plateau implies high stability of the corresponding polyhedron. It can be observed in Figure 2a that both the Si-O and P-O curves have a sharp peak, meaning that Si and P tend to combine with O atoms and form stable structures. From Table 4, the average Si-O bond length remained constant with increasing CaF 2 content in the melt, demonstrating that the Si-O bond is particularly stable and unaffected by CaF 2 . However, the P-O bond became longer, confirming that the P-O structure is affected by CaF 2 and indicating a decrease in the strength of the P-O bond, which may lead to an evolution of the phosphate melt structure. Furthermore, the P-F bond appeared in the system due to the addition of CaF 2 . Interestingly, the RDF curve of the P-F bond has an unusually sharp peak, and the P-F bond length did not change significantly with CaF 2 content, indicating that the P-F bond is considerably more stable than the P-O bond, which has not been reported before. In addition, the Ca-O and Ca-F bond lengths increased slightly, indicating that these bonds became more loosely bound with the addition of CaF 2 .
As can be seen from Figure 2b, CN Si-O , CN P-O , and CN Ca-O were 4.04, 3.87, and 5.51, respectively. The plateau of the CN Si-O curve is smoother than that of the CN P-O curve, indicating that the stability of the Si-O structure is higher than that of P-O in the CaO-SiO 2 -CaF 2 -P 2 O 5 system. Since Ca 2+ is typically present as a network modifier, CN Ca-O exhibits a sloping plateau, meaning that no stable structure is formed between Ca and O, which is consistent with previous studies on slag or glassy structures containing CaO [10,[28][29][30]]. Additionally, it is worth noting that CN P-F has an extremely flat plateau between 0 and 1, which means that F − and P 5+ have a strong coordination tendency. Wang et al. [31] introduced CaF 2 into CaO-SiO 2 -Al 2 O 3 slag systems and found that F − has a strong tendency to replace an O in the [AlO 4 ] structure to form an Al-F bond; they attributed this phenomenon to the difference in electronegativity between F − and O 2− . Therefore, the addition of CaF 2 causes a shift in the original structure of the melts, especially in phosphate systems. The tendency of F − to coordinate with P 5+ is so strong that it may compete with O 2− , leading to a large-scale transformation of the P-O structure; this trend may be more significant under high-temperature conditions. Besides, the CN curves of all atom pairs except Si-O, P-O, and P-F lack a clear plateau, suggesting that these pairs do not typically form stable structures, and they are therefore not discussed in detail here. Figure 2c,d show the changes in the Si-O and P-O coordination numbers as the CaF 2 content in the slag increases from 0% to 20%. At 0% CaF 2 content, the coordination numbers of Si-O and P-O were both close to 4.0, indicating that most Si and P exist in four-fold coordination, conforming to the tetrahedral form. From Figure 2c, the Si-O coordination number changed little over the CaF 2 mole-fraction range of 0-20%, remaining around 4.0. Owing to the high stability of [SiO 4 ], it is difficult for F − to break through the Si-O bond energy barrier to coordinate with Si 4+ , as also discussed in previous studies [32,33]. However, as the CaF 2 content increased from 0% to 20%, the coordination plateau of P-O became increasingly tortuous and the average coordination number decreased, taking values of 4.05, 3.87, 3.74, 3.66, and 3.57. Moreover, it can be seen from the bond-length variation that the P-O bond length kept increasing, indicating that its stability decreased; in contrast, the P-F bond length was much smaller than the P-O bond length. All indications show that the affinity between P 5+ and F − is greater than that between P 5+ and O 2− , which confirms that CaF 2 affects the coordination of P-O and alters the original [PO 4 ] structure. Figure 3 shows the coordination distributions of Si-O and P-O, with superscripts indicating coordination numbers. The content of Si IV was always above 95%, showing that [SiO 4 ] is the main structural unit in the silicate system and that its content did not alter significantly with increasing CaF 2 content, consistent with the findings of Fan et al. [34]. From Figure 3b, when the CaF 2 content was 0, virtually all P-O in G1 appeared in a four-coordinated structure, indicating that the majority of P exists in the slag in the form of the [PO 4 ] structure, which serves as the basic structural unit of the phosphate system.
However, as the CaF 2 content increased, the P IV content decreased and P V gradually disappeared, while the P III content continued to increase. Therefore, in contrast to the silicate system, the structural units of the phosphate system clearly change with increasing CaF 2 content, and new structures may emerge as the coordination number of P-O gradually evolves from high to low. The gradual increase in three-coordinated P, combined with the P-F coordination in Figure 2b, indicates that the addition of CaF 2 prompted F − to replace O 2− , producing a structural transition from [PO 4 ] to [PO 3 F]. A similar phenomenon appeared in the studies of phosphate glass structure by Rao et al. [35] and Touré et al. [36]. However, in the work of Pedone et al. [14], no P-F/Cl bonds were found at room temperature. It may be that the particles become more active and diffuse more readily at high temperature than at room temperature, which provides favorable thermodynamic and kinetic conditions for bonding between P 5+ and F − .
Distribution of Bond Angles
The distribution of bond angles is also a critical parameter for characterizing the structure of the melt. Figure 4 shows the bond-angle distributions. Although F − was incorporated into the [PO 4 ] tetrahedron, it did not affect some structural characteristics of the original P-O bond, and the network structure with Si 4+ and P 5+ as its core still maintained the tetrahedral form. Moreover, CaF 2 did not appear to cause large-scale rearrangements of the atoms in the whole system, which consisted of a polymerized tetrahedral structure of Si 4+ , P 5+ , F − , and O 2− , with network modifiers such as Ca 2+ dispersed among it.
Structural Unit Evolution
The silicate and phosphate systems mainly consist of a network structure in which O atoms are connected to Si and P atoms. There are three distinct types of oxygen: free oxygen (O f ), non-bridging oxygen (O nb ), and bridging oxygen (O b ).
Additionally, a unique tri-coordinated oxygen structure has been found in aluminate systems according to the literature [7]. Bridging oxygen connects two tetrahedra, as in Si-O-Si, Si-O-P, and P-O-P, and increases the degree of polymerization of the system. Non-bridging oxygen is attached to only one tetrahedron, namely as O-Si or O-P, with the other end attached to a metallic cation; it functions in the opposite way to bridging oxygen. Free oxygen is not connected to any tetrahedron. The cutoff radii of Si-O and P-O were chosen as 2.3 and 2.5 Å, respectively, and the distribution of the various oxygen types in the melts is collected in Figure 5a. With increasing CaF 2 content, the amount of free oxygen in the melts increased slightly, while the numbers of bridging and non-bridging oxygen showed no clear trend and remained approximately in dynamic equilibrium. As the CaF 2 content increased, Si-O-Si decreased (to 20.0%), Si-O-P increased from 13.0% to 16.7%, and P-O-P increased from 0.7% to 4.8%. This shows that CaF 2 is beneficial in disrupting Si-O-Si linkages and loosening the initially polymerized silicate network structure. The Si-O-P content in the system increased; that is, the addition of CaF 2 promoted the connection between [SiO 4 ] and [PO 4 ] or [PO 3 F], so that a silicophosphate composite structure was more easily established in the system. Moreover, the increase in P-O-P also indicates that the connectivity of the phosphate network structure became higher, making the phosphate melt structure more complex.
To further quantitatively analyze the influence of CaF 2 on the network structure of the system, Q n was introduced to characterize the degree of polymerization of the silicate and phosphate systems, where n represents the number of bridging oxygens (O b ) in a single tetrahedral unit. The current results show that Q n can be classified into five types, Q 0 , Q 1 , Q 2 , Q 3 , and Q 4 , indicating that 0, 1, 2, 3, or 4 O b are connected to a tetrahedral unit. Figure 5c,d show the distribution of Q n in the silicate and phosphate systems, respectively. As the CaF 2 content increased, Q 0 and Q 1 in the silicate system increased, while Q 2 , Q 3 , and Q 4 decreased, again confirming that CaF 2 breaks the high connectivity between [SiO 4 ] tetrahedra and simplifies the structure of the silicate system. Besides, only Q 0 and Q 1 structures originally existed in the phosphate system, indicating that [PO 4 ] normally exists as a single tetrahedron or in pairs, which is consistent with the results of Fan et al. [11]. However, as the CaF 2 content increased, Q 0 rapidly decreased and Q 1 increased; in addition, some Q 2 and Q 3 structures appeared and continued to increase. The results indicate that the original phosphate structure was not complicated and the connectivity between [PO 4 ] tetrahedra was low; however, the addition of CaF 2 reduced the number of single [PO 4 ] tetrahedral units, and the resulting [PO 3 F] structures tended to form chain or network composite structures, increasing the connectivity of the phosphate network. Macroscopically, higher connectivity is beneficial to the enrichment of the P element; in other words, CaF 2 can enrich the phosphate network, which is favorable for dephosphorization.
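A possible implementation of the oxygen-speciation and Q n statistics described above is sketched below in Python for a single species of network former T (Si or P), using a single cutoff radius per pair as given in the text. The authors counted this information with a MATLAB program; this routine is an illustrative reimplementation under the stated assumptions, not their code.

import numpy as np

def ot_links(o_pos, t_pos, box_length, r_cut):
    """Boolean O-T bond matrix: links[k, m] is True when oxygen k lies
    within the cutoff radius r_cut of network former m (Si: 2.3 A, P: 2.5 A)."""
    delta = o_pos[:, None, :] - t_pos[None, :, :]
    delta -= box_length * np.round(delta / box_length)   # minimum image
    dist = np.sqrt((delta ** 2).sum(axis=-1))
    return dist < r_cut

def oxygen_speciation(links):
    """Counts of free (O_f), non-bridging (O_nb) and bridging (O_b) oxygen."""
    n = links.sum(axis=1)          # number of tetrahedra bonded to each oxygen
    return int((n == 0).sum()), int((n == 1).sum()), int((n >= 2).sum())

def qn_distribution(links):
    """Q^n statistics: for each tetrahedron, count its bridging oxygens
    (oxygens shared with at least one other tetrahedron)."""
    bridging = links.sum(axis=1) >= 2
    qn = links[bridging].sum(axis=0)          # bridging-O count per tetrahedron
    return np.bincount(qn, minlength=5)[:5]   # occurrences of Q0 ... Q4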
Transport Properties and Viscosity
The above results indicate that increasing the CaF 2 content simplifies the structure of the silicate system in CaO-SiO 2 -CaF 2 -P 2 O 5 melts but complicates the structure of the phosphate system. Therefore, to further understand the effect of CaF 2 on the degree of polymerization of the whole melt and to assess the changes in macroscopic properties, it is necessary to quantitatively analyze the transport properties of the system. Liquid molecules do not stay in a fixed position but are constantly moving [37]. The self-diffusion coefficient is an important parameter that reflects the diffusivity of the particles in the melt. As shown in Figure 6a, based on the MSD function and the Einstein relation, the self-diffusion coefficients of the different ions can be obtained. It can be seen that the order of the self-diffusion coefficients was F − > Ca 2+ > O 2− ≈ P 5+ > Si 4+ , and all of them increased with the CaF 2 content, indicating that the addition of CaF 2 makes each ion more active. This mainly results from the depolymerization of the network structure in the melt by CaF 2 , which lowers the energy barrier for ion migration in the melt and enhances the mobility of each particle. In addition, these phenomena can also lead to changes in the macroscopic properties of the melts. It is worth noting that the diffusion capacity of F − in the melts was the most prominent and much larger than that of O 2− , indicating that the substitution of F − for O 2− improves the overall mobility of the phosphate structural units. Moreover, the diffusion coefficients of P 5+ and O 2− were comparable, which means that P and O always maintained the stable [PO 4 ] or [PO 3 F] structure and diffused cooperatively throughout the melts. The melt viscosity was calculated from the self-diffusion coefficients of the ions and compared with the experimental measurements. The MD simulation and experimental results in Figure 6b both show that, with increasing CaF 2 content, the viscosity of the CaO-SiO 2 -CaF 2 -P 2 O 5 system decreases, improving the fluidity of the melt. Clearly, the viscosity, which reflects the viscous resistance of the melt during flow and depends strongly on the degree of polymerization, is reduced in a melt with simple structural units. Besides, the NPL model [38] and the Pal model [39] were also used for comparison with the calculated results. Although there were some errors between the calculated viscosity values and the experimentally measured ones, the trends were in excellent agreement, which indicates that the MD simulations can predict the viscosity of the system with reasonable accuracy and reflects the reliability of the MD simulations. The predictions of both models differed significantly from the experimental data owing to discrepancies in some of the components. Consequently, the MD viscosity calculations agree better with the experimental results than either model.
In the steel-making dephosphorization process, P is usually enriched in the 2CaO·SiO 2 -3CaO·P 2 O 5 (C 2 S-C 3 P) solid solution [40][41][42]. Dephosphorization depends on the concentration of phosphorus in the solid solution, and the flow properties of the dephosphorization product in the slag also determine whether phosphorus can be efficiently removed with the slag. As can be seen from the above analysis, the introduction of CaF 2 directly changes the basic structural units of the phosphate melt, making the phosphate network units easier to enrich. On the other hand, CaF 2 reduces the viscosity and improves the fluidity of the slag, so that the P-enriched dephosphorization products can be better transported to and removed with the slag layer. CaF 2 is thus favorable for dephosphorization both from the point of view of the microscopic reactions in the slag and from that of the macroscopic flow properties. Our study links the microscopic to the macroscopic and clarifies the critical role of CaF 2 in the dephosphorization of slag.
Correlation between Viscosity and Structural Properties
The viscosity of the slag depends on its degree of polymerization. Researchers have proposed two common approaches to describe the complexity of melts. The first amounts to counting the number of non-bridging oxygen atoms, denoted NBO, based on the results of the molecular dynamics simulations; the parameter NBO/T, which reflects the degree of polymerization of the melt, is then obtained by dividing by the number of network formers, T (Si or P), in the system [43]. The larger the NBO/T, the higher the proportion of non-bridging oxygen in the melt, that is, the simpler the melt structure, and the viscosity and other parameters of the melt change accordingly. The second is to judge the complexity of the melt according to the evolution of the total Q n ; it is usually expressed as the ratio of high-complexity Q n to low-complexity Q n , for example DOP = (Q 3 + Q 4 + Q 5 )/(Q 0 + Q 1 + Q 2 ) [44]. The higher the DOP, the more complex the system. Comparing the calculated melt viscosity with the above two parameters, we observed a clear correspondence, as shown in Figure 7.
It can be observed in Figure 7a that NBO/T increases with increasing CaF 2 content, while DOP shows the opposite trend. The results show that CaF 2 can effectively reduce the complexity of the system, and the variation of both quantities corresponds well with the trend of the viscosity, suggesting that the melt viscosity is directly related to the complexity of the system. Specifically, the introduction of CaF 2 simplifies some complex network units formed by interweaving [SiO 4 ], [PO 4 ], and [PO 3 F] structures in the whole melt and forms simple structures such as single units or chains, greatly reducing the connectivity of the whole melt. Furthermore, the complexity of the slag structure depends on the competing effects of silicates and phosphates on the polymerization of the molten slag. In the CaO-SiO 2 -CaF 2 -P 2 O 5 system, CaF 2 promotes the disaggregation of complex network units into small units, making the diffusion of the micro-particles easier; the macroscopic manifestation of this phenomenon is a reduction of the total viscosity. In Figure 7b, the relationship between the viscosity and the above two parameters was obtained by linear fitting: for viscosity and DOP, y = 0.5728x − 0.0167 with R 2 = 0.9821, and for viscosity and NBO/T, y = −0.3130x + 1.1755 with R 2 = 0.9712. The correlation coefficients of both fits are high, so the relationship between the viscosity and the microstructure of CaO-SiO 2 -CaF 2 -P 2 O 5 melts is accurately described, and the viscosity can in turn be predicted from the microstructure of the system.
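The two reported linear correlations can be used directly as simple predictors of the melt viscosity (in Pa·s) from the structural parameters; the coefficients below are exactly those quoted above, and the functions are provided only as a convenience.

def viscosity_from_dop(dop):
    """Fitted correlation from the text: eta = 0.5728*DOP - 0.0167 (R^2 = 0.9821)."""
    return 0.5728 * dop - 0.0167

def viscosity_from_nbo_t(nbo_t):
    """Fitted correlation from the text: eta = -0.3130*(NBO/T) + 1.1755 (R^2 = 0.9712)."""
    return -0.3130 * nbo_t + 1.1755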
Conclusions
We have presented the microstructural information of CaO-SiO 2 -CaF 2 -P 2 O 5 melts at 1600 • C obtained by MD simulation and explored the evolution of each structural unit with increasing CaF 2 content. Combined with the analysis of microscopic particle transport and macroscopic flow properties, the crucial role played by CaF 2 in phosphosilicate melts has been clarified.
By analyzing the distributions of the coordination numbers and bond angles between different atoms, we found that both Si 4+ and P 5+ are present in tetrahedral form in the molten CaO-SiO 2 -CaF 2 -P 2 O 5 system. The coordination number of Si-O remained at around 4.0 as the CaF 2 content increased from 0% to 20%, while the coordination number of P-O decreased from 4.05 to 3.57. Therefore, CaF 2 has little effect on the structure of [SiO 4 ] but decreases the stability of the [PO 4 ] structure. Specifically, F − tends to replace O 2− and promote the transformation of [PO 4 ] into a [PO 3 F] structure, while also helping the phosphorus-enriched phosphate network structure become more aggregated. However, the addition of CaF 2 does not lead to a large-scale rearrangement of the atoms in the whole system, and the network structure with Si 4+ and P 5+ as its core remains tetrahedral.
The results of the MD simulations and experiments show that CaF 2 is beneficial for reducing the degree of polymerization of the melt and thereby reducing the melt viscosity, which decreased from 0.39 to 0.13 Pa·s as the CaF 2 content increased from 0% to 20%; the viscosity also showed a good linear relationship with the structural parameters. In summary, CaF 2 is beneficial for dephosphorization in slag both from the microscopic-reaction point of view and from the macroscopic flow properties.
Institutional Review Board Statement:
The study did not require ethical approval. | 2022-11-12T16:02:56.204Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "4ed9b43193ff577ab5a4e5416c0432aa954e93a3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/22/7916/pdf?version=1667989006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2dcbfe3ff92398f9626cd1dbd5e21cb3a588e844",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
209167174 | pes2o/s2orc | v3-fos-license | Selection of the Best Model of Distribution of Measurement Points in Contact Coordinate Measurements of Free-Form Surfaces of Products
The article presents a new method for determining the distribution of measurement points, which can be used in the case of contact coordinate measurements of curvilinear surfaces of products. The developed method is based on multi-criteria decision analysis. In the case of the new method, the selection of the distribution of measurement points on free-form surfaces is carried out based on the analysis of five different criteria. The selection of the best model of the distribution of measurement points results from the accuracy of coordinate measurements, the time needed to complete measurement tasks, the number of measurement points, the accuracy of the substitute surface representing the measured free-form surface and the area where measurement points are located. The purpose of using the developed method of the distribution of measurement points is to increase the performance of coordinate measurements primarily by increasing the automation of strategy planning of measurements of curvilinear surfaces and improving the accuracy of measurements of free-form surfaces of products. The new method takes into account various aspects of coordinate measurements to determine the final model of the distribution of measurement points on measured surfaces of products, thereby increasing the probability of the proper determination (i.e., identifying the highest deviations of a product) of the location of measurement points on the surfaces of a measured object. The paper presents an example of the application of the created method, which concerns the selection of the best model of the distribution of measurement points on a selected free-form surface. This example is based on, among others, the results of experimental investigations, which were carried out by using the ACCURA II coordinate measuring machine equipped with the VAST XXT measuring probe and the Calypso measurement software. The results of investigations indicate a significant reduction in time of coordinate measurements of products when using the new method for determining the distribution of measurement points. However, shortening the time of coordinate measurements does not reduce their accuracy.
Introduction
The coordinate measuring technique is widely applied under industrial conditions. Various coordinate measuring systems are presently used by many measurement laboratories and by companies representing various branches of industry. A coordinate measuring machine (CMM) is still a very popular measuring system. CMMs may work in two main modes (i.e., the contact and non-contact modes), which are enabled by different types of measuring probes [1,2]. The user of a CMM must take many factors into account to properly create a measurement program, which controls the work of a coordinate measuring machine. One of the factors with a significant influence on the accuracy of contact coordinate measurements is the planning of the measurement strategy, including methods in which measurement points are distributed non-uniformly. The following paragraphs of this section present examples of methods for determining the locations of measurement points on the surfaces of measured objects.
A method of the distribution of measurement points on curvilinear surfaces of products is presented, e.g., in the work by Mingrang et al. [11]. Their point-localization algorithm is based on a form error model, which was created by adding deviations to a nominal model of an investigated product. The proposed method was compared to two commonly applied methods of selecting the locations of measurement points on measured free-form surfaces: the uniform distribution of points and the method based on the curvature of a product. The presented method achieved the best results in terms of the number of measurement points used and the time of measurements.
Three methods of the distribution of points were analyzed by Lalehpour et al. [12]. The points were distributed on selected surfaces. One of the considered methods was the random distribution of measurement points. The investigations were performed for different numbers of points and by using virtual sampling. The analyses were conducted to select the best method of the distribution of measurement points, i.e., the one using the smallest number of measurement points that still accurately represents the investigated surface.
ElKott et al. [13] presented four methods of distributing measurement points, which are based on, e.g., the uniform distribution, the areas of surface patches and the curvature of a patch.
The paper by ElKott and Veldhuis [14] and the work by ElKott [15] present two different methods of positioning measurement points on surfaces measured by a CMM. The developed methods are as follows: the method based on deviations calculated between a nominal model and a substitute model and the method taking into account the curvature of a considered surface.
New strategies of finding the locations of measurement points were also proposed by Rajamohan et al. [16,17]. They are based on the lengths of investigated curves and the areas of measured surfaces. Moreover, the authors applied dominant points, at which the maximum local curvatures occur.
The results of investigations regarding the selection of measurement points in the case of coordinate measurements of a blade were presented by Jiang et al. [18]. Three methods of the distribution of points were compared and the chordal deviation method obtained the best results.
The literature concerning the coordinate measuring technique lacks automated methods for selecting the location of measurement points on measured surfaces of products that could be directly implemented in commercial metrological software cooperating with, e.g., coordinate measuring machines equipped with contact measuring probes. The developed method of defining the distribution of measurement points, which is shown in this paper, can be easily implemented in commercial measurement software. This implementation contributes to increasing the chances of using the new method under industrial conditions.
The following sections of the paper concern, e.g., the presentation of the proposed method of the distribution of measurement points, the applied structure of the analytic hierarchy process, the explanation of considered criteria used during the AHP analysis, the results of performed research and the conclusions regarding the selection of the best model of the distribution of measurement points.
Proposed Method of the Distribution of Measurement Points
The proposed method of determining the location of measurement points on free-form surfaces of measured objects is dedicated primarily to coordinate measurements of a series of manufactured products. Its fundamental advantage, shortening the time of coordinate measurements, becomes visible when it is necessary to measure a whole series of machined products that do not differ in terms of their nominal shapes. The time saved grows with the number of measured objects. The new method requires the use of a so-called reference model of the distribution of measurement points, which should be used for the coordinate measurements of the first object in a series of manufactured products. The created method of the distribution of points consists of three stages. They are presented in the first part of Figure 1.
Moreover, Figure 1 presents the steps of coordinate measurements of a series of investigated products when applying the developed method. The initial stage of the proposed method for defining the location of measurement points is the coordinate measurement of the first workpiece from a given series of products by using the mentioned reference distribution of measurement points. The reference distribution should include the largest possible number of measurement points, which may lead to the highest possible measurement accuracy. A large number of measurement points increases the chances of identifying real form deviations of a measured object.
In the next stage, so-called modified distributions of measurement points, including a smaller number of points compared to the reference distribution, are created. They are formed based on the reference distribution of measurement points. A smaller number of measurement points, especially in the case of contact coordinate measurements, leads to a reduction in the time of measurements, which may contribute to increasing the efficiency of the entire manufacturing process of a given product. Modified distributions can be created in a random manner, by randomly eliminating measurement points included in the so-called reference distribution (see the sketch below). In the second stage, the first object from a given series of manufactured products should also be measured by using the analyzed modified distributions of measurement points.
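The random elimination used to form the modified distributions can be sketched in a few lines. The example below is hypothetical: the reference point cloud and the point counts are placeholders, and the only operation shown is drawing a subset of the reference points without replacement.

```python
# Minimal sketch of forming a modified distribution by randomly
# eliminating points from the reference distribution (hypothetical data).
import numpy as np

rng = np.random.default_rng(42)
reference = rng.uniform(0.0, 125.0, size=(520, 3))  # placeholder (x, y, z) cloud

def modified_distribution(reference_points, n_keep, rng):
    """Keep `n_keep` points drawn without replacement from the reference."""
    idx = rng.choice(len(reference_points), size=n_keep, replace=False)
    return reference_points[np.sort(idx)]

model = modified_distribution(reference, n_keep=60, rng=rng)
print(model.shape)  # (60, 3)
```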
The third stage of the created method of defining the distribution of points concerns performing the AHP analysis based on the adopted criteria. The purpose of the multi-criteria analysis is to choose the best distribution of measurement points from the group of analyzed models of the location of measurement points, which were modified in relation to the reference distribution and which lie on a measured free-form surface of a product.
After conducting the AHP analysis, the rest of objects from analyzed series of products should be measured by means of a selected modified distribution of measurement points, as shown in the second part of Figure 1. The modified distribution includes a smaller number of points compared to the reference distribution. This, in turn, reduces the time of contact coordinate measurements of curvilinear surfaces of products. Speeding up a measurement task by using a smaller number of measurement points is mainly visible when measurements are carried out in the single-point probing mode. This mode of coordinate measurements is still widely used in the industrial practice, e.g., during measurements carried out with the use of coordinate measuring machines or CNC machine tools.
AHP Hierarchical Structure
The most important stage of the proposed new method of defining the distribution of measurement points on measured curvilinear surfaces of objects is the third one, which assumes the application of the multi-criteria analysis of the considered modified models of the distribution of measurement points located on measured free-form surfaces.
The goal of the AHP analysis was the selection of the best model of the distribution of measurement points lying on a free-form surface. To select the most appropriate distribution of measurement points, the following five criteria were used: the form deviation of a measured surface, the time of coordinate measurements, the number of measurement points, the accuracy of the 3D substitute models representing the models of the distribution of measurement points, and the areas of these substitute models.
The general view of the AHP hierarchical structure, which was used during performed investigations for the multi-criteria prioritization process in order to select the best model of the distribution of measurement points located on a free-form surface of a measured object, is presented in Figure 2. The first criterion used during the AHP analysis is associated with the form deviations calculated by using the considered models of the distribution of measurement points located on a free-form surface. The form deviations obtained by means of the analyzed models of the distribution were compared to the reference form deviation measured with the use of a coordinate measuring machine and the reference distribution of points which includes the largest number of uniformly distributed measurement points. The first considered criterion helps the user of a coordinate measuring system to assess whether the applied distribution of measurement points can detect the highest values of form deviations of measured curvilinear surfaces of objects. In the case of the first criterion, the form deviation measured by using the best model of the distribution should be as close as possible to the reference deviation.
The second criterion applied during the prioritization process of the models of the distribution of measurement points is related to the time of coordinate measurements performed by using CMMs.
The time of measurements in the single-point probing mode depends on the number of measurement points and the number of scanning lines along which the points are located when measuring a free-form surface. The time of coordinate measurements is one of the factors influencing the efficiency of measurements. The users of coordinate measuring machines must decide how to conduct measurements to obtain the highest possible accuracy of coordinate measurements in a relatively short time. The time of measurements is part of the entire time of the production process of a product. Therefore, for the second criterion, the most favorable distribution of points is provided by the model for which the measurement time is the shortest.
The next applied criterion is the number of measurement points located on a curvilinear surface of a measured object. The accuracy of contact coordinate measurements is directly connected with the amount of data representing a measured product. A large number of measurement points increases the probability of measuring the highest of the existing form deviations characterizing the quality of a measured product. Therefore, the model with the largest number of measurement points is the best one. However, if coordinate measurements are conducted in the single-point probing mode, the number of measurement points cannot be too large, in order not to increase the time of coordinate measurements excessively.
The fourth applied criterion is related to the accuracy of a substitute model of a measured free-form surface. The substitute models are fitted to the groups of measurement points being the parts of the considered models of the distribution of measurement points. The accuracy of the substitute models must be analyzed to check how the considered models of the distribution represent a measured free-form surface taking into consideration the geometrical complexity of curvilinear surfaces. Moreover, substitute models of measured curvilinear surfaces were successfully used by many researchers [13][14][15] in order to distribute measurement points on measured surfaces of an object, which makes such an approach to planning the strategy of contact coordinate measurements very popular in the coordinate measuring technique. In the case of the proposed AHP analysis and the fourth criterion, the best model of the distribution is the one which generates a substitute surface characterized by the highest accuracy.
The last of the analyzed criteria is the area of the substitute model representing measurement points distributed on an analyzed curvilinear surface. In the coordinate measuring technique, it is always better to spread measurement points on the largest possible fragments of measured free-form surfaces of an investigated object. This may increase the chances of detecting the highest form deviations of free-form surfaces, which are planned to be measured by using CMMs. For the last criterion, the model of the distribution of measurement points located on the largest possible area of a measured surface is the best one.
In the case of the proposed method of defining the distribution of measurement points on free-form surfaces of measured products, the selection of the best model of the distribution is carried out on the basis of the above-mentioned criteria of the AHP analysis. The models of the distribution assume different numbers and positions of points located on measured curvilinear surfaces. To present the possibilities of the method of determining the distribution of measurement points, both simulation and experimental investigations were carried out. Five different models of the distribution were taken into account when conducting the research. They are presented in detail in Section 4.2.
Simulation and Experimental Investigations
The conducted numerical and experimental investigations concern three stages of the created method of defining the distribution of measurement points on free-form surfaces of measured products. The analyzed stages are aimed at selecting the best distribution of measurement points among the considered models of the distribution. The best distribution of measurement points should ensure fast and accurate coordinate measurements of analyzed products. The purpose of the research was to verify the possibility of performing coordinate measurements of a series of manufactured products by using a measurement strategy, which includes a smaller number of points compared to the reference distribution of measurement points.
Analyzed Curvilinear Surface
The investigations were carried out for the selected free-form surface shown in Figure 3. Its 3D model was created by using the CATIA V5-6 software and then imported into the Calypso metrology software of the Carl Zeiss company. In the Calypso software, the reference distribution of measurement points was declared. It includes the largest number of points, and thus it provides the highest probability of measuring the actual value of a form deviation of the analyzed curvilinear surface. The real object containing the analyzed free-form surface was made of an aluminum alloy by using the DMU 100 monoBLOCK CNC machine tool. The dimensions of the bottom surface of the machined product were around 125 × 127 mm. The considered free-form surface of the product was a theoretical surface used to present the possibilities of the proposed method for defining the distribution of measurement points. Therefore, no technical documentation, including, e.g., geometrical tolerances, was available for this investigated surface; the values of tolerances were not needed for the presentation of the proposed method. Such documentation is, of course, necessary later, after the developed method has been applied and the measurement points selected, when the accuracy of the product is to be assessed.
The Considered Models of the Distribution of Measurement Points
The best distribution of measurement points located on the selected free-form surface was chosen from a group of five different models of the distribution of measurement points. The models were randomly selected by an operator of a coordinate measuring machine. Figure 4 illustrates the considered models of the distribution. The models, similarly to the reference distribution of measurement points, were created by using the Calypso inspection software. The models were prepared by modifying the reference distribution of points presented in Figure 3. The modifications involved changing the number of scanning lines containing measurement points distributed on the investigated free-form surface. The use of models based on measurement points arranged along selected curves representing the analyzed curvilinear surface results from the scanning measuring probe applied during the experimental research. However, the proposed method of determining the location of measurement points can also be used for other distributions of points (not only those lying on selected curves) measured by using, e.g., the single-point probing mode. Moreover, Figure 4 presents the number of measurement points included in the considered models of the distribution of measurement points. The final number of models of the distribution of measurement points considered in the proposed method must be selected by the user of a coordinate measuring system.
Results of Simulation Investigations
The performed numerical research concerned two criteria of the AHP analysis. The first one was related to determining the accuracy of the 3D substitute surfaces created for the considered curvilinear surface and the analyzed models of the distribution of measurement points. The second one was associated with calculating the areas of the generated substitute surfaces. The substitute models of the considered free-form surface were created by using the Zeiss Reverse Engineering software. They were prepared by means of third-degree B-spline surfaces fitted to the measurement points belonging to the considered models of the distribution of points. Figure 5 presents examples of the substitute surfaces of the analyzed curvilinear surface, created based on the considered models of the distribution of measurement points. The boundaries of the presented substitute surfaces do not correspond to the boundaries of the nominal curvilinear surface, because the substitute models were created based on different groups of measurement points, which were shifted differently from the boundaries of the analyzed free-form surface.
After creating the substitute models of the analyzed free-form surface, the next step was to calculate their areas, which was done by using the appropriate functions of the CATIA V5-6 software. The calculated surface areas for the individual substitute models, created by means of the mentioned Zeiss Reverse Engineering software, are presented in Table 1. The numbers of the substitute models, presented in Table 1, correspond to the numbers of the individual models of the distribution of measurement points. The need to calculate the surface areas of the substitute models was the result of using the last of the five criteria of the AHP analysis.
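The two numerical steps above can be illustrated in code. The sketch below is only a stand-in for the actual toolchain (the paper uses Zeiss Reverse Engineering for the fit and CATIA V5-6 for the area): a third-degree bivariate B-spline is fitted with scipy to synthetic points, and the area is estimated by numerically integrating the surface-area element sqrt(1 + fx^2 + fy^2) over the fitted patch.

```python
# Hedged sketch, not the study's pipeline: fit a third-degree B-spline
# substitute surface to scattered points and estimate its area.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 125.0, 200)
y = rng.uniform(0.0, 127.0, 200)
z = 5.0 * np.sin(x / 20.0) * np.cos(y / 25.0)   # synthetic free-form surface

spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)  # third degree, as in the paper

# Area = double integral of sqrt(1 + fx^2 + fy^2) over the patch.
xs = np.linspace(x.min(), x.max(), 200)
ys = np.linspace(y.min(), y.max(), 200)
X, Y = np.meshgrid(xs, ys)
fx = spline.ev(X.ravel(), Y.ravel(), dx=1).reshape(X.shape)
fy = spline.ev(X.ravel(), Y.ravel(), dy=1).reshape(X.shape)
area = np.trapz(np.trapz(np.sqrt(1.0 + fx**2 + fy**2), xs, axis=1), ys)
print(f"estimated substitute-surface area: {area:.0f} mm^2")
```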
In the next step, the maximum deviations between the substitute models of the analyzed curvilinear surface and the geometrical model of the surface created on the basis of the reference model of the distribution of measurement points (containing the largest amount of measurement data) were calculated. The relative values of deviations calculated based on the maximum form deviations of the individual substitute models are presented in Table 2. The values presented in Table 2 were calculated by dividing the deviations of the models listed in the first row of the table by the deviations of the models listed in the first column.

Figure 5. The selected substitute models representing the measured free-form surface.

The results of the numerical investigations were necessary for conducting the third stage of the proposed method of defining the distribution of measurement points, i.e., to perform the AHP analysis aimed at selecting the best distribution of points on the measured curvilinear surface.
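A minimal sketch of how such a table of relative values can be formed (the deviation values below are hypothetical placeholders, not the study's results): each entry divides the maximum deviation of the column model by that of the row model.

```python
# Hypothetical maximum deviations of the five substitute models (mm).
import numpy as np

max_dev = np.array([0.21, 0.19, 0.24, 0.20, 0.23])
ratio = max_dev[np.newaxis, :] / max_dev[:, np.newaxis]  # [i, j] = dev_j / dev_i
print(np.round(ratio, 2))
```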
Results of Experimental Research
To perform the AHP analysis of the considered models of the distribution of measurement points, experimental investigations were also conducted. During the experimental research, the values of the form deviations and the times of coordinate measurements were registered. The values of these parameters, similarly to the results of the numerical research, were necessary to perform the AHP analysis. The experimental investigations were conducted by using the ACCURA II coordinate measuring machine (Carl Zeiss, Oberkochen, Germany) equipped with the VAST XXT measuring probe (Carl Zeiss, Oberkochen, Germany) and the Calypso software. In the first stage of the coordinate measurements, the reference form deviation was measured by using the reference distribution of measurement points presented in Figure 3. In the next step of the experimental investigations, the subsequent form deviations were obtained by means of the five considered models of the distribution of measurement points. All form deviations were calculated by means of the appropriate functions of the Calypso inspection software. Moreover, the times of the coordinate measurements of the mentioned deviations were registered. Figure 6 presents the coordinate measuring system applied during the experimental research and the investigated free-form surface.
The accuracy parameters of the applied coordinate measuring machine are as follows:
• E_L,MPE = (1.6 + L/333) µm - maximum permissible error of a length measurement (L is the measured length in mm);
• P_FTU,MPE = 1.7 µm - maximum permissible single-stylus form error;
• MPE_Tij = 2.5 µm - maximum permissible scanning error;
• MPT_τij = 50.0 s - maximum permissible scanning test duration.

The analysis of the results of the coordinate measurements was carried out without best-fitting the measurement results to the nominal data. The measurements were carried out in the scanning mode with a scanning speed of 10 mm/s, and the distance between measurement points was 2 mm. The results of the experimental studies, in the form of the measured deviations and the times of the individual measurements, are presented in Table 3. The time of the coordinate measurement carried out by using the reference distribution of measurement points was equal to 514 s. The large deviations from the reference model, presented in the second column of Table 3 and calculated for the first, third and fifth models of the distribution of measurement points, result from locating points in places of the analyzed curvilinear surface where the largest form deviations do not occur. The form deviations obtained during the coordinate measurements conducted by using the second and fourth models were very close to the deviation measured on the basis of the reference model of the distribution of measurement points, because their points lay in the areas characterized by the worst quality. The value of the reference deviation was equal to 0.1955 mm. In the case of the reference model, there is the highest probability of identifying the worst-made fragments of the measured free-form surface due to the large number of measurement points.
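As a small illustrative check of the length-measurement MPE formula quoted above (the 125 mm length is just an example value, roughly the size of the machined product):

```python
def e_l_mpe(length_mm: float) -> float:
    """Maximum permissible length-measurement error, in micrometres."""
    return 1.6 + length_mm / 333.0

print(f"{e_l_mpe(125.0):.2f} um")  # ~1.98 um for a 125 mm length
```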
The results of experimental research, similarly to the results of the simulation investigations, were used to select the best (i.e., accurate and fast) measurement strategy.
AHP Analysis of the Considered Models of the Distribution
The AHP analysis was performed to select the most appropriate model of the distribution of points for the coordinate measurements of the selected free-form surface. At the beginning, the criteria were compared in relation to the goal (i.e., the selection of the optimal distribution of measurement points) by using the nine-point scale. The pairwise comparisons were carried out by using the scale presented in Table 4 [19]. The results of the comparisons are presented in Table 5, and they derive from the experience of the user of a coordinate measuring system. The consolidated priorities regarding the considered criteria are presented in Table 6. The comparisons do not demonstrate inconsistency, as the consistency ratio (CR) is 8.9%, below the commonly accepted 10% threshold.

In the next step, the comparisons of the alternatives with respect to each criterion were made. To compare the alternatives, the data calculated by using the CATIA V5-6 and Calypso software packages were normalized to the range from 1 to 9. Tables 7-11 present the results of the comparisons of the models of the distribution. These comparisons, similarly to the comparison of the criteria, do not demonstrate inconsistency. The largest value of the CR parameter was identified for the area criterion, and it was equal to 2.1%.

Table 12 presents the order of the alternatives (i.e., the considered models of the distribution of measurement points), based on the weights calculated with respect to each criterion, after performing the AHP analysis. Moreover, Figure 7 presents the consolidated weights of the alternatives, derived by taking the criteria-based and alternative-based pairwise comparisons into consideration. Tables 5-12 result from the sequence of actions that should be carried out when using the Analytic Hierarchy Process in order to choose the best distribution of measurement points. Based on the AHP analysis, the best distribution of measurement points on the considered curvilinear surface, taking all analyzed criteria into account, is represented by the fourth model; models 1-3 and model 5 have lower priorities. Hence, model 4 should be used in practical coordinate measurements of a series of manufactured products. It is also acceptable to use the second model of the distribution of measurement points, which achieved a very similar result to the fourth model.
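The core AHP computations referred to above, i.e., deriving priority weights from a pairwise-comparison matrix via its principal eigenvector and checking the consistency ratio, can be reproduced with a short script. The sketch below is self-contained but hypothetical: the comparison matrix is an invented example, not the one elicited in the study.

```python
# Hedged AHP sketch: priority weights and consistency ratio (CR).
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # Saaty's random indices

def ahp_priorities(A):
    """Priority weights and CR for a reciprocal pairwise matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                          # normalized priority vector
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)     # consistency index
    return w, ci / RI[n]                  # CR = CI / RI

# Hypothetical comparison of the five criteria (reciprocal matrix):
A = np.array([
    [1,   3,   5,   3,   7],
    [1/3, 1,   3,   1,   5],
    [1/5, 1/3, 1,   1/3, 3],
    [1/3, 1,   3,   1,   5],
    [1/7, 1/5, 1/3, 1/5, 1],
])
w, cr = ahp_priorities(A)
print(np.round(w, 3), f"CR = {cr:.1%}")  # acceptable if CR < 10%
```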
The Time of Coordinate Measurements Carried Out by Using the New Method of the Distribution of Measurement Points
The significant benefit resulting from the application of the proposed method of defining the distribution of measurement points on free-form surfaces of products is immediately visible even for a small series of measured products (e.g., one composed of ten products). In the created method, the first object from a given series of products should be measured by using a reference distribution of measurement points (i.e., the first stage of the method) and with the use of the models of the distribution of measurement points generated by modifying the reference distribution (i.e., the second stage of the method). The measurement conducted by using the reference distribution should identify the true value of the form deviation of a measured product.
Assuming the measurement of the surface presented in Section 4.1 and the analysis of the five models of the distribution of measurement points presented in Section 4.2, the total measurement time of the first product from a series of ten measured products would be 885 s when using the new method of determining the distribution of measurement points. The calculated time is the sum of the times of the coordinate measurements of the first product performed by using the reference distribution and the five considered models of the distribution of measurement points.
Based on the AHP analysis, which is implemented in the third stage of the proposed method of defining the location of points, the fourth model of the distribution of measurement points should be used in the real coordinate measurements of the next nine objects included in the considered series of ten products. The application of the fourth model of the distribution of measurement points results in a measurement time for the considered free-form surface of a single product equal to 61 s. Therefore, for the series of ten products, the total time of their coordinate measurements would be 1434 s when applying the proposed method of defining the distribution of measurement points located on the surfaces of measured objects. This time is the sum of the total measurement time of the first product (i.e., 885 s) and the sum of the measurement times of the other nine items that are part of the given series of products and which should be measured by means of the fourth measurement point distribution model (i.e., 549 s).
Without the new method for determining the location of measurement points, all ten items of the given series of products would have to be measured by using the reference distribution of measurement points, which would result in a total measurement time of all ten products equal to 5140 s. This time is almost four times longer than the total time of the coordinate measurements of the ten workpieces when using the new method of defining the distribution of measurement points on curvilinear surfaces. A graphical representation of the benefit of using the new method of the distribution is shown in Figure 8. The figure compares the time of measurements when using the developed method with the time of measurements of all products conducted by means of the reference distribution.
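The timing arithmetic of this section can be reconstructed in a few lines from the values reported above (reference distribution: 514 s per product; selected model 4: 61 s per product; first product measured with the reference and all five candidate models: 885 s):

```python
t_reference = 514        # s, reference distribution per product
t_model4 = 61            # s, selected modified distribution per product
t_first_product = 885    # s, reference + five candidate models on product 1
n_products = 10

t_new_method = t_first_product + (n_products - 1) * t_model4
t_reference_only = n_products * t_reference
print(t_new_method)                                # 1434 s
print(t_reference_only)                            # 5140 s
print(round(t_reference_only / t_new_method, 1))   # ~3.6x, "almost four times"
```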
Conclusions
The article presents a new method for determining the location of measurement points on free-form surfaces of measured products. The results of the performed investigations and the analysis carried out by using the AHP method indicate the possibility of a significant reduction in the time of coordinate measurements of a series of products when using the new method for determining the distribution of measurement points located on measured free-form surfaces. The time reduction results from decreasing the number of measurement points for most products that are part of a given series of workpieces. Moreover, the smaller number of measurement points does not reduce the accuracy of coordinate measurements, which follows from taking various criteria of the AHP analysis into account, e.g., 'the time of coordinate measurements' and 'the form deviation of a measured surface', when determining the location of measurement points by using the proposed method.
The advantages of the proposed method of the distribution of measurement points on curvilinear surfaces are the increase in the level of automation of defining the position of measurement points and the possibility of implementing the developed method in commercial measurement software, so that it can be used in industry. The example of popular measurement software in which the new method can be relatively easily applied is the Calypso software package. The application of the created method of determining the location of measurement points in the Calypso inspection software is possible by means of its module of parametric programming of coordinate measurements-PCM (parameter coded measurements).
Further research should be aimed at testing the proposed method of determining the location of measurement points under real industrial conditions and at implementing the proposed method in commercial measurement software.
"year": 2019,
"sha1": "f0e40bb3a00b8af4059b7a54f243041cad3d5897",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/19/24/5346/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1b1aa22509ce219bffe8cea978891cb26b08526",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Project Management Office Strategies of Hosting Indonesia National Olympic Games (PON) XIX/2016 in West Java
With West Java about to host the National Olympic Games of Sport (PON) 2016, this study begins by problematizing the notion of sport multi-events and the benefits they are intended to bring to hosts. The study serves as a general introduction to the papers that follow in this special issue. It aims to explore the strategies of West Java as the host of the National Olympic Games XIX/2016 and uses the Project Management Office (PMO) approach to support strategy operationalization in improving the delivery of sport multi-events. As the research method, in-depth interview data were collected from several stakeholders related to sport events. The qualitative data were analysed to understand the impact of PMO strategies on the success of hosting the National Olympic Games 2016 in West Java. Thus, this study serves as a discussion of some of the key concepts in, and assumptions about, the use of PMO strategies to produce a legacy for the hosting province. From the data we collected, we conclude that the PMO was the most important key to hosting the National Olympic Games of Sport 2016.
I. INTRODUCTION
The holding of the National Sports Week (PON) is on the Indonesian government's agenda in the field of sports and takes place every four years [1]. The PON was first held in the city of Solo in 1948, followed by Medan in 1953 and Makassar in 1957, with the theme "safeguarding the independence, unity and integrity of the Indonesian nation." The subsequent PONs were held in Jakarta until 1996. In an effort to equalize sports development, the central government returned the administration of the PON to the provincial governments starting in 2000.
A. PON XV 2000 East Java
In 2000, East Java was appointed as the host of the XV PON, with the theme "Adhesive of the Unity of the Nation." In general, the implementation of the PON in East Java was well executed and successful. The construction of sports facilities was spread over 5 regencies/cities, with standard sporting achievements, nothing exceptional. The organization was still not optimal, from the opening through the implementation to the closing, and nothing stood out; however, the East Java hosts left a memorable closing event, and there were none of the problems that had occurred in the previous PONs.
B. PON XVI 2004 Palembang
In 2004, Palembang hosted the XVI PON with the theme "United Unity." All PON events were centred in the city of Palembang. Preparation and organization were considered weak, so the central KONI had to intervene. The PON had not yet shown any link with efforts to increase investment, trade and tourism. The promotion of events in the print and electronic media was still very minimal, so many people did not know when the PON would be held. In fact, some of the facilities and infrastructure needed for the biggest national sporting event were not ready. In conclusion, the implementation of the PON in Palembang did not satisfy all the contingents.
C. PON XVII in 2008 East Kalimantan
In 2008, East Kalimantan Province was chosen as the host of PON XVII, with the theme "Achieving Sturdy Achievement of Brotherhood." The competitions were carried out in several regencies/cities. Preparation and organization suffered from too many interventions by people who were not experts in sports, so many obstacles were faced at that time, especially the chaotic transportation services. Some areas were not involved in the 2008 PON XVII. There was a lack of PON advertising and media exposure, and some infrastructure constraints occurred.
D. PON XVIII 2012 Riau
The 2012 PON XVIII was held in Riau with the theme "PON Glow," focused on various areas in and around Pekanbaru, Riau. Preparation and organization were considered weak and slow, with insufficient preparation of facilities and infrastructure, so the obstacles faced at that time were more complex, including lodging and erratic transportation. The PON received little resonance in several regions and at the national level, in both the print and electronic media, there was a lack of PON advertising and media exposure, and several infrastructure constraints occurred. The implementation of successive PONs is very expensive when the organizing activities are held outside Jakarta, because large costs are required to build the infrastructure, transportation and accommodation, as well as to stage the ceremonial opening and closing events. The PON tends to be a "project" that is detrimental and beset by a variety of problems; it leaves debt and does not have a significant further impact on regional development, while athletes who have performed well and fought for their province tend to receive only promises. Facilities and infrastructure built with large investments tend to be underused after the PON ends, making them wasteful and a burden for the regional government.
In 2016, West Java was chosen as the host of PON XIX, with the theme "victorious in the land of legend." West Java prepared itself through the preparation period, the implementation period, and the period after the implementation [2]. The target of West Java as the host of PON XIX was to achieve a fourfold success at PON XIX in 2016, namely: (1) successful achievement, meaning the creation of young, highly talented athletes able to break various national and world records (according to Saefudin, West Java's success placed it at the top of the ranking with 28.5% of the total 756 gold medals contested by 34 provinces [3]); (2) successful implementation, meaning the success of the province of West Java as the host of the PON, expected to build its image as the organizer of the best and most professional multi-sport events; (3) successful economic empowerment of the community, meaning success in carrying out the 2016 PON XIX as an initial milestone in the renewal of the PON event in Indonesia so that it has economic value; and (4) administrative success, meaning the success of West Java in organizing the PON without financial-administration problems that would violate the law.
Expectations for the 2016 PON XIX came not only from West Java Province but also from various levels of society in other regions and provinces, hoping that the implementation of the XIX PON in 2016 could truly realize a better, more accountable and professional event and create a sparkling sporting event felt by all levels of society from various regions, especially West Java. As the host of the XIX PON, West Java began its preparations in 2010, especially regarding the facilities and infrastructure of the event, promoted through various print and electronic media.
There was particular hope among all levels of society in West Java Province that the 2016 PON XIX Grand Committee, from the Chairperson down to the Daily and National Committees, could truly synergize and commit to jointly making the PON XIX event a success. Therefore, in 2013 the West Java Provincial Government began to implement a strategy that had never been applied in the previous PONs: by appointing a Project Management Office (PMO), it hoped that the fourfold success launched by the West Java Provincial Government could be realized [4]. This success is expected to become a model for the implementation of the upcoming PONs. The National Olympic Games of Sport in Indonesia can be compared with the Olympic Games worldwide; as Agha notes, the International Olympic Committee (IOC) requires cities that bid for the Olympic Games to formulate a legacy strategy [5]. This case follows a sport professional tasked with developing an Olympic bid for their city. Specifically, the case considers various legacy outcomes, including destination image, tourism, cost, venues, housing, and social legacies. The case is written with anonymity of the actual city so that the instructor can adapt the case to a specific city. The case is particularly useful for courses covering sport tourism, stakeholder management, event management, or sport economics and finance.
II. METHOD
In accordance with the problems examined in the field study, qualitative research was used. According to Ary, qualitative research is a series of scientific processes in the social field that study the complexity of behavioural and social phenomena [6]. The purpose of qualitative research is to describe what is happening and to interpret the description in clear language, so that the events that happened can be illustrated carefully and accurately. In qualitative research there is no manipulation of a dependent variable, because the research is not carried out under controlled conditions as in quantitative research. In addition, the data generated are not in the form of numbers, but descriptive data in the form of written or spoken words from people and observed behaviour.
With this qualitative study, the researchers are expected to be able to interpret the data and present the results of the strategies for success at the 2016 PON XIX in West Java. As Denzin and Lincoln describe, in qualitative research the notes and interpretations are based on field findings [7]. The field findings are further reviewed to obtain a more accurate meaning. Mikkelsen states that qualitative methods are identified with phenomenological and interpretative research [8]. Phenomenological approaches lead to a dual focus of observation, namely: (1) what appears in the experience, which means that the whole process is the object of study, and (2) what is directly given in the experience, which is directly present for those who experience it.
In general, qualitative methods have characteristics that better correspond to the phenomena developing in the fields of education, social science, and psychology. This is because the study design in qualitative research is more flexible; that is, modifications can be made even while the research is being carried out. In addition, researchers can relate directly to the object of their study in a more conducive relationship atmosphere, and the costs of research using qualitative methods are generally relatively lower. Qualitative methods have long been considered to have the power to answer research questions more broadly and deeply. Hyllegard et al. explain that qualitative research methods give researchers the possibility to find out more broadly and deeply about the complexity of social phenomena [9], likewise in the context of physical education and sports, especially to better appreciate the position of sports and training in a diverse range of cultures and values. The process of qualitative research is always open, meaning that improvements can be made while the process takes place. Therefore, in qualitative research, the process is as important as the results to be achieved; indeed, according to Bogdan and Biklen, qualitative research places more emphasis on process than on results [10].
The data collection techniques used in this study are intended to obtain various pieces of information that mutually support and complement each other. Data were collected from the research subjects based on: (1) observation guidelines, (2) interviews, and (3) document review. The trustworthiness of the data was verified by triangulation and consistency testing between the PON Grand Committee, West Java KONI and the PMO [11]. A Project Management Office is a division or department within an organization that determines and maintains standards of project management in that organization. The main purpose of establishing a PMO is to obtain the maximum benefit by standardizing and disciplining projects according to certain regulations, processes and methods [12].
III. RESULTS
The findings were obtained from the field in the form of answers from various research data sources. Data analysis, according to Patton, is the process of arranging the order of the data, organizing it with interpretation that gives a significant meaning to the analysis, explaining the patterns of the description, and looking for relationships between the dimensions of the description [13]. The results and analysis of the collected data, obtained from the field notes, were carefully organized to answer the research question, namely: what was the PMO strategy in the successful implementation of PON XIX 2016 in West Java?
Based on the results of data processing and other supporting material documented in the field, the following was found: at the PON implementation stage, the PMO worked based on the baseline of the activities of each field [14]. Each field understood what activities had to be carried out in the relevant month. If an activity could not be carried out, it was performed as far as possible in the following month, so that there would be no accumulation of activities at the end of the effort to achieve the baseline.

The relevance of this research is supported by Parent, whose purpose was to examine the theory and practice of knowledge management processes, using the Olympic Games as the empirical setting and the Olympic Games Organizing Committee and its stakeholders as participants [15]. The case study of the 2010 Vancouver Olympic Winter Games was inductively and deductively content analysed, resulting in the development of a knowledge management and transfer process model for Olympic Games organizing committees and their stakeholders. Moreover, the study found that the information and knowledge concepts should be placed on a continuum from explicit to tacit (with experience); practitioners do not distinguish between knowledge management activities as researchers do; socialization, externalization, combination, and internalization mechanisms can be found when tailoring knowledge for a stakeholder; and knowledge sources, reasons, organizational culture, and especially individuals are important when implementing knowledge management/transfer processes.
IV. CONCLUSIONS
Based on the results of the analysis and discussion, it can be concluded that the Project Management Office (PMO) strategy adopted by the Government of West Java Province in organizing the 2016 PON XIX was very effective [16]. This is evidenced by the average achievement of 100.00% of all activities contained in the baseline of activities in all fields. This shows that, at the end of its implementation, PB PON XIX 2016 West Java had successfully held the XIX multi-sport event in 2016, with the following achievements:
• successful implementation;
• the production of the LoG work packages, carried out by the PDU Sector;
• the distribution and installation of the LoG at each venue and at the specified installation points;
• the setup of the Command Centre, Headquarters and Media Centre, fully implemented by PT TELKOM through sponsorship, so that the ICT Sector could keep facilitating the licensing needed for the installation of ICT devices;
• repeated simulations of cross-sector activities, including the PON torch parade, reception and pick-up, contingent distribution, contingent flag raising, the opening and closing ceremonies, the sports competitions, the organization of exhibitions and side events, broadcasting, publication and media services, business fund marketing, the call operations centre, the headquarters, the command centre and the media centre;
• comprehensive cross-sector coordination, synchronization and communication, which grew a common understanding in the implementation of cross-sector activities; the concern and strong sense of ownership over the implementation of the West Java 2016 PON XIX, continually fostered in every stakeholder, resulted in a solid work team.
Given the successful implementation of the West Java 2016 PON XIX, we recommend that other provinces hosting the PON in the future use the PMO strategy as a companion before, during, and after the PON implementation.
"year": 2020,
"sha1": "1d58a5b087bda68ec939d05fe2cb9af4824038f8",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125934809.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68403cb2d826be89d33a31450aa97764b2d7247c",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Business"
]
} |
Sex workers' peer support during the COVID-19 pandemic: Lessons learned from a study of a Portuguese community-led response
To respond to the consequences of the COVID-19 pandemic, a community-led intervention was developed by the Portuguese national Movement of Sex Workers. With this exploratory study, we aimed to document their work and analyze their perceptions of this impact. To do so, we interviewed them individually between May and August of 2020. Additionally, we analysed an Excel sheet that contained the needs assessment and the support provided by the Movement. The content analysis of both suggests that the impact of the pandemic might have been exacerbated by the social inequalities caused by the prostitution stigma and by characteristics such as gender, migration status, race, and socioeconomic status. This study calls for the inclusion of sex workers' voices in the design of policies and responses related to the commerce of sex. The consolidation of a Portuguese Movement of Sex Workers is also noted.
Introduction
The rapid spread of the COVID-19 virus resulted in a state of public health emergency all over the world, which led to the development of restrictive measures to reduce the risk of its transmission. In Portugal, measures such as compulsory confinement, the establishment of sanitary fences, a ban on unjustified movement on public roads and a travel ban were declared on the 18th of March 2020 (Decreto do Presidente da República n.º 14-A/2020 de 18 de Março). From the 4th of May 2020 on, most of these measures were lifted, although the use of personal protective equipment, namely mouth masks and disinfectant gel, as well as practices such as hand hygiene, respiratory etiquette, the civic duty of staying at home and physical distancing, remained in force (Resolução do Conselho de Ministros n.º 33-C/2020 de 30 de Abril).
Based on their lived experiences, the National Movement of Sex Workers predicted the dramatic impact that these measures would have on sex workers and developed a support network as a response. Community-led interventions, in which sex workers take the lead in designing, delivering, and monitoring the response, have been widely used in the HIV epidemic intervention. They enable 'sex workers to address structural barriers to their rights, and to empower them to change social norms to achieve a sustained reduction in their vulnerability that goes beyond HIV'. (WHO et al., 2013: 44).
Sex work can be defined as the direct physical contact and indirect sexual stimulation in commercial activities that relate to the provision of sexual services, performances, or products (e.g. prostitution, lap dancing, pornography, stripping, telephone sex, live sex shows, erotic webcam performances, etc), maintained between consenting adults. Sex workers are the people who perform any type of behaviour with sexual or erotic meaning in exchange for money or other material compensation (Oliveira, 2016;Weitzer, 2010).
The life trajectories of people selling sex, their motivations for entering the industry and their approach to engagement in sex work are multiform (Oliveira, 2018; Sanders, 2007; Weitzer, 2009). Indeed, experiences of poor labour conditions, dissatisfaction with selling sex, having been physically or sexually abused, being addicted to drugs and having been tricked or forced into the industry were found not to be representative of the whole group (Vanwesenbeeck, 2001; Weitzer, 2009). This is not to say that sexual commerce is always an edifying, lucrative, or esteem-enhancing activity either. The happy hooker/victim dichotomy is rejected by the sex workers' movements themselves, given that the emphasis on experiences of exploitation, subjugation, and violence leads to the 'victim' stereotype of sex workers, while framing it with exclusively positive narratives excludes those who identify as sex work survivors or victims of human trafficking (Hofstetter, 2018). These simplified visions obfuscate the complexity of sex work realities, which build upon societal conditions. Characteristics such as immigration status, drug dependency, race, age, appearance, gender, and economic disparities are found to underlie differences in the social and economic strata within this industry and, consequently, the uneven distribution of working conditions and job satisfaction (Vanwesenbeeck, 2017; Weitzer, 2009). The term 'sex work' itself is bound to the perspective of sexual commerce as a form of work. By focusing on its occupational aspect, this expression aims to break the 'prostitution stigma' related to the negative and moral aspects of the commerce of sex (Leigh, 1997). Furthermore, it is more inclusive, since it comprises all types of sex work and points to the need to legally recognize it as a form of work, dignify it and guarantee the rights of those who perform it.
The whore stigma is well documented, as are the harmful consequences it causes to the workers (Weitzer, 2018). It has been widely described in the literature as leading to social isolation, loss of social ties, lack of well-being, low self-esteem, restriction of freedom, exploitation, and violence, including symbolic violence (the internalisation of the guilt and shame spread by society) (e.g., Benoit et al., 2018; Link and Phelan, 2001; Oliveira, 2012).
According to Goffman (1963), stigma is an attribute that is deeply discrediting and reduces a 'whole and usual person, to a tainted, discounted one' (p.3). Further, Link and Phelan (2001) conceptualise it as a process that involves the labelling of individuals with negative stereotypes, implying their loss of social status and discrimination.
To protect themselves from stigma, many sex workers hide this part of their lives from their relatives, thus living a 'double life'. The separation of these two worlds may create psychosocial stress and prevent them from seeking and receiving social support (Gaffney et al., 2008).
According to Oliveira (2008), hiding the activity from social care structures is also a common strategy to avoid mistreatment or discrimination, for example, when accessing social and health care services. This type of behaviour, grounded in the dehumanization of sex workers, expresses the institutional violence suffered by them and leads to their exclusion. Because they are not granted the common civil and human rights, sex workers see themselves as unable to exercise full citizenship, becoming socially excluded.
The stigmatization of people selling sex can be further embodied in legislation, with the regulation or criminalization of the commerce of sex. A growing body of literature has found that the criminalization of sex work, including laws that target only the purchase of sex, and the activities relating to its facilitation, has adverse effects on the workers' personal lives, health, and vulnerability to violence (Platt et al., 2018;Vanwesenbeeck, 2017).
In Portugal, the act of selling sex was decriminalized in 1983 (Decreto-Lei 400/82, de 23 de setembro). Although this framework does not criminalize sex work itself, it does not recognize sex work as a form of labour either, preventing its professionals from accessing their labour rights and citizenship. Furthermore, the promotion, encouragement, or facilitation of another person's sexual commerce is considered a crime by the Portuguese Penal Code (article 169°), which also has consequences for the workers' lives. Because they might be accused of facilitating other people's sex work, the workers often choose to work alone, which leaves them more vulnerable to violence. Additionally, prevention materials, such as condoms, can be used as evidence of this facilitation, which encourages unprotected sex practices. This approach conceptualizes commercial sex as inherently violent and oppressive to women, which is influenced by the existing prejudices towards sex workers (Oliveira, 2011).
Even though the legal and social situation in every country is different, sex workers' movements all over the world share the common goal of having the profession recognized as a legitimate form of labour, both socially and legally (Heying, 2018; Lopes and Oliveira, 2006). This recognition calls for granting human and labour rights to individuals in the sex industry in order to improve their living conditions. The sex workers' movement counts various supporters, namely researchers, harm reduction practitioners, political parties, and powerful international organisations such as the World Health Organization (WHO), the Joint United Nations Programme on HIV/AIDS (UNAIDS) and Amnesty International (AI). Altogether, they assert the need for its decriminalisation, arguing that it is the illegal status that makes sex workers more vulnerable to violence, abuse, and exploitation (AI, 2016; WHO et al., 2012). Decriminalisation is the legislative model that entails the removal of both criminal laws (prohibiting sex work or sex work-related activities) and civil regulations. The main point of this model is not to have sex work-specific regulations, but rather to have sex work fall under the existing regulations that cover other industries and health issues. The goals are to remove the stigma from sex work, respect sex workers' human rights, improve their health, safety, and working conditions, and treat sex work like any other profession. New Zealand is the only country adopting this model (Mossman, 2007).
In Portugal, although some informal and sporadic collective actions were registered throughout the last 40 years (Lopes and Oliveira, 2006), the formal organization of sex workers was non-existent until recently (Graça, 2019; Oliveira, 2018). In 2018, however, a member of the former ICRSE encouraged three sex workers to create a movement, now known as the national Movimento dxs Trabalhadorxs do Sexo (Movement of Sex Workers, MTS). MTS is a collective of sex workers and former sex workers that aims to represent and advocate for their rights at a national level. Like sex workers' organizations globally, they aim to reclaim power over self-representation and their voice as experts in public discourses on sex work. Indeed, although the movement still strives for its civil and human rights, it follows the current guidelines of global sex workers' organizations by constituting itself as a trade union, focusing on economic and labour rights (Gall, 2007). Nevertheless, with the outbreak of the pandemic, they concentrated their efforts on the response they developed.
Although much attention has been paid in the mainstream media to the effects that the measures applied to avoid the transmission of COVID-19 had on Portuguese sex workers (with numerous journalistic pieces published, some of them written by MTS), to the best of our knowledge, no studies have yet examined them. While NSWP (2020) conducted several surveys on the impact of these measures on people selling sex, Portuguese sex workers were not among the respondents. Such a study is highly relevant because it can contribute, in a timely manner, to improving the national response to sex workers in need, as well as to elaborating recommendations for the future. Thus, this study aims to fill this gap by exploring (1) the possible consequences that the restrictive measures related to the COVID-19 pandemic had on sex work, through the perspective of MTS' support network leaders. Furthermore, it explores (2) how these consequences impacted sex workers' needs and (3) which sex workers suffered the most from the pandemic and the respective isolation measures taken by the government. Finally, it also aims to (4) analyse which intervention practices were implemented by the MTS to address these needs.
Participants and instruments
The participants of this study were five people (P1-P5) who led the support network created by MTS during the COVID-19 pandemic of 2020.
Considering that they contacted 218 sex workers and that all of them are, or were at some point, active within the industry themselves, we consider them key informants (Corbetta, 2003).
Two of the participants have been part of the movement since 2018, the year of its establishment, one joined in 2019 and two early into 2020. They have, on average, 14 years of experience as sex workers and are aged between 28 and 46 years old.
The participants were contacted by the research team and provided with a thorough explanation of the study, its objectives, and access to future publications, in a meeting held online. A participatory model of research based on the participants' collaboration in all stages was agreed upon. Thus, the participants of this study are co-researchers and co-authors of everything generated by it. This approach represents an important step towards giving voice to a population that is usually marginalized, as it recognises them as experts and legitimate producers of knowledge about their own lives (Oliveira, 2019).
It was also agreed that MTS would only share with the research team data that was considered ethically innocuous, properly anonymised, and generic about their work during the pandemic. At the beginning of the first interview, we explained to the participants their anonymity rights, as well as the right to withdraw and refuse to participate or answer any question. They accepted this confidentiality agreement orally and were audio-recorded doing so. Furthermore, this study was approved by the Ethics Committee of the Faculty of Psychology and Education Sciences of the University of Porto (Decision Ref.ª 2020/05-7b).
Materials
To answer our research questions, we developed a qualitative exploratory study, which uses interviews with privileged informants and documentary analysis as its primary research strategies. Two semi-structured interview scripts were designed to explore and reconstruct the participants' experience supporting sex workers during the pandemic. The first script included questions about MTS and the impact of COVID-19 on MTS' work, sex work and sex workers, and suggestions on what would improve the response to these workers. The second script, used in the follow-up interviews, comprised questions aimed at assessing the continuity of the participants' work since the previous interview. The order in which the topics were introduced, and the wording of the questions, were adjusted throughout the interview so as not to interfere with the interviewee's train of thought, as suggested by Corbetta (2003). Additionally, a documentary analysis of an Excel sheet developed by MTS was performed to gain a deeper understanding of what was done by the movement.
Data collection
Fifteen interviews were conducted between May and August of 2020. The five participants were each interviewed during three moments of the pandemic, to monitor their responses over time. The period between the three interviews ranged between 4 and 9 weeks, depending on the availability of each participant. Due to the contact restrictions imposed because of the COVID-19 pandemic, the interviews were conducted over the phone and recorded for future reference. The first set of interviews lasted between 32 min and 2 h; the second ranged between 6 and 30 min and the third between 7 and 57 min.
The Excel sheet was provided to the research team by the participants themselves and contained information about the sex workers' needs and the support provided by MTS over the pandemic period.
Data analysis
The data were subjected to categorical content analysis, as specified by Bardin (2002). Firstly, the interviews were transcribed and, together with the excel sheet, constituted the data corpus. We read the data thoroughly to familiarize ourselves with the main ideas expressed by the participants, after which the coding and categorization process was initiated. The corpus was first fragmented into units and then it was systematically transformed and merged into categories. Since we intended to capture the spontaneity of the participants' speech, the analysis followed an inductive approach (Braun and Clarke, 2006).
The categories were later reviewed in relation to each other and to the coded data extracts. As a result of this process, three broad themes arose: 'The impact of the pandemic on MTS', 'The impact of the pandemic on sex work' and 'The relationship with the social system'.
The use of two different data collection strategies allowed us to triangulate information, enhancing the trustworthiness of our data. Additionally, we believe that the long-term engagement with our participants led to the development of safe and trusting relationships with them, which further benefits the credibility of our findings.
Limitations
Whereas this qualitative approach provided an in-depth understanding of the dynamics involved in the peer-led intervention conducted by MTS, additional quantitative sociodemographic data, collected directly from a sample of sex workers, would have yielded a representative picture of the relation between those data and the impact felt with the COVID-19 pandemic. Additionally, we acknowledge that having the representatives of MTS as gatekeepers in the data collection does limit the results to the perspective of the service providers and may have biased the information collected on what the organization was doing and how it was received by the workers. Listening to the sex workers supported by MTS would have avoided this limitation and would also have contributed to the representativeness of the impact of the pandemic on their lives.
Results
The impact of the pandemic on MTS Response to COVID-19. MTS's response to COVID-19 consisted of an assessment of their colleagues' needs, followed by the facilitation of the access to the resources required to fulfil them, as mentioned by all participants. Initially, the participants contacted sex workers to conduct this needs assessment: '(…) we went to announcements' websites, we were the ones who contacted people asking… explaining who we were, what we were doing and asking if people were going through difficulties that we could help [with]' (P5, June) and, later on, they 'created a form' that was 'disseminated through various social media [websites], even in newspapers…' (P1, May). Additionally, one participant said that they did a follow up of each person's situation: 'We have always tried to keep up and not just 'Look, we helped once, that is it' (P2, May).
The facilitation of the access to the resources was made using an 'emergency fund' created by MTS, which consisted of money gathered through crowdfunding (donations were made for this purpose specifically), as expressed by two participants. However, 'since the fund was not enough to assist the number of people [in question]' (P1, May), all participants explained that they asked for help from all kinds of institutions and organizations: 'in the beginning, it was institutions that worked specifically with… people in prostitution. Later it [was] expanded [to other organisations]' (P5, June). They contacted NGOs, city halls, parish councils, religious associations, State-funded institutions, community networks and 'informal organisations created to respond to the pandemic consequences'.
Moreover, all the participants reported that they shared legal, political, and COVID-related information whenever it was requested, and one of the participants mentioned that when they contacted sex workers and learned that they were still working, they made sure they were safe. Two of the participants reported the creation of a chat where relevant information and documents were shared among the sex workers.
Although MTS was still providing this service when the last interview happened, three of the participants reported that when the state of emergency was over some of the institutions and community networks were no longer helping, due to the 'lack of human resources' (P2, June) (e.g., volunteers went back to work, P1, June), 'monetary incapacity' (P2, June) or because they 'ceased the support' (P4, June). Further, at the end of July and in the middle of August, two of the participants stated that this was still happening, now due to some of the organizations being closed for holidays. Therefore, according to three of the participants, they had to use more of their funds, which eventually 'ran out of money' (P4, August). Nevertheless, by the end of the interviews, two of the participants mentioned that they were still trying to get more funds. One of the participants also stated that 'as we got out of the emergency state, we noticed that the level of donations to our fund decreased a lot as well… Because people got back to the routine.' (P1, June). Hence, in June, the participants said that they did not have the capacity to meet the demand, and one explained that they had to distribute resources according to the perceived priority.
When asked if they were able to respond to all the requests received, three of the participants stated that when it came to 'food', 'medication' and 'information' requests, they were, but not to requests for help paying rent. Another participant said that they were not able to, just like any other entity, including official institutions.
Movement's consolidation. The changes felt within the movement are noted: '(…) it made us jump some steps. It made us hurry the process of connection (…).' (P4, May), and their organization is commended by themselves and others: 'Everyone was like "How did you manage to get so many people and have an organization so on the clock (…) without any resources and showing up from nowhere"' (P2, May). Additionally, P1 (May) said that 'with this' they were able to establish 'a very close relationship with the workers', which led to the perception of the movement as different from the other organisations and entities that work with sex workers, as well as to the realisation of the importance of uniting. In this regard, P3 (May) expressed that the pandemic 'generated union', and two other participants described the mutual help, support, and dialogue they perceived on the WhatsApp chat they created with the sex workers. In the last interviews, three of the participants reported that the movement was 'growing' and 'getting new members' (P2, August).
Knowing that fighting for their rights 'cannot be unrelated with what is going on right now' (P5, June), three participants reported that they continued working on the political aspect of the movement, and in August they noted that they were working towards their legal formalization.
The pandemic impact on sex work
Consequences. The main consequence that the COVID-19 pandemic had on sex work was unanimously referred to as the impossibility to work. The closing of bars and nightclubs, their children's presence at home, the neighbours' vigilance, the prohibition of being on the street imposed by the government, having chronic diseases, the lack of clients, the impossibility of renting other places to do sex work and the insecurity of the pandemic situation were pointed out as the reasons for that. On the other hand, three of the participants outlined that moving sex work to the online setting was not a possibility, because 'online sex work is a different type of sex work' (P5, July), which implies a 'greater level of exposure' (P5, July). Also, they explained that not everyone can do online sex work due to the skills and resources that are required to do so. Additionally, two of the participants pointed out that the amount of money made with online sex work is not enough to make a 'salary' (P4, May) or 'pay rent' (P5, June). Nevertheless, by August one of the participants stated that 'the social networks and technology are opening new ways to sex work, especially to people that have chronic diseases and have no other option' (P1). According to the participants, this inability to work had a financial impact on sex workers which 'catapulted' several other needs.
Three participants also referred to the pandemic's impact on mental health: whilst two participants associated this impact with the 'lack of means to survive', P1 (May) talked about 'major psychological problems' caused by the change in their children's routine. P1 explained that, because these sex workers isolate themselves from friendships and from connections with their child's teachers and other parents (so as not to risk being recognised for their commercial activity), they did not have that support to help them adapt to the situation. Additionally, the participant reported that the pandemic caused more isolation because sex workers could not have contact with the people they met in their daily routine. P5 shared a different thought: 'I think it brought a bit more isolation, not only because of the confinement but also being a sex worker and not having, in certain places, access to the community (…) and that is something that we [MTS] are also trying to break right now (…) coming together more as a community and knowing we are not alone…' (June). With the end of the emergency state, all the participants claimed that some sex workers were getting back to work, but encountered a lack of clients and, therefore, a lack of income. Moreover, two reported that sex workers were taking precautions when doing so. One participant said that more people were engaging in the commerce of sex.
The most affected. When stating which sex workers suffered the most from the pandemic-related consequences, three of the participants referred to the poorest, most precarious and lower-class ones; that is, the most 'marginalised' sex workers: 'migrants', 'racialized', 'black', 'trans', 'people who have kids', 'people who suffer from chronic diseases' and those 'with a lower education level' (P1, May; P2, May; P4, May). Two explained that the more marginalised one is, the more violent the impact of not being able to work will be, and that these factors of discrimination are related to the lack of access to resources (P2, May; P4, May).
Additionally, three of the participants considered that all types of sex work that require physical contact were affected, and one mentioned the people who used to live in the nightclubs where they worked but were evicted when the nightclubs were closed as the most affected (P5, June).
As for the period after the end of the state of emergency, two of the participants claimed that the sex workers going back to work were the ones working independently/individually, while the street sex workers, the ones working at nightclubs, the ones that have chronic diseases and the ones who are mothers could not do so (P1, July; P2, August).
Needs. Regarding the needs felt, all the participants reported food as the main request made at the beginning of their service (P1, May). Food was then followed by requests for medicines, and according to three participants, the request for help to pay bills and rent was the main one from June on. One of the participants offered an explanation: for P1 (August), the state of emergency gave a 'false sense of security' to the colleagues, who were then faced with the number of expenses they had been ignoring, as well as with the poorly developed agreements with their landlords. Hence, two of the participants reported requests for information related to 'negotiation with landlords', the legal aspects of 'migration' and how to go back to work, among others. One participant also reported that some people asked for help looking for another job. Another explained that many sex workers had more than one need.
From the excel sheet provided by MTS, we were able to calculate the frequency of each need reported. This analysis indicated that food was, indeed, the main request made by the sex workers who contacted MTS (46.3% of the needs reported). House rents and bills were the second and third main requests (16.1% and 13.1% of the needs respectively), while monetary help for medication (5.7%), information on how to negotiate with the landlord and the type of support they are entitled to (4.5%) and for a mental health practitioner (3.3%) were also among the frequently requested needs.
Vulnerability to COVID-19. The sex workers' vulnerability to COVID-19 is perceived, by four of the participants, to be no different from the rest of the population or other occupations that require physical contact. One of the participants, however, stated that they were more vulnerable 'because this disease is a silent disease' and they cannot know if the client will be honest about being infected or not (P3, May).
Nevertheless, although three participants reported 'a few' situations of a positive diagnosis of COVID-19, two stated that 'it was not related to sex work', whereas the other one said that 'because part of the people did go back to work, some colleagues started to show up with COVID' (P4, August). The two remaining participants did not report any cases of COVID-19. One declared that not having more COVID-19 reported cases was a surprise, probably because of the stigma that associates sex work with the transmission of viruses.
Moreover, two of the participants stated that people in more vulnerable situations (e.g. the elderly, migrants and people who use drugs) take more risks at work: 'I think that it is really hard to convince clients to wear a mask, for example. And I think that when the situations are more vulnerable, when people need to make money the most, those are the times when clients try to push a bit for things that they usually would not. And that, depending on the person's needs, is what can result in more risky behaviours.' (P5.1).
The relationship with the social system
Trust and peer work. Trust in the institution was described as an important factor for sex workers when it came to asking for help: the fear of stigma, exposure and lack of anonymity were reported as factors that inhibited them from reaching out. One, for example, was afraid that the State services would take their children away. Hence, confidentiality was regarded as essential in MTS' work, not least because sometimes 'people with whom sex workers share their lives do not know what they do for a living' (P4, May). The participant also said that when they collaborated with institutions and needed to hand over the sex workers' data, they demanded guarantees that the data would not be shared with the State's control services.
The perception of the Movement as an organization of people who 'are exactly like them' (P1, May) was highlighted by four participants as a crucial factor in earning their trust. P4 (May) refers to their efforts not to be perceived by their peers as a charitable institution or a financed service, but rather as an organization of peers that they can join. In fact, one of the participants stated that they 'had to replace the role of the institutions' and added that the ones in direct contact with the sex workers did 'almost an intensive course to be able to act as peer workers' (P1, May). The participants also mentioned the need to implement more peer work practices in the responses to sex workers.
Institutions, organizations, and community networks. The collaboration with some institutions, organizations and community networks was highly praised, whilst with others a participant felt that they were 'ignored' and 'not taken seriously' (P2, July). Furthermore, two of the participants mentioned reports of situations of mistreatment and discrimination related to the workers' status as sex workers, which led some of them not to want to ask for help. The unwillingness of some harm reduction institutions to collaborate with MTS was also commented on.
Besides that, the amount of bureaucracy demanded by the institutions is reported by three people as an obstacle to providing help, especially to the 'ones that need it the most' (P5, June). One of the participants declares that it proves that 'the supporting systems were never enough, and, in fact, the support never existed to give a real response to the people's needs because if it did, it would take them out of poverty.' (P4, May).
Government and law. The inaccessibility of state support caused by the non-recognition of the profession is linked by all participants of this study to the impact sex workers felt from the COVID-19 pandemic-related measures, because they could not access the support given by the State: 'it is basically as if we do not exist in the economy' (P5, July). One participant mentioned that if sex workers had some kind of security, they would not feel the need to take risks by working, and that the support given to informal workers ('200€ per month', P5, July) is not enough to pay for their monthly expenses. According to them, MTS did reach out to the Government to explain this, but never got a reply.
Thus, the urgency of its legal recognition is pointed out by all the participants as a way of preventing crises of this kind. One of the participants expands: 'Since these people live illegally, in marginality, in a refusal of institutional recognition, they are more exposed and it is much harder to intervene' (P4, May). The participant also said that 'it is necessary to have a risk reduction perspective [on sex work]'.
Recommendations. When it comes to the institutions and the government, three participants state that it is necessary to listen to 'the sex workers' opinions, decisions and needs' (P4, May) when implementing intervention projects, policies and solutions that have sex workers as the target group. The participants expressed that there are details that only those with lived experience of the subject understand. Hence, listening to the people to whom the interventions and policies are targeted is mentioned as crucial to matching their needs. In this regard, P1 (July) calls for collaboration from the institutions that work with them, and P5 (July) requests help in applying 'political pressure' to grant more visibility to this situation because otherwise they will 'not be able to help effectively'.
According to four of the participants, the MTS is implementing the practice of listening to the sex workers' opinions: 'We are going to have a questionnaire online [and ask] what the difficulties are, what they need… our representation has to come from the voices of our own.' (P1, May).
Discussion and conclusions
By exploring the perspectives of the sex workers that led a peer intervention during the COVID-19 pandemic, the present study contributes to the understanding of the impact that the pandemic had on people selling sex. Furthermore, it offers recommendations for the development of comprehensive intervention strategies and policies on the subject.
The results of the present study suggest that the measures developed to control COVID-19 transmission made it impossible for many people selling sex to continue to work, which cut off their source of income. As a result, these sex workers were left with no means of subsistence, making it difficult to pay for basic needs such as food, medicines, rent, or household bills. These findings are consistent with the impact of the pandemic on the sex industry recently found by Callander et al. (2020) in male sex workers and suggested by the Global Network of Sex Work Projects' (2020) data.
This economic strain is linked, by the participants of this study, to the exclusion of sex work from the economic guarantees provided by the government, which is further associated with the non-recognition of commercial sex as a legitimate form of work. It is beyond doubt that the pandemic put a significant economic burden on almost all sectors. However, because sex work is a non-regularized informal activity, people selling sex were not legally included in the extraordinary legislative measures developed to support the other sectors facing this burden. The framework of the Portuguese law on sex work, which neither criminalizes the sex workers nor provides them with the basic labour rights given to other sectors (Oliveira, 2018), hindered access to the help conceded to other citizens, exacerbating the harmful effects of the pandemic on sex workers' lives when compared to the wider population.
Besides not being able to access the government's economic help, sex workers encountered difficulties contacting the social care systems. Although the facilitation of access to the resources was to some extent possible due to the collaboration with other institutions, organizations, and community networks, the results showed that with the end of the state of emergency many of these collaborations fell through, leading MTS to rely more and more on their own resources over time. Thus, if it were not for MTS, some sex workers would not have been provided with the help they needed. Consequently, the leaders of MTS described the response they conducted as 'taking up the role of the institutions'.
On the other hand, the leaders of MTS reported situations of mistreatment and discrimination toward sex workers when they accessed social aid. Because these types of behaviours inhibited people from accessing the social care structures, our findings support the assertion that institutional violence highly contributes to the exclusion of sex workers (Oliveira, 2008). Structural violence and sources of macro (legal framework) and meso (lack of access to the care systems) stigma have previously been found to affect sex workers' lives significantly and negatively, by hindering their access to health, justice, and social care support (Platt et al., 2018; Vanwesenbeeck, 2017). These findings build on those results and highlight the negative effects of stigma on sex workers' lives.
The reported feelings of isolation are also consistent with previous research (Benoit et al., 2018). In this study, the participants mentioned that, to escape the negative social reactions associated with their professional activity, some sex workers avoided social networks that could now have been useful in providing support during the pandemic. For example, they avoided the networks of parents and teachers that could have helped them adapt to their children's changed routines. Seen as immoral, deviant, and transgressive, people selling sex may avoid these types of social networks to avoid being recognised (Oliveira, 2008). Hence, they must rely on each other even for the most basic needs, like maintaining contact with other human beings. In this regard, MTS's efforts to create a feeling of union among sex workers were said to counter this lack of access to the wider community.
The participants seem to associate this feeling of isolation with negative effects felt at the psychological level, which is coherent with the existent research on the matter (Benoit et al., 2018;Oliveira, 2008;Platt et al., 2018). Nevertheless, it should also be noted that negative consequences of the COVID-19 pandemic on mental health have been found in the general population as well (Xiong et al., 2020).
At the beginning of the pandemic, UNAIDS (2020) stated that stigma and discrimination could leave the most vulnerable people further behind. Based on the lessons learned from the HIV epidemic, the organisation stated that a successful response relied on the removal of barriers to people protecting their own health, including fear of unemployment and loss of wages. Research on HIV prevention among sex workers supports this statement (Deering et al., 2013), as risk behaviours (e.g. non-use of condoms) were also found to be related to structural conditions. Although very few cases of COVID-19 were reported in this study, our findings seem to be in line with these concerns. People in vulnerable situations, such as migrants, people who (ab)use drugs and the elderly, were regarded as more prone to take risks related to COVID-19 transmission than other sex workers. Further, the workers who suffered the pandemic-related consequences the most were described as belonging to vulnerable groups (e.g. race, migration status, gender identity, sexual orientation, low socioeconomic status). These findings suggest that the characteristics that lead some sex workers to the lower social and economic strata in the industry are the same ones that make them susceptible to the nefarious consequences of the measures taken to respond to the pandemic (Weitzer, 2009).
Moreover, when discussing the results, the participants highlighted the exacerbated effect the pandemic had on women who are mothers, who, according to them, represented most of the requests received during the support period. A special emphasis was also given to undocumented sex workers who, in addition to the consequences faced by their colleagues, encountered the fear of being deported when asking for help. Borrowing Vanwesenbeeck's (2017) words, whilst some structural conditions in which the 'ugly side' of sex work is rooted, like gendered labour markets and double sexual standards, continue to shape one's options to make money, disproportionate consequences of this kind will continue to exist. The prostitution stigma, together with factors such as class, gender, race, and education, is a fundamental determinant of social inequality. These factors generate exclusion and have a significant impact on people's health and well-being (Hatzenbuehler et al., 2013; Wilkinson and Marmot, 2003). The findings of this study are consistent with these studies and emphasise the need for the development of legislative measures that ensure both public health and respect for human rights. UNAIDS (2020) itself recommends that 'when preparing for epidemics, members of communities generally considered more vulnerable to an epidemic should have a place at the governance table' (p. 6), calling for community-led responses. Because peers face the same concerns and pressures, they are generally perceived as more credible, and their messages are more likely to change attitudes and behaviours (Gaffney et al., 2008). In this study, the recognition of MTS as a peer organisation was identified as essential to earning the trust of the sex workers supported, which is consistent with the literature.
The same plea as UNAIDS's is made by the leaders of MTS, who find their involvement in the development of policies and responses to sex workers to be crucial, given their deeper understanding of the work. Nevertheless, their requests to meet with the government were never granted, and their letters reporting the situation were ignored. The demand to be heard and listened to echoes the one made by the international sex workers' movement itself, which adopts the motto 'nothing about us, without us' to reinforce the idea that no decision should be made without taking into consideration the opinions and needs of those with lived experience of the subject (Dziuban and Stevenson, 2015).
Whilst the unification of sex workers into a formal organisation was recently considered non-existent in Portugal (Graça, 2019), our findings suggest that the COVID-19 pandemic paved the way for the consolidation of the sex workers' movement, now organised into a formal structure with a growing number of new members. Taking Graça's (2019) deduction that the emergence of collectives of sex workers follows real threats, this public health hazard seems to have had a positive impact on the emergence of a sex workers' collective that is more alive than ever and ready to sit at the governance table.
The sex workers' international movement has been clear in stating the harmful effects that stigma, discrimination, and punitive laws have on sex workers' lives. The findings of this study support the need to hear those statements. Further, they call on the need to include sex workers' voices in the design and implementation of interventions and policies targeting the sex industry. This study served as an attempt to give this voice and ended up also registering the emergence of a Portuguese Sex Workers Movement.
"year": 2022,
"sha1": "df345c6dc637822f336e3fb874e3ee222542e86f",
"oa_license": "CCBYNC",
"oa_url": "https://repositorio-aberto.up.pt/bitstream/10216/144129/2/582785.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "59e9ec509ebed55acc5be5b0f914594acf17ffc1",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
Behavior Modeling for a Beacon-Based Indoor Location System
In this work we performed a comparison between two different approaches to track a person in indoor environments using a locating system based on BLE technology with a smartphone and a smartwatch as monitoring devices. To do so, we provide the system architecture we designed and describe how the different elements of the proposed system interact with each other. Moreover, we have evaluated the system’s performance by computing the mean percentage error in the detection of the indoor position. Finally, we present a novel location prediction system based on neural embeddings, and a soft-attention mechanism, which is able to predict user’s next location with 67% accuracy.
Introduction
The advances in hardware and software technologies have led to the adoption of smart environments in many contexts of our daily lives. Smart homes and smart buildings are already equipped with a multitude of embedded devices, along with connected sensors and actuators [1]. Several real cases already exemplify smart cities, which use the opportunities provided by innovative technologies to improve the lives of their inhabitants [2]. In such settings, smart environments are expected to play a crucial role for coping with the needs of sustainability, energy distribution, mobility, health and public safety/security [3]. A particular focus is the realization of ambient assisted living (AAL) solutions to enable elderly people to live independently for as long as possible, without intrusiveness from others. These solutions benefit from Internet of Things (IoT)-enabling technologies to improve elderly life thanks to the introduction of intelligent, connected devices [4].
Several AAL applications have been developed that have user positioning as their core capability. Elderly care [5], guidance systems [6], energy consumption [7] and security [8] are only some of the possible applications of indoor positioning information. Based on indoor positioning, it is possible to identify where a user is located and to predict his/her future locations based on the recent location history. In this paper, the indoor positioning issue is addressed by considering the performance obtained while using two different kinds of device to estimate the indoor position: a smartphone and a smartwatch. With both devices, the Bluetooth Low Energy (BLE) technology was exploited to obtain indoor positioning information. A generic home has been equipped with BLE beacon infrastructure, and several tests have been carried out with different configurations in terms of the number and models of beacons in each room. For each test campaign, the performance in terms of mean percentage error in the detection of the indoor position was calculated using a smartphone and a smartwatch, and the results have been discussed.
For the location prediction, we present an algorithm based on neural embeddings that represent the locations of a house, together with an attention-based mechanism that, instead of being applied to the hidden states of the neural network architecture, is used to modify those embeddings.
The rest of the paper is structured as follows. Section 2 contains an analysis of the state of the art. In Section 3 we describe the overall architecture of the system and in Section 4 the location prediction algorithm. Section 5 contains an explanation of the testing environment and we discuss the results of the experiments in Section 6. Finally, in Section 7 we draw the conclusions and propose future areas of research.
Indoor Location
Indoor positioning systems (IPS) are an essential part of any intelligent environment or pervasive computing system. Indoor positioning has been used to model users' behavior in order to detect early risks related to frailty in elders [9], guide museum visits [10] and coordinate emergency responses [11]. There are different approaches and technologies that have been proposed over the years to tackle indoor positioning. Vision-based approaches use either visible light systems [12] or infrared signals, such as the Active Badge Location System, wherein a wearable tag emits an infrared code that is captured by an interconnected network of sensors [13]. Other vision-based systems use computer vision to detect specifically generated bidimensional codes in order to locate users and devices in an intelligent environment, such as the TRIP location system [14]. In the context of ambient assisted living, in [15] a video-based monitoring system for elderly care was proposed. The main objectives of this system are to preserve elderly independence and increase the efficiency of the homecare practices. The main disadvantage of the vision-based technology lies in the cost, which is still too high, especially for systems with very high precision. Alternatives to these systems are the radio frequency-based systems, such as those using Wi-Fi [16], RFID or Bluetooth.
Radio frequency identification (RFID) is one of the most popular wireless technologies for tracing and positioning [17,18]. The main advantage of this technology is the capability to work in the absence of line of sight (LoS). An example of this is the work done in [19]. The authors used Bayesian probability and k-nearest neighbors in combination with RFID tags. Other authors applied a deep belief network as a fingerprinting-based RFID indoor localization algorithm [20]. Additionally, a combination of hyperbolic positioning and genetic algorithms has been used in order to compute the phase offset caused by the interference between tags [21]. NFC systems, such as [22], can be considered a sub-category of RFID systems. In most cases, such systems have the drawback of requiring a smartphone to approach a deployed beacon. This type of active participation from the users is not desirable in most scenarios.
Bluetooth technology is an alternative for indoor positioning [23]. It can guarantee a low cost, since it is integrated in most of the devices we use daily, such as tablets and smartphones. Moreover, the spread of the emerging BLE technology also makes Bluetooth energy efficient, which is a key requirement in many indoor applications. This efficiency allows for higher measuring rates when determining a user's location and for longer battery life. For these reasons, BLE is currently considered one of the most suitable technologies for indoor positioning. The recent rise of iBeacons by Apple has contributed to the rapid spread of this technology, which is used to provide information and location services [24] in a completely innovative way. The accuracy of BLE for indoor locating has been extensively studied by several authors [25]. Subedi et al. [26] proposed the use of weighted centroid localization alongside the received signal strength indications from the neighboring BLE beacons. However, in order to achieve accuracy rates similar to Wi-Fi based approaches, BLE beacon-based approaches require more beacons than Wi-Fi APs [27].
Ultra-Wide Band (UWB) is another alternative for accurate indoor positioning. García et al. [28] presented a novel system for indoor positioning using UWB in highly complex environments with non-line-of-sight (NLOS) conditions. To do so, the authors used an extended Kalman filter together with an NLOS detection algorithm. UWB has been widely applied in the tracking of sports activities in indoor environments [29,30]. A more extensive analysis of the state of the art can be found in the following reviews: [31,32].
Behavior Prediction and Modeling
User behavior prediction and modeling is an area of research applied to several domains. As discussed in [33], behavior prediction is a core problem to be solved in the creation of more energy efficient and sustainable spaces. In [34], the authors applied behavior prediction to the online behavior in order to identify malicious users. In [35] it was used for marketing purposes. Behavior prediction is a commonly used technique in both real and virtual intelligent spaces. In [4], behavior was used to identify the risks related to mild cognitive impairments and frailty in the elderly in IoT-augmented spaces. The authors of [36] used behavior prediction to create more intelligent automated homes. In [37], the authors used similar behavior predicting approaches, but in this case to predict the behavior in virtual spaces.
Different techniques have been used to tackle this problem. Almeida et al. [38,39] have studied the usage of both convolutional neural network (CNN) and long short-term memory (LSTM) architectures to predict users' behavior while representing actions with neural embeddings. An LSTM approach was also used in [40] to learn and predict design commands based upon building information modeling (BIM) event log data stored in Autodesk Revit journal files. In [41] the authors followed a neuro-fuzzy approach. A Gaussian radial basis function neural network (GRBF-NN) was trained based on the example set generated by a fuzzy rule-based system (FRBS) and the 360-degree feedback of the user. Kim et al. [42] studied using RNN architectures in order to predict multi-domain behavior. In [43], the authors used both attention and memory mechanisms in their neural network architectures to improve the prediction results.
System Architecture
In Figure 1, the system's overall architecture is depicted. It mainly consists of the following components:

• BLE beacon infrastructure.
• A monitoring device to capture positioning data.
• A cloud server to store and process captured data.

Beacons are small radio transmitters that send Bluetooth signals. They are available in different sizes and shapes, making them suitable for a wide range of applications and allowing them to be easily integrated into any environment unobtrusively. A beacon is cost-effective, can be installed easily, and its position can be determined to within a few meters. The BLE standard is also very energy efficient. Beacons can be used in server-based (asset tracking) and client-based (indoor navigation) applications. The latter option was used in our study. Specifically, in the proposed solution, the indoor environment is equipped with a BLE beacon infrastructure. In particular, a BLE beacon is placed in each room, but in large rooms or long corridors, more beacons can be placed.
On the server side, every association between a beacon (i.e., the MAC address of the beacon) and its location (i.e., the room in which it is located) is stored in the database. When the application starts, this beacon/room map is transmitted from the server to the local database on the monitoring device. In this way, the application performs preliminary filtering during the scanning phase and only considers signals from beacons that are part of the implemented infrastructure for subsequent operations. The monitoring device consists of a smartphone or smartwatch running a specially designed and implemented application. In particular, the mobile application performs repeated Bluetooth scans at configurable time intervals. With our settings, a Bluetooth scan lasts 10 s, and the next scan starts 15 s after the end of the previous one. During the scanning phase (i.e., within a 10-s interval), each beacon will be detected multiple times, each detection triggering an event. Specifically, the average value of the detected RSSI and the average value of the transmission power (TxPower, the power at which the beacon broadcasts its signal) are calculated. At the conclusion of the scanning process, a list of beacons identified by MAC address is obtained, along with their respective average RSSI and TxPower values. Using these values, the calculateRating function in Listing 1 is applied to each beacon. This allows an "accuracy" value, called a rating, to be assigned to each beacon, which is used to correct the detected average RSSI value.
The formula used in the previous code to calculate the rating was [44]:

$$rating = 0.89976 \times \left(\frac{RSSI}{TxPower}\right)^{7.7095} + 0.111 \qquad (1)$$

The three constants in the formula (0.89976, 7.7095 and 0.111) are based on a best-fit curve over a number of signal strengths measured at various known distances from a Nexus 4. However, because the accuracy of this measurement is affected by errors, our algorithm uses the formula generically as a "rating" rather than as a true distance.
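For illustration, a minimal Python sketch of this rating computation follows; the function and variable names are ours, not those of Listing 1, and the guard for RSSI/TxPower ratios below 1 follows the reference curve-fit implementation from which the constants are taken.

def calculate_rating(avg_rssi: float, avg_tx_power: float) -> float:
    # Best-fit "rating" from formula (1); roughly proportional to distance.
    if avg_rssi == 0:
        return -1.0  # rating cannot be determined
    ratio = avg_rssi / avg_tx_power
    if ratio < 1.0:
        return ratio ** 10
    return 0.89976 * (ratio ** 7.7095) + 0.111

# The nearest beacon is the one with the lowest rating (illustrative values):
scanned = {"AA:BB:CC:DD:EE:01": (-68.0, -59.0), "AA:BB:CC:DD:EE:02": (-81.0, -59.0)}
nearest = min(scanned, key=lambda mac: calculate_rating(*scanned[mac]))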
Additionally, each beacon is linked to an indoor location (room) via the function in Listing 2. As a result, each beacon is identified by its MAC address, associated with the corresponding room (e.g., living room, bathroom, bedroom or kitchen), and labeled with a location ID (53, 56, 32, LVR, etc.). In addition, each beacon has a calibration RSSI value that corresponds to the average RSSI value measured at a distance of 1 m.
Finally, the distance from each beacon is calculated using this calibration value and the log-distance path loss model [45], as reported in Listing 3.
Listing 3. Distance calculation.
/**
 * Calculates distance using the log-distance path loss model.
 *
 * @param rssi           the currently measured RSSI
 * @param calibratedRssi the RSSI measured at 1 m distance
 */
public static double calculateDistance(double rssi, float calibratedRssi) {
    // Path-loss adjustment parameter (environment-dependent), fixed at 3 here.
    float pathLossParameter = 3f;
    return Math.pow(10, (calibratedRssi - rssi) / (10 * pathLossParameter));
}

Then, the beacon list is sorted according to the rating value. Since the calculated rating is proportional to the distance, as specified in formula (1), the first beacon in the list is the one with the lowest rating, i.e., the beacon closest to the smartphone.
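As a worked example with illustrative numbers: with a calibration value of −59 dBm, a measured RSSI of −75 dBm and the path-loss parameter of 3 used in Listing 3, the estimated distance is 10^((−59 − (−75))/(10 · 3)) = 10^(16/30) ≈ 3.4 m.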
The information about the nearest beacon is sent to the cloud server by the application. All detected locations are saved on the server and provided to the location prediction system to be processed, as described in Section 4.
Location Prediction System
The location prediction system is based on our previous work [38,39] focused on predicting users' behavior. The algorithm presented in this paper models the user's movements through indoor locations using their semantic locations. One of the characteristics of our algorithm is that it works in the semantic-location space instead of the sensor space, which allows us to abstract from the underlying indoor location technologies. The location prediction is divided into four modules that process the data sequentially (see Figure 2):

1. Input module: takes the semantic locations as inputs and transforms them into embeddings to be processed. It has both an input and an embedding layer.
2. Attention mechanism: evaluates the location embedding sequence to identify the locations that are more relevant for the prediction process. To do so, it uses a GRU layer, followed by a dense layer with a tanh activation and finally a dense layer with a softmax activation.
3. Sequence feature extractor: receives the location embeddings processed by the attention mechanism and uses a 1D CNN or an LSTM to identify the most relevant n-grams of locations for the prediction. In the case of the CNNs, multiple 1D convolution operations are performed in parallel to extract n-grams of different lengths, obtaining a rich representation of the relevant features.
4. Location prediction module: receives the features extracted by the sequence feature extractor (multi-scale CNNs or LSTMs) and uses those features to predict the next location. This module is composed of two dense layers with ReLU activations and an output dense layer with a softmax activation.

(Figure 2. The architecture of the location prediction algorithm. Both approaches are shown in the same image, as they share the input, attention and output modules.)
Input Module
The input module is in charge of receiving the location IDs and using the embedding matrix to get the vectors that represent them. As we demonstrated in [38], using better representations, such as embeddings, instead of IDs, provides better predictions. The proposed system uses Word2Vec to obtain the embedding vectors [46], a model widely used in the NLP community.
Given a sequence of locations $S_{loc} = [l_1, l_2, \ldots, l_n]$, where $n$ is the length of the sequence, $a_i \in \mathbb{R}^n$ indicates the location vector of the $i$th location in the sequence, and $Context(l_i) = [l_{i-n}, \ldots, l_{i-1}, l_{i+1}, \ldots, l_{i+n}]$ represents the context of $l_i$, the window size being $2n$, $p(l_i \mid Context(l_i))$ is the probability of $l_i$ being in that position of the location sequence. To calculate the embeddings, we optimize the log maximum likelihood estimation:

$$\mathcal{L} = \sum_{l_i \in S_{loc}} \log p(l_i \mid Context(l_i)) \qquad (2)$$

Our system uses Gensim to calculate the embedding vectors for each location in the dataset. The location embedding vectors have a size of 50. To translate the one-hot encoded location IDs to embeddings, we use an embedding matrix, instead of providing the embedding values directly. This allows us to train this matrix and adapt the calculated embeddings to the task at hand, thereby improving the results.
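As a hedged illustration of this step, the snippet below trains location embeddings with Gensim's Word2Vec; the room names, corpus and window size are placeholders, since the text only fixes the embedding size of 50.

from gensim.models import Word2Vec

# Each "sentence" is a chronologically ordered sequence of visited rooms.
location_sequences = [
    ["bedroom", "bathroom", "kitchen", "living_room", "kitchen"],
    ["bedroom", "kitchen", "living_room", "bathroom", "bedroom"],
]

model = Word2Vec(
    sentences=location_sequences,
    vector_size=50,  # embedding size used by the system
    window=2,        # context window n; the exact value is an assumption
    min_count=1,     # keep every location, the vocabulary is tiny
    sg=0,            # CBOW: predict l_i from Context(l_i)
)

kitchen_vector = model.wv["kitchen"]  # 50-dimensional embedding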
Attention Mechanism
Once we have the semantic embeddings for each location, they are processed by the attention mechanism to identify the locations in the sequence that are more relevant for the prediction process. To do so, we use a soft attention mechanism. This is similar to the approaches used in NLP to identify the most relevant words in a phrase. However, we do something different in this approach: we apply the attention mechanism to the embeddings instead of to the hidden states of the sequence encoder. As proven in [38], this approach has achieved better results when predicting locations.
Location sequences $S_{loc}$ are temporally ordered sets of locations $l_t$, with $t \in [1, T]$. The location sequence goes through the input module, which uses the matrix $L_e$, calculated previously with Word2Vec, to obtain the location embedding vectors. Those embedding vectors are then processed by the gated recurrent unit layer, creating a representation of the sequence. This gated recurrent unit layer reads the location sequence from $l_1$ to $l_T$ and has a total of 128 units.

The attention module takes the gated recurrent unit layer's states $h_t$ and creates a vector of weights $\alpha_t \in [0, 1]$ with the relevance of each location $l_t$. This is done using a dense layer with a unit size of 128 to get the hidden representation $u_t$ of $h_t$:

$$u_t = \tanh(W_u h_t + b_u) \qquad (3)$$

Then we use a softmax function to calculate the normalized relevance weights $\alpha_t$ for the location instances:

$$\alpha_t = \frac{\exp(W_\alpha u_t)}{\sum_{k=1}^{T} \exp(W_\alpha u_k)} \qquad (4)$$

The obtained vector is used to weight the location embeddings $x_t$ for the prediction, yielding $L_{eadj}$:

$$L_{eadj} = [\alpha_1 x_1, \alpha_2 x_2, \ldots, \alpha_T x_T] \qquad (5)$$

These embeddings $L_{eadj}$ are then used to process the sequence.
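To make the pipeline concrete, the following Keras sketch implements the input module and this attention mechanism over the embeddings. The layer sizes follow the text, NUM_LOCATIONS and SEQ_LEN follow the experimental setup described later, and the remaining details are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

NUM_LOCATIONS = 4  # bedroom, living room, bathroom, kitchen
SEQ_LEN = 5        # previous locations used for each prediction
EMB_DIM = 50       # embedding size

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(NUM_LOCATIONS, EMB_DIM)(inputs)  # location embeddings x_t

h = layers.GRU(128, return_sequences=True)(x)  # h_t for every time step
u = layers.Dense(128, activation="tanh")(h)    # u_t = tanh(W_u h_t + b_u)
scores = layers.Dense(1)(u)                    # one relevance score per step
alpha = layers.Softmax(axis=1)(scores)         # alpha_t, normalized over time
x_adj = x * alpha                              # L_eadj: alpha_t scales each x_t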
Sequence Feature Extractor
After obtaining the attention-modified location embeddings $L_{eadj}$ in Equation (5), we tested two different approaches to perform the feature extraction: CNNs and LSTMs.
On the one hand, the CNN architecture was used to extract the features of the sequence. This architecture was composed of multiple 1D CNNs that processed the sequence in parallel with different kernel sizes. This was done to identify differently sized n-grams in the location sequences. The location sequences had a set length: $L_{eadj} = \{le_1, \ldots, le_{l_{le}}\}$. The size of the location embedding was represented by $d_{le}$, and the elements of the embedding were real numbers, $le_i \in \mathbb{R}^{d_{le}}$. After getting the attention-modified location embeddings, each location sequence was represented as $L_{eadj} \in \mathbb{R}^{l_{le} \times d_{le}}$. The convolution operation was:

$$O_j = f(W_j \bullet L_{eadj} + b) \qquad (6)$$

The result of the operation was $O_j \in \mathbb{R}^{l_{le}-s+1}$, where $s$ is the kernel size, and $W_j \in \mathbb{R}^{s \times d_{le}}$ and $b$ were the trained parameters. The activation function $f(\cdot)$ was a rectified linear unit, and $W_j \bullet L_{eadj}$ represents the element-wise multiplication applied over each window of the sequence.

On the other hand, the LSTM with 512 units received the location embeddings and analysed the existing temporal relations among the different locations that formed each of the sequences. Then, dropout normalization was applied to the extracted features.
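Continuing the sketch above, the multi-kernel CNN branch (and the LSTM alternative) might look as follows; the kernel sizes, filter count and dropout rate are assumptions, as the text does not report them.

# Parallel 1D convolutions extract n-grams of different lengths from x_adj.
branches = []
for kernel_size in (2, 3, 4):  # assumed n-gram lengths
    c = layers.Conv1D(64, kernel_size, activation="relu")(x_adj)  # O_j, Equation (6)
    c = layers.GlobalMaxPooling1D()(c)
    branches.append(c)
features = layers.Concatenate()(branches)
features = layers.Dropout(0.5)(features)

# LSTM alternative described in the text:
# features = layers.Dropout(0.5)(layers.LSTM(512)(x_adj))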
Location Prediction Module
The input for the location prediction module is the output of the previously described feature extractor module. To predict the most probable location, this module uses three dense layers. The first two ($f_{re}$) use rectified linear units as their activation:

$$f_{re}(x) = \max(0, W x + b) \qquad (7)$$

To predict the location, the final dense layer uses a softmax activation. The output of this module is a vector with the probabilities of each possible location.
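The prediction head below completes the sketch; the widths of the two ReLU layers are assumptions, since the text only fixes their number and activations.

out = layers.Dense(128, activation="relu")(features)  # f_re, Equation (7)
out = layers.Dense(64, activation="relu")(out)        # f_re
out = layers.Dense(NUM_LOCATIONS, activation="softmax")(out)

model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])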
Physical Location
To assess the indoor positioning system's performance, a realistic scenario was created in which the user wore a smartwatch or placed a smartphone in his or her pocket and moved around his or her home. The house measured 15 m × 7 m and had five rooms. A path was defined inside the house that led to four different rooms, each with one or more BLE beacons. The user started to follow the established path from the bedroom, as shown in Figure 3. One or more checkpoints were established for each room (indicated by the circle icon).
Three tests were carried out, each characterized by a different BLE beacon infrastructure, and each test was repeated eight times. Both monitoring devices (smartphone and smartwatch) were used at the same time in each test to compare smartphone and smartwatch performance. The results of the indoor tracking method were read three times at each checkpoint, with a ten-second interval between each detection. The number of false positives was counted for each detection (a false positive occurred when the beacon detected by the mobile application was different from the beacon associated with the checkpoint). The configuration used in each test was as follows:
Test 1
Beacon model: the battery-powered BlueBeacon Mini by BlueUp [47]. Number of beacons: one BLE beacon in each room.
Test 2
Beacon model: the AKMW-iB005N-SMA by AnkhMaway [48] with a USB power supply. Number of beacons: one BLE beacon in each room.
Test 3
Beacon models: BlueBeacon Mini and AKMW-iB005N-SMA. Number of beacons: one AKMW-iB005N-SMA in the bedroom, one AKMW-iB005N-SMA in the bathroom, two BlueBeacon Mini in the kitchen and two AKMW-iB005N-SMA in the living room.
Dataset
The dataset consists of location data of a single user gathered through a smartwatch over the course of a week. Each time a location change was detected by the smartwatch, the new location and timestamp were stored. In total, the dataset has 267 location changes and four different locations: bedroom, living room, bathroom and kitchen. For the training process, we split the dataset into a training set (80% of the dataset) and a validation set (20% of the dataset) of continuous days.
Since the model uses the n previous locations (n = 5 in this case) as input to predict the next location, the dataset was split into sequences of n locations, the next location being the one that the model has to predict. Therefore, the training set had 209 training samples, and there were 52 test samples.
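A minimal sketch of this windowing step, assuming locations are already encoded as IDs:

```python
def make_samples(locations, n=5):
    """Turn a location history into (n previous locations, next location) pairs."""
    X, y = [], []
    for i in range(len(locations) - n):
        X.append(locations[i:i + n])   # input: n consecutive locations
        y.append(locations[i + n])     # target: the location that follows
    return X, y
```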
Indoor Location System
To evaluate the performance of the smartphone and the smartwatch in our indoor location system, the positions detected by the two monitoring devices during the performed tests were compared. For each test and for each iteration, the number of false positives at the checkpoints was calculated. The percentage of error was calculated as the ratio between the number of total false positives and the number of total detections (21 total detections for each test repetition):

$$\%Error = \frac{\sum_{i=1}^{n} x_i}{n} \times 100$$

where n is the number of total detections and x_i is the number of false positives at the ith detection (this value was 0 or 1). Moreover, for each test and for each iteration, the average absolute deviation (AAD) was calculated as:

$$AAD = \frac{1}{k} \sum_{i=1}^{k} |y_i - \mu|$$

where y_i is the number of false positives at the ith iteration, μ is the average value of false positives and k is the total number of test repetitions (in our case, k = 8).

Table 1 presents the results of test 1. From this table, it is possible to notice that in this configuration the smartwatch ensured better performance (its mean percentage error was lower than that obtained with the smartphone as the monitoring device).

Table 2 presents the results of test 2. In this case, a general improvement of the indoor localization method's performance can be observed. The mean percentage error was reduced from 36.31% to 23.21% using the smartphone, and from 21.43% to 13.69% using the smartwatch. This is because the beacons used in this test had a wider transmission range (the maximum distance at which the beacon's signal can be received). In fact, though the transmission range depends on many factors (beacon installation position, operating environment and receiver performance, to name a few), at the same TxPower of +4 dBm, the theoretical maximum distance (in line-of-sight, free-space conditions) offered by the AKMW-iB005N-SMA is 130 m, which is 100 m greater than that of the BlueBeacon Mini. Additionally, in this configuration, the performance registered using the smartwatch as the monitoring device was better.

Finally, Table 3 presents the results of test 3. In this test the mean percentage error using the smartphone was higher than the mean percentage error obtained in test 2 for the same monitoring device. This was due to the high reception capacity of the smartphone antenna, which was too sensitive when several beacons were placed close together in the same environment. The mean percentage error using the smartwatch was reduced to 7.74%.

It is possible to draw some conclusions based on the results of all three tests. Several factors, such as the positions of the beacons within the room (for example, height from the ground) or the distances between the beacons, can influence indoor tracking results. For example, beacons close to each other can cause interference. However, in general, the smartwatch guaranteed better performance in indoor localization than the smartphone with the same BLE beacon infrastructure configuration.
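Both metrics translate directly into code; a sketch follows, with the per-detection false-positive flags and per-iteration counts as illustrative inputs:

```python
def percentage_error(false_positives, n=21):
    """false_positives: 0/1 flag per detection (n = 21 per test repetition)."""
    return 100.0 * sum(false_positives) / n

def average_absolute_deviation(fp_per_iteration):
    """fp_per_iteration: false-positive count of each of the k repetitions."""
    k = len(fp_per_iteration)
    mu = sum(fp_per_iteration) / k
    return sum(abs(y - mu) for y in fp_per_iteration) / k
```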
Location Prediction System
We have evaluated the proposed approach by comparing our results in terms of accuracy score with two different approaches that have been used in the location prediction literature as baselines: nearest locations (NL), where the nearest neighbor to the user's current location is selected [49], and a hidden Markov model, which characterizes the movement patterns [50]. As can be seen in Table 4, our approaches outperformed the proposed baselines by a wide margin.
Moreover, to give more insight into the performance of the proposed architectures, we have evaluated the proposed location prediction system using the top-k accuracy score. This score measures how many times the ground truth (or correct label) is among the top k predicted labels provided by the fully connected layer with a softmax activation function. Let l_i be the correct location, T_i^k the ordered list of the top k predicted locations, N the number of test samples and b the scoring function with two possible outputs, 0 and 1. The top-k accuracy is formulated in Equation (12):

$$Acc@k = \frac{1}{N} \sum_{i=1}^{N} b(l_i, T_i^k) \tag{12}$$

Therefore, the value of the scoring function is 1 when the label of the correct location exists in the ordered list of the top k predicted locations. On the contrary, if the label is not in the ordered list, the function returns 0. For this experimentation, we report the accuracy scores with k = 1, k = 2 and k = 3, since we had a total of four possible outcomes. Table 5 presents the results. The best results for accuracy at 1 and 3 were obtained in experiment L2, where the LSTM with the previously mentioned attention mechanism was used. However, the best accuracy at 2 was obtained in M2, where multi-scale convolutional neural networks were used with the same attention mechanism. In both cases, the best results were achieved using the embedding-level attention mechanism introduced in [38]. However, in this dataset the best results overall were achieved using LSTMs.
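Equation (12) is straightforward to compute; a sketch, assuming the model outputs one probability per candidate location:

```python
def top_k_accuracy(probs, labels, k):
    """probs: list of per-class probability vectors; labels: correct indices."""
    hits = 0
    for p, correct in zip(probs, labels):
        top_k = sorted(range(len(p)), key=lambda c: p[c], reverse=True)[:k]
        hits += 1 if correct in top_k else 0   # scoring function b
    return hits / len(labels)
```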
Conclusions
In this paper we present an indoor location system based on BLE and its evaluation using a smartphone and a smartwatch as monitoring devices. On top of that system, we built a behavior prediction system based on locations and validated two different approaches.
Our system provides a holistic approach to indoor location, supplying both the necessary infrastructure and the intelligent framework on top of it.
The system's performance in terms of mean percentage error was assessed and analyzed. A distinct BLE beacon infrastructure was considered for each test by altering the quantity and models of BLE beacons in each room of the considered indoor environment. The best results were achieved using the smartwatch instead of the smartphone. Furthermore, a position prediction system based on neural embeddings to represent the locations of a house was introduced, along with an attention-based mechanism that modifies those embeddings rather than being applied to the hidden states of the neural network architecture. The location prediction system's accuracy has also been assessed, compared with other approaches and discussed. From these experiments, two main conclusions can be drawn: first, the proposed attention mechanism applied to the embeddings improves the architecture's performance; and second, despite the limited number of training samples, the presented deep neural network architectures performed better than shallow machine learning algorithms such as hidden Markov models.
The system will be expanded in the future by including RFID components, such as wearable RFID devices and RFID tags, to capture data that will be processed by an activity recognition module. From location data and RFID data, this module will be able to deduce user activities. Additionally, as a future extension of the presented work, it would be interesting to use algorithms such as the Kalman filter to improve the results of our indoor location system. Regarding the location prediction system, using the transformers introduced by Vaswani et al. [51] could improve its performance.
"year": 2021,
"sha1": "dd9128d440de376937ac83c28daaf3e08749fed6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/14/4839/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe770fbd20bfab48c0f731b75d5e5ab0e689bc84",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Probabilistic Model Checking: One Step Forward in Wireless Sensor Networks Simulation
A novel collision resolution algorithm for wireless sensor networks is formally analysed via probabilistic model checking. The algorithm, called 2CS-WSN, is specifically designed to be used during the contention phase of IEEE 802.15.4. Discrete time Markov chains (DTMCs) have been proposed as the modelling formalism, and the well-known probabilistic symbolic model checker PRISM is used to check some correctness properties and different operating modes and, furthermore, to collect some performance measures. Thus, all the benefits of formal verification and simulation are gathered. The correctness properties, as well as practical and relevant real-world scenarios, have been agreed upon with the algorithm designers.
Introduction
The joint efforts of the ZigBee Alliance and the IEEE 802.15.4 Task Group have produced a set of protocols that ensure the functionality of wireless personal area networks (WPANs). The IEEE 802.15.4 standard [1] defines the specification of the physical and media access control (MAC) layers for low-rate wireless personal area networks (LR-WPANs). These networks are convenient in scenarios where the availability of resources is limited. IEEE 802.15.4 supports star and peer-to-peer network topologies and uses carrier sense multiple access with collision avoidance (CSMA/CA) as the medium access mechanism. Moreover, it provides two operating modes that may be selected by a central node: non-beacon-enabled and beacon-enabled. They are used in nonslotted CSMA/CA and slotted CSMA/CA, respectively. When a device wishes to transfer data to a coordinator in a beacon-enabled network, it first listens for the network beacon. When the beacon is found, the device synchronises to the superframe structure. At the appropriate point, the device transmits its data frame, using slotted CSMA/CA, to the coordinator. The coordinator acknowledges the successful reception of the data by transmitting an optional acknowledgement frame. When a device wishes to transfer data in a non-beacon-enabled network, it simply transmits its data frame, using unslotted CSMA/CA, to the coordinator. The coordinator acknowledges the successful reception of the data by transmitting an optional acknowledgement frame. The performance of the slotted CSMA/CA algorithm has been analysed previously (e.g., [2,3]), concluding that the binary exponential backoff algorithm is not flexible enough to be used in large-scale sensor networks.
We focus here on the 2CS-WSN (two cells sorted wireless sensor network) algorithm [4], a simple, fast and effective collision resolution method specifically designed to be used during the contention phase of IEEE 802.15.4. It is intended to be used as an alternative to CSMA/CA. As 2CS-WSN uses probabilities and sorted transmissions for quick collision resolution, there is a clear need to inspect how those parameters can be tuned so as to achieve performance improvements as well as to detect possible inconsistencies or issues (e.g., deadlocks).
From our experience, we advocate the use of both simulation and formal verification techniques to analyse protocols or algorithms, since there is an eternal debate about the appropriateness of using simulation or formal verification in this and other areas. On the one hand, simulation-based approaches study the behaviour of the system in a non-exhaustive way. On the other hand, formal verification is based on a systematic and exhaustive analysis of all the possible paths in the system, trying to find possible inconsistencies and/or errors not evaluated in the simulation. Obviously, each of them has its advantages and disadvantages, and it is out of the scope of this paper to summarise them; but, from our experience, it is better to use formal verification up until the problem of "state explosion" arises and, then, use simulation to obtain results in bigger scenarios.
Here, we use probabilistic model checking (a formal method for the verification of probabilistic systems) since the use of probabilities can influence the behaviour of 2CS-WSN. In particular, we describe the 2CS-WSN algorithm in terms of discrete time Markov chains (DTMCs) and, using the well-known probabilistic symbolic model checker PRISM [5], we verify some correctness properties, compare different operating modes of the algorithm, and analyse the performance and accuracy of different model abstractions. It is clear that the resolution of a collision in minimum time is a primary requirement in these kinds of algorithms, and therefore our performance analysis is mainly focused on temporal aspects. By adding the time costs incurred during the execution of the system, we can evaluate the expected time to resolve the collision in different scenarios. In addition to this, we are able to study properties of great interest to designers, such as "the probability that a certain number of nodes have successfully managed to transmit within a certain time" or "the probability that all nodes have transmitted before a certain time." Analysing such properties for a range of parameter values (e.g., retransmission probability) is often key to identifying interesting or anomalous behaviour, and, probably most importantly, the designer can determine whether the algorithm fulfils the timing requirements.
The rest of the paper is organised as follows. As usual, we first introduce some related works and compare it with our work. We continue by presenting some background required for a better understanding of this work. Thus, we describe the algorithm under study in Section 3, and some formal background in Section 4. After that, we show the PRISM model for 2CS-WSN in Section 5, and we study it in Section 6. Finally, we summarise some conclusions and discuss possible future work.
Related Works
Recently, the analysis of WSNs has attracted a lot of attention and, therefore, there are many ways to present these works. We divide such works into two main categories (simulation-based and formal verification-based) since both techniques are used here.
To begin with, we present those works that are based on simulation. It is worthwhile to mention here that some of them use simulation to demonstrate the correctness of their analytical approach; that is, an analytical model of the protocol/algorithm is introduced and, then, some experiments are conducted in a well-known (or ad hoc) simulator to validate the correctness of the analytical model. For instance, Bianchi defines in [6] an analytical evaluation of the saturation throughput of the 802.11 distributed coordination function. As in our work, the author uses a Markov chain to model the behaviour of a single node and assumes an ideal channel. In [7], Ye et al. introduce S-MAC, a medium access control protocol designed for wireless sensor networks, and validate it in a real testbed. Faridi et al. [8] characterise the key metrics in a beacon-enabled IEEE 802.15.4 system with retransmissions, and, in [9], Lee et al. propose an additional carrier sensing algorithm based on the IEEE 802.15.4 acknowledgement mode to detect the channel condition. Then, a Markov chain model is depicted, analysing the throughput of the algorithm by means of an ad hoc experimentation. Finally, Hoesel and Havinga [10] develop a novel lightweight medium access protocol (LMAC) for wireless sensor networks. In addition, they validate the algorithm in the simulation package OMNet++ enriched with a framework for WSNs.
On the other hand, one can use formal verification to verify the correctness of a system. A well-known problem when using formal verification is that it becomes intractable when the possible paths in the model are infinite. For example, in [11], LMAC [10] is modelled and analysed by using timed automata and the popular model checker UPPAAL [12]. There, the authors are able to analyse networks with up to 5 nodes, whereas we are able to analyse bigger networks (up to 40 nodes). In [13], LMAC is used as a case study to present a new version of UPPAAL, SMC-UPPAAL. The novelty here is that they apply statistical model checking to LMAC. Roughly speaking, the substantial difference between simulation and statistical model checking is that the latter obtains the probability that the system behaves in a given manner. Again, small networks (up to 10 nodes) are studied. Next, we cite two works closely related to the present one. On the one hand, Duflot et al. [14] evaluate CSMA/CD by using probabilistic timed automata and two well-known tools, PRISM and APMC [15]. With PRISM, they study the system using probabilistic model checking, whereas with APMC they approximate other properties. On the other hand, Kwiatkowska et al. [16] pose the automatic verification of a medium access control protocol of the IEEE 802.11 WLAN standard using probabilistic model checking. They use probabilistic timed automata as the modelling formalism and PRISM as the model checker. Finally, let us note that we have previous experience analysing wireless algorithms. For instance, we studied a recent role-based routing algorithm (NORA) for WSNs in [17], and its fuzzy-logic based version in [18]. Moreover, we would like to note that our paper is, to the best of our knowledge, the first work that succeeds in applying probabilistic model checking to networks of up to 40 nodes in conflict, since the related works presented in this section only manage to model networks with at most 10 nodes.
2CS-WSN: Random Access with Stack Protocols
Before we begin, let us remark that the 2CS-WSN algorithm is partially derived from the definition of the stack algorithm described in [19]. In the following, we will refer to it as the 2C algorithm. It is a fair, efficient and simple algorithm to resolve the possible collision when sharing the same transmission channel, and it is called a stack protocol because its time evolution can be easily visualised as a group of stations moving up and down in a two-cell stack; that is, stations may be either transmitting or waiting, and these two states can be represented using only two cells in a stack. The transmission cell (TC) represents the group of transmitting stations and the waiting cell (WC) the group of stations that have deferred transmission. Although the 2C algorithm has many desirable features, it may incur significant access delays when a large number of stations contend for the channel since, with only two cells, it takes a long time to randomly distribute the stations. Thus, the 2C algorithm was improved in [4], leading to the definition of the 2CS-WSN algorithm, where wireless communication and several cells are considered. Moreover, there are two main features that 2C and 2CS-WSN share. First, collision resolution is performed by using probabilities and, second, time is slotted, allowing stations to transmit only at the beginning of a time slot. A time slot is normally considered as the time a station needs to transmit a packet and receive a feedback message from a central station. The feedback message is binary; that is, it is a C (collision) message when a collision was detected and an NC (no collision) message otherwise. If only one station transmitted, the corresponding packet will be successfully transmitted. On the other hand, if there were several transmission attempts in the same slot, there will be a collision and its resolution shall begin in the following slot. The collision resolution ends when all colliding stations have successfully transmitted. This time interval is known as a collision resolution interval (CRI). A station that generates a new transmission request when a CRI is in progress has to wait until the current CRI ends before attempting to access the channel. Thus, 2C (and 2CS-WSN) are able to provide some fairness in access to the channel since all the participants will eventually transmit. As commented previously, 2CS-WSN is designed to be used with wireless communications, although the original description of the 2C algorithm is not tied to any specific transmission medium. Therefore, it has to be adapted to the particularities of the wireless medium. For instance, in 2C, it is assumed that there is a central station that continuously monitors the channel and provides feedback messages. However, in self-configuring wireless ad hoc networks, this assumption is unrealistic. In this case, the participants have to assume this role by monitoring the transmission medium and reacting accordingly. This leads to a second issue: how to detect a collision. In wired networks it is rather easy to detect a collision, but in wireless networks this is not a trivial matter. In 2CS-WSN, instead of detecting a collision explicitly, network nodes infer that a collision has happened. A wireless node can infer that its transmission has collided if the reply to its request does not arrive. In this case, the station has to randomly choose whether to retransmit (i.e., to remain in TC) or to join the waiting group (WC). We model this fact by using probabilities.
Let us denote by p_TC the probability of remaining in TC and by p_WC the probability of moving to the first waiting group (with the obvious condition that p_WC = 1 - p_TC). We suppose here that all nodes are provided with an unbiased coin to make the decision; that is, they stay in TC with probability 0.5 or they move to WC with the same probability. Figure 1 shows the flow chart of the 2CS-WSN algorithm.
For instance, we show in Figure 2 how 2CS-WSN behaves in a five-node network, where all nodes want to transmit in the same slot and a collision occurs. To solve this collision, each node uses its unbiased coin to decide its following step. For instance, nodes 1 and 5 decide to enter the waiting group WC_1, whereas nodes 2, 3, and 4 decide to remain in the transmission group TC. Therefore, in the next slot, nodes 2, 3, and 4 attempt to transmit and collide again. Then, let us suppose that nodes 3 and 4 decide to enter the waiting group WC_1.

Figure 2: Collision resolution example using the 2CS-WSN algorithm with a five-node network.
Nodes 1 and 5 move from WC 1 to WC 2 . At this time only node 2 is in TC thus achieving a successful transmission. This successful transmission causes nodes in WC 1 (i.e., nodes 3 and 4) to move to TC and nodes in WC 2 (i.e., nodes 5 and 1) to move to WC 1 . This process is repeated until all nodes that participated in the initial collision can successfully transmit.
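To make these dynamics concrete, the following sketch simulates one collision resolution interval under the rules just described. The function and variable names are ours, and the last waiting cell absorbing arrivals follows the behaviour stated above.

```python
import random

def cri_length(n_nodes=5, n_cells=2, p_tc=0.5):
    """Simulate one CRI of 2CS-WSN; returns its length in slots."""
    tc, wc = n_nodes, [0] * n_cells
    slots = 0
    while tc + sum(wc) > 0:
        slots += 1
        if tc >= 2:                                  # collision: flip coins
            stay = sum(random.random() < p_tc for _ in range(tc))
            shifted = [tc - stay] + wc[:-1]          # waiting groups move down
            shifted[-1] += wc[-1]                    # last cell keeps its nodes
            tc, wc = stay, shifted
        else:                                        # success (tc == 1) or idle
            tc, wc = wc[0], wc[1:] + [0]             # waiting groups move up
    return slots
```

Averaging cri_length over many runs approximates the expected collision resolution time in slots; multiplying by the slot duration yields milliseconds.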
Formal Background
Now, we briefly introduce some formal background. We start by defining discrete time Markov chains (DTMCs), as they have been used as the modelling formalism. Next, we briefly introduce probabilistic model checking and PRISM.
Discrete Time Markov Chain.
Basically, a Markov process is a special class of stochastic process that satisfies the Markov property (or memorylessness); that is, given the state of the process at time t, the future behaviour after t is independent of the behaviour before t. When a discrete state (sample) space is considered, they are called Markov chains, and if one considers only discrete time steps, they are called discrete time Markov chains (DTMCs). Moreover, if the conditional probability is invariant with respect to the time origin, then the DTMC is said to be time-homogeneous. We only consider time-homogeneous DTMCs in this paper. For more details see [20].
(ii) A DTMC is said to be time-homogeneous if, for all n = 0, 1, ... and for all i, j in S,

$$P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i) = p_{ij} \tag{2}$$

In this way, p_ij represents the probability that the process will, when in state i, next make a transition into state j; that is, p_ij is the probability of moving from state i to state j in one step.
(iii) The matrix of transition probabilities (stochastic matrix) of a time-homogeneous DTMC is defined as

$$P = (p_{ij})_{i,j \in S}$$

where each p_ij is the probability of moving from state i to state j. Since the probabilities are non-negative numbers and the process must take a transition into some state, we have that

$$p_{ij} \geq 0 \quad \text{and} \quad \sum_{j \in S} p_{ij} = 1 \quad \text{for all } i \in S$$

Moreover, according to the Chapman-Kolmogorov equations, the probability of reaching state j from state i in n steps, denoted by p_ij^(n), is the element (i, j) of the matrix P^n.
The behaviour of a DTMC is fully probabilistic; thus we can define a probability space over infinite paths through the model and it is possible to compute the probability of a particular event.
A DTMC can also be defined as a triple (S, s_0, P), where S is the set of states, s_0 is the initial state, and P is the stochastic matrix. A DTMC can also be represented by a state transition diagram, which is a directed graph where each node is a state (the number of nodes equals the number of states if S is finite), and there is an arc from i to j if and only if p_ij > 0. In this way, a state j is accessible from s_0 if there is a walk in the graph from s_0 to j, that is, an ordered string of nodes (i_0, i_1, ..., i_k, j), k ≥ 0, with i_0 = s_0, in which there is a directed arc from each i_m to i_{m+1} and from i_k to j.
Probabilistic Model Checking. Probabilistic model checking is a formal verification technique for the automatic analysis of systems that exhibit stochastic behaviour. It provides the likelihood of the occurrence of certain events. Conventional model checkers take as input a model of the system, represented in some formalism, and its specification, usually a formula in some temporal logic. After evaluating the formula in the model, one gets as output "yes" or "no," indicating whether or not the model satisfies it. Probabilistic model checking also involves reachability analysis in the state space and the calculation of probabilities through appropriate numerical or analytical methods. The algorithms for probabilistic model checking are usually derived from conventional model checking, numerical linear algebra, and standard techniques for Markov chains. In this way, probabilistic model checking can be used to ascertain not only correctness, but also quantitative measures such as performance and reliability.
Probabilistic model checking can be applied to a range of probabilistic models, typically variants of Markov chains. The specification language is a probabilistic temporal logic, capable of expressing temporal relationships between events and likelihoods of events. Probabilistic temporal logics are usually obtained from standard temporal logics by replacing the standard path quantifiers with a probabilistic quantifier. In this paper we use probabilistic computation tree logic (PCTL) [21] as the probabilistic temporal logic, which is based on the well-known branching-time computation tree logic (CTL) [22]. It allows us to verify properties such as whether the model "finishes properly" (all nodes have successfully transmitted) and to reason about quantitative measures such as "what is the probability that a certain number of nodes have successfully managed to transmit within a certain time," "what is the probability that all nodes have transmitted before a certain bound," "the expected time until all nodes have transmitted," and so on.

PRISM. PRISM [5] is an open source probabilistic model checker developed initially at the University of Birmingham and currently maintained and extended at the University of Oxford. PRISM supports several types of probabilistic models such as discrete time Markov chains (DTMCs), continuous time Markov chains (CTMCs), Markov decision processes (MDPs), probabilistic automata (PAs), and probabilistic timed automata (PTAs), also considering extensions of these models with costs and rewards.
Models are described using the PRISM modelling language, a state-based language based on reactive modules. The fundamental components of the PRISM language are modules and variables. A model is a set of modules which can interact with each other. Typically, a probabilistic model is constructed in PRISM as the parallel composition of a set of modules. In every state of the model, there is a set of commands (belonging to any of the modules) which are enabled, that is, whose guards are satisfied in that state. The choice between which command is performed (i.e., the scheduling) depends on the model type. PRISM includes also support for the specification and analysis of properties based on rewards (and costs). Thus, it is possible to assign different rewards to different states or transitions, depending on the values of model variables in each one.
PRISM also provides support for the automatic analysis of a wide range of quantitative properties. The property specification languages provided by PRISM are PCTL, CSL, LTL, and PCTL*, as well as extensions for quantitative specifications and costs/rewards. One of the key features of PRISM is its symbolic implementation technique. It uses data structures based on binary decision diagrams (BDDs), which allow compact representation and efficient manipulation of extremely large probabilistic models by exploiting structure and regularity derived from their high-level description. As a proof of maturity, PRISM has been used to analyse systems from many different application domains, including communication and multimedia protocols (see the PRISM website [23] for multiple examples).
Modelling 2CS-WSN in PRISM
A model of 2CS-WSN in terms of DTMCs has been developed using the PRISM language. In Section 3, we showed how 2CS-WSN behaves in a network with five colliding nodes, and we will leverage this example to show how the algorithm has been modelled in PRISM. We recall that this does not mean that the network has only 5 nodes, but that, among the nodes in the network, there are 5 trying to transmit in the same slot.
Box 1 shows the PRISM model for this scenario. A state consists of a triple (TC, WC_1, WC_2), where TC is the number of nodes in collision and WC_1, WC_2 are the numbers of nodes waiting to retransmit in each waiting cell. The initial state of the model is represented by the triple (5, 0, 0), meaning that there are five nodes in TC and zero nodes in each of the waiting cells. On the other hand, Figure 3 shows the state transition diagram associated with the DTMC described in Box 1. It consists of 42 states. Each state is represented as a rectangle with an identifier and a triple (TC, WC_1, WC_2). Each directed arc from state i to state j is labelled with the probability p_ij, which indicates the probability of moving from state i to state j according to the 2CS-WSN algorithm.
The initial state is set to node number 41, labelled with the tuple (5, 0, 0). Node number 0 is the final state and is labelled with (0, 0, 0), that is, no nodes either in TC or in any WC_i, meaning that the five nodes have been able to transmit successfully and the initial collision has been resolved.
Carefully analysing the state transition diagram, it can be appreciated that the DTMC generates all possible alternatives to solve this collision. In particular, in Figure 3, the trace followed in Example 2 has been pointed out with dashed lines.
Hence, for instance, the probability of moving in one step from the initial node (5, 0, 0) (node number 41 in Figure 3) to the node (3, 2, 0) (node number 37 in Figure 3) is obtained by considering that 3 of the 5 nodes in the transmission cell remain in it and the other 2 move to the first waiting cell. Let the retransmission probability of each node (i.e., of remaining in TC) be 0.5; then the probability of this move is computed as $\binom{5}{2} \cdot 0.5^5 = 0.3125$. Notice that the model takes into account any 2 nodes taken from TC, and not only nodes 1 and 5, which were chosen in the example of Figure 2.
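This binomial computation generalises to any split of the transmission cell; a small sketch (the function name is ours):

```python
from math import comb

def split_probability(tc, stay, p_tc=0.5):
    """P(exactly `stay` of `tc` colliding nodes remain in TC)."""
    return comb(tc, stay) * p_tc ** stay * (1 - p_tc) ** (tc - stay)

# Transition (5, 0, 0) -> (3, 2, 0): three of five nodes stay in TC.
assert abs(split_probability(5, 3) - 0.3125) < 1e-12
```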
Once the model has been explained with a specific example, this model can be generalised as follows.
Let N be the number of nodes in collision and m the number of waiting cells. A state is defined as a tuple (TC, WC_1, ..., WC_m). Since there are TC + 1 possibilities for the number of nodes that remain in TC, the behaviour of the variable TC follows a binomial distribution B(TC; p), where p = p_TC. Whenever TC ≥ 2 (a collision), nodes in WC_i move to WC_{i+1} for i ∈ [1, ..., m - 1] in the next step; nodes in the last waiting cell make no movement. Nodes in TC choose to retransmit with probability p_TC or to move to WC_1 with probability 1 - p_TC. Let r be the number of nodes that remain in TC and s the number of nodes that move to WC_1, where r + s = TC. The probability that r nodes remain in TC is defined as $\binom{TC}{r} \cdot (p_{TC})^{r} (1 - p_{TC})^{s}$ with r ∈ [0, ..., TC] and s = TC - r. If p_TC = 0.5, the probability is $\binom{TC}{r} \cdot (0.5)^{TC}$. Whenever TC ≤ 1 (a successful transmission or an idle slot), nodes in WC_i move to WC_{i-1} for i ∈ [2, ..., m], nodes in WC_1 move to TC, and the last waiting cell must be empty. Therefore, in the collision case the next state is defined as follows:

$$(TC, WC_1, \ldots, WC_m) \rightarrow (r, s, WC_1, \ldots, WC_{m-2}, WC_{m-1} + WC_m)$$

where r is the number of nodes that remain in TC and the last cell accumulates WC_{m-1} + WC_m.
Verification and Performance Evaluation
To evaluate the DTMC model in PRISM, the following parameters have been considered. As commented previously, we have used the PCTL logic to express significant properties. First of all, we want to check whether the model eventually finishes. To this end, we need to know how many nodes have successfully transmitted (finish). This is computed by using the following formula, where m is the number of waiting cells:

finish = nodes - (TC + WC_1 + ... + WC_m)    (6)

This formula computes the number of finished nodes. Thus, nodes is the number of nodes initially in collision (we consider 40 in this scenario), and TC, WC_1, ..., WC_{m-1} and WC_m represent the number of nodes in each cell.
By using this formula, we can evaluate a reachability property φ1 stating that all nodes eventually finish (finish = nodes). If this property holds, we can ensure that the algorithm eventually terminates successfully. The result of the evaluation was true and, therefore, we can conclude that the model eventually finishes. This property turns out to be of great interest for some scenarios (emergencies, control sensors, etc.) as it ensures that all the nodes can eventually transmit, in contrast to CSMA-based protocols, where a backoff period is used for channel contention and some threshold decides whether the transmission is rejected. Such protocols cannot guarantee channel access for all nodes [24]. Once we know that the algorithm eventually finishes, we turn our attention to collecting performance results about collision resolution time. To this end, we first extend our model with rewards so that a real value (reward) is associated with certain transitions of the model. In this case, and to be in accordance with [4], we defined a time slot of 1.6 ms; that is, each transition in our model has an associated reward of 1.6. In PRISM, we can analyse properties about the expected values of these rewards. This is achieved using the R operator. In particular, we use the "cumulative reward" property, which associates a reward with each path of the model (not only with states/transitions). In this case, we evaluate the expected conflict resolution time using property φ2. We then ran PRISM experiments on φ2, with verification (we compute the whole state space), varying the probability of transmission (p_TC) and/or the number of waiting cells according to Table 1. The field "Step value" represents how much we vary the parameter in each experiment. The goal of this experiment is to determine the best configuration for the algorithm since, unfortunately, the designers of 2CS-WSN (and 2C) never studied how these parameters could affect its performance. Figure 4(b) (or Table 2) shows that the best choice in order to minimise the collision resolution time is p_TC = 0.5 as the retransmission probability and 5 as the number of waiting cells. Surprisingly, the designers of 2C and 2CS-WSN chose the best option without studying the possible configurations. In contrast, in 2CS-WSN, they assumed that the number of waiting cells is unlimited (this assumption is not realistic in the real world since resources are limited). Moreover, the results obtained in [19] for the 2C algorithm can be improved by considering 4 waiting cells instead of 2 (see Table 2).
To validate the conformity of our proposed probabilistic model with the model proposed in [4], we compare the results obtained here with PRISM against the results obtained in the simulator CASTALIA. In Figure 4(a) (or Table 3, which reports the differences between PRISM and CASTALIA results in ms) we can observe that the difference between the collision resolution time obtained from 1000 simulations in CASTALIA and the expected time obtained by using PRISM is, in the worst case, about 5 ms, and normally less than 2 ms. Therefore, we can conclude undoubtedly that our model fits adequately. Furthermore, we can also use time-bounded probabilistic reachability properties. The main difference with other kinds of properties is that one can associate a strict time deadline with relevant events. As can be observed in Figure 4(b), the four best retransmission probabilities are p_TC = 0.3, 0.4, 0.5 and 0.6. Therefore, we ran PRISM experiments on φ3 (using verification), varying the retransmission probability parameter from 0.3 to 0.6 and the deadline time (D) from 0 to 400 ms. Figure 5 (or Table 4) shows the probability that all nodes have successfully transmitted within the deadline D. Observe that we have chosen 400 ms since after this time all the nodes have successfully transmitted. For example, the probability of solving all the collisions in less than 128 ms is nearly 0, and practically 1 considering 320 ms or more. Besides, if we fix D = 240 ms, we manage to solve the collision with at least a probability of 0.98 (p_TC = 0.4 or p_TC = 0.5) and with at least a probability of 0.9 (p_TC = 0.3). Observe also that this probability is almost 0.94 considering p_TC = 0.6. Finally, note that this study was not conducted in [4,19].
Finally, considering the best option (p_TC = 0.5 and 5 waiting cells), we can ask about the probability that n nodes have transmitted within D ms. To answer this question we ran PRISM experiments again, using verification, on φ4, where we vary the number of nodes (n) from 5 to 40 and the deadline (D) from 0 to 300 ms. Figure 6 (or Table 5) shows the results. For instance, considering 160 ms, the probability that at least 20 nodes have transmitted within 160 ms is more than 0.99, the probability that 25 nodes have transmitted within 160 ms is less than 0.9, and the probability that 35 nodes have transmitted within these 160 ms is less than 0.1. This kind of question could help in critical situations, where a fixed number of nodes must transmit to inform, for instance, about an emergency.
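Outside PRISM, such time-bounded probabilities can also be approximated by Monte Carlo simulation; a sketch, reusing the cri_length function from the earlier 2CS-WSN example (the run count is arbitrary):

```python
def p_done_within(deadline_ms, n_nodes=40, n_cells=5, p_tc=0.5, runs=10000):
    """Estimate P(collision resolved within deadline_ms), slot = 1.6 ms."""
    ok = sum(cri_length(n_nodes, n_cells, p_tc) * 1.6 <= deadline_ms
             for _ in range(runs))
    return ok / runs
```

Unlike PRISM's exhaustive verification, this only estimates the probability, but it is a quick sanity check for larger scenarios.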
Conclusions
This paper presents the formal modelling and initial validation of a novel collision resolution algorithm for wireless sensor networks. 2CS-WSN is specifically designed to be used during the contention phase of IEEE 802.15.4. In our study, we try to find any error or inconsistency present in the specification of the algorithm, and we have evaluated some properties for nontrivial, practical and relevant scenarios.
In detail, we present the specification of the 2CS-WSN algorithm in terms of DTMCs and perform probabilistic model checking by using the well-known tool PRISM. We have used PCTL to formulate relevant properties. For instance, we have checked the absence of deadlock and the conformity with the former implementation in CASTALIA. We have focused here on temporal parameters since they are of great interest to the algorithm designers. In particular, we have studied the expected collision resolution time and properties that cannot be evaluated with general simulators, such as "the probability that a certain number of nodes have successfully transmitted within a certain time" or "the probability that all nodes have transmitted before a certain time." Furthermore, we have found the best configuration (p_TC = 0.5 and m = 5) for the algorithm. Now, our next step is aimed at finding possible improvements to the algorithm and, thus, we are collaborating with the designers on future versions of 2CS-WSN. For instance, we want to evaluate the effect of using adaptive probabilities (each node can use its own transmission probabilities regarding some parameter we are studying, or we can use probabilities to move up in the stack instead of using only when
"year": 2015,
"sha1": "fc99e228252a469703df72333a4710b7b2ee28b2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2015/285396",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "16c15d2e6d5faf15ca44cf58d717d578ea896a76",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Heterosis for Yield and its Components in Okra (Abelmoschus esculentus L. Moench)
The study of heterosis would help in the selection of heterotic crosses for commercial exploitation of F1 hybrids in okra (Abelmoschus esculentus (L.) Moench). Forty-five F1s were developed by crossing 10 elite lines of okra in a half diallel fashion during summer 2016. All 45 F1s, along with their 10 parents and one standard control (Nunhems hybrid Shakti), were evaluated in a randomized complete block design with three replicates during late kharif (July to October) 2016 at ICAR-Krishi Vigyan Kendra, Babbur Farm, Hiriyur, Chitradurga, Karnataka, India, for heterosis of yield and its components. Significance of mean squares due to genotypes revealed the presence of considerable genetic variability among the material studied for almost all yield and yield attributes. The overall maximum positive significant heterosis for total yield per plant was observed in the cross IIHR-875 x IIHR-478: 112.89% relative heterosis, 83.78% heterobeltiosis and 168.55% standard heterosis. Negatively heterotic crosses such as IIHR-562 x IIHR-444 for days to 50% flowering (-8.70%) and IIHR-567 x IIHR-107 for fruiting nodes (-9.03%) are important for exploiting heterosis for earliness in okra. Out of the 45 F1s, 44 crosses exhibited significant standard heterosis for total yield per plant, the exception being the cross IIHR-604 x IIHR-107 (-0.13%). The F1 hybrid IIHR-875 x IIHR-478, with its high yield potential, is a candidate for commercial cultivation after further evaluation for the late kharif season of Karnataka.

Introduction

Okra originated in tropical Africa and is an introduced vegetable crop in India. Although it is a multipurpose and multifarious crop, it is extensively grown for its tender pods, which are used as a very popular, tasty and gelatinous vegetable. Okra is among the most important vegetable crops in India; of the vegetables grown in India, it occupies fifth position, next to tomato, in area. A yield plateau seems to have been reached in open-pollinated varieties of okra; however, yield could be improved through hybridization. Marked heterosis of 38 to 71 percent has been reported in okra for yield and its components (Laxmiprasanna, 1996; Singh et al., 1975). Heterosis breeding has been the most successful approach to increasing productivity in cross-pollinated vegetable crops. Okra is an often cross-pollinated vegetable crop in which the presence of heterosis was demonstrated for the first time by Vijayaraghavan and Warrier (1946). Since then, heterosis for yield and its components has been extensively studied. Selection of parents on the basis of phenotypic performance alone is not a sound procedure. It is therefore essential that parents be chosen on the basis of their combining ability. The half diallel mating design has been used in the present study to assess the genetic potentialities of the parents in hybrid combination through systematic studies of general and specific combining abilities, which are due to additive and non-additive gene effects respectively (Griffing, 1956; Kempthorne, 1957). Several research workers have reported the occurrence of considerable heterosis for fruit yield and its various components (Venkataramani, 1952; Joshi et al., 1958; Partap and Dhankar, 1980; Elangovan et al., 1981; Partap et al., 1981; Mehta et al., 2007; Weerasekara et al., 2007; Jindal et al., 2009). The ease of emasculation and the very high percentage of fruit set indicate the possibilities of exploiting hybrid vigour in okra. The presence of sufficient hybrid vigour is an important prerequisite for the successful production of hybrid varieties. Therefore, heterotic studies can provide the basis for the exploitation of valuable hybrid combinations in future breeding programmes and their commercial utilization. Variation in most agronomical and horticultural traits is available in the germplasm of cultivated okra (Dhall et al., 2003; Singh et al., 2006; Dakahe et al., 2007; Mohapatra et al., 2007; Reddy, 2010).
The initial selection of parents to be involved in any effective hybridization programme depends upon the nature and magnitude of relative heterosis (heterosis over mid parent), heterobeltiosis (heterosis over better parent), and economic heterosis (heterosis over check) present in genetic stocks. Heterosis breeding based on the identification of the parents and their cross combinations is capable of producing the highest level of transgressive segregates (Falconer, 1960). The choice of the best parental matings is crucial for the development of superior hybrids and because combinations of hybrids grow exponentially with the potential number of parents to be used, this is one of the most expensive and time-consuming steps in hybrid development programmes (Agrawal, 1998). The present investigation aims primarily to study the direction and extent of relative heterosis, heterobeltiosis and economic heterosis for yield and its associated traits in 10 × 10 half diallel crosses for utilization of existing genetic diversity to develop heterotic F 1 hybrids in okra.
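For reference, the three heterosis measures used here are simple percentage deviations of the F1 mean from the mid parent, better parent and standard check. A sketch follows (for traits where higher values are desirable; for traits such as days to 50% flowering, the better parent is the lower-valued one):

```python
def heterosis(f1, p1, p2, check):
    """Percent heterosis of an F1 over mid parent, better parent and check."""
    mid_parent = (p1 + p2) / 2
    better_parent = max(p1, p2)   # use min() for traits where lower is better
    return {
        "relative": 100 * (f1 - mid_parent) / mid_parent,
        "heterobeltiosis": 100 * (f1 - better_parent) / better_parent,
        "standard": 100 * (f1 - check) / check,
    }
```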
Materials and Methods
Ten elite and nearly homozygous lines of okra, namely IIHR-875, IIHR-478, IIHR-604, IIHR-567, IIHR-182, IIHR-595, IIHR-562, IIHR-347, IIHR-444 and IIHR-107, selected from the germplasm collected by the ICAR-Indian Institute of Horticultural Research, Bengaluru, Karnataka, were crossed in n(n - 1)/2 possible combinations during summer 2016 to generate the breeding material. The resulting 45 one-way crosses, along with their 10 counterpart parental lines and one standard control (Nunhems Hybrid Shakti), were evaluated in a randomized complete block design with three replicates. The experiment was conducted at the Experimental Farm, ICAR-Krishi Vigyan Kendra, Babbur Farm, Hiriyur, Chitradurga, Karnataka, during late kharif (July-October) 2016. Nutrition, irrigation, weed control and other cultural practices followed the standard package of practices of UHS, Bagalkot. Biometric data were recorded for 12 quantitative characters. Observations on the characters plant height (cm), number of branches per plant, internodal length (cm), stem girth (mm), first fruit producing node, fruit length (cm), fruit diameter (mm), number of ridges per fruit and average fruit weight (g) were recorded on five randomly selected competitive plants, while observations on days to 50% flowering, total number of fruits per plant and total yield per plant (g) were recorded on a whole-plot basis in each entry in each replicate.
Mean performance
From the mean performance of the genotypes, it is evident that, in general, the mean values of the crosses were desirably higher than those of the parents (Table 1). Internodal length varied from 9.20 to 12.39 cm and from 9.23 to 13.33 cm among the parents and crosses, respectively. Number of branches per plant among the parents and crosses varied from 2.20 to 4.21 and 2.20 to 4.35, respectively. Stem girth varied from 18.55 to 24.33 and 17.92 to 26.89 mm among the parents and crosses, respectively. First fruit producing node among the parents and crosses varied from 4.53 to 6.99 and 5.10 to 8.04, respectively. Days to 50% flowering varied from 44.00 to 45.66 and 42.00 to 46.33 among the parents and crosses, respectively. Fruit length among the parents and crosses varied from 11.78 to 15.05 and 11.07 to 16.81 cm, respectively. Fruit diameter varied from 17.54 to 21.59 and 16.31 to 21.33 mm among the parents and crosses, respectively. Average fruit weight among the parents and crosses varied from 13.10 to 18.83 and 13.06 to 21.66 g, respectively. Number of ridges per fruit varied from 5.03 to 5.86 and 5.00 to 6.10 among the parents and crosses, respectively. Number of fruits per plant among the parents and crosses varied from 24.00 to 30.66 and 22.00 to 45.67, respectively. Yield per plant varied from 357.53 to 536.50 and 336.33 to 904.40 g among the parents and crosses, respectively.
Heterosis
The range of heterosis and the number of crosses displaying significantly positive and negative heterosis over the mid parent, better parent and standard control (Nunhems hybrid Shakti) are presented in Table 2. There was a large amount of variation in heterotic effects, as they varied differently for different characters. For plant height, heterosis over mid parent, better parent and standard control ranged from -20.92 to 14.39, -33.30 to 13.93 and -23.82 to 30.12, respectively. For this trait, 16 crosses over mid parent, nine crosses over better parent and 36 crosses over standard control manifested significantly positive heterosis. Heterosis over mid parent, better parent and standard control ranged from -19.94 to 29.81, -22.03 to 20.01 and -19.79 to 15.91, respectively, for internodal length. For internodal length, 15 crosses over mid parent, 21 crosses over better parent and 11 crosses over standard control manifested significantly negative heterosis.
For number of branches per plant, heterosis over mid parent, better parent and standard control ranged from -30.61 to 48.93, -40.11 to 40.54 and -35.64 to 26.97, respectively. For this trait, 22 crosses over mid parent, eight crosses over better parent and four crosses over standard control manifested significantly positive heterosis. For stem girth, heterosis over mid parent, better parent and standard control ranged from -18.17 to 27.90, -20.70 to 24.64 and -14.57 to 28.23, respectively. For this trait, 21 crosses over mid parent, seven crosses over better parent and 18 crosses over standard control manifested significantly positive heterosis.
For first fruit producing node, heterosis over mid parent, better parent and standard control ranged from -19.57 to 55.31, -25.44 to 54.32 and -9.03 to 43.32, respectively. For this trait, 10 crosses over mid parent, 15 crosses over better parent and six crosses over standard control manifested significant heterosis in the desirable (negative) direction. For days to 50% flowering, heterosis over mid parent, better parent and standard control ranged from -7.69 to 4.58, -8.03 to 3.79 and -8.70 to 0.72, respectively. For this trait, seven crosses over mid parent, nine crosses over better parent and nine crosses over standard control manifested significantly negative heterosis.
For fruit length, heterosis over mid parent, better parent and standard control ranged from -19.74 to 32.83, -25.40 to 24.30 and -25.24 to 13.53, respectively (Table 2). For this trait, 28 crosses over mid parent, 20 crosses over better parent and 44 crosses over standard control manifested positively significant heterosis. From the results of the heterosis studies, it is evident that none of the 45 F1 hybrids of okra showed consistency in direction and degree of heterosis over the three bases for all the characters studied. Some of them manifested positive heterosis while others exhibited negative heterosis (data not shown), mainly due to the varying extent of genetic diversity between the parents of different cross combinations for the component characters. Significant heterosis was observed for all the growth, earliness and yield attributes. It is inferred that the magnitude of economic heterosis was higher for most of the growth and earliness characters under study. In the present study, the estimates of relative heterosis, heterobeltiosis and standard heterosis were found to be highly variable in direction and magnitude among crosses for all the characters under study. Weerasekara et al. (2007) and Jindal et al. (2009) also reported such variation in heterosis for different characters. The manifestation of negative heterosis observed in some of the crosses for different traits may be due to the combination of unfavorable genes of the parents.
Of the 12 characters under study, plant height, number of branches per plant and internodal length largely determine the fruit-bearing surface and are thus considered growth attributes. Okra bears pods at almost all nodes on the main stem and primary branches. The greater the plant height and the number of branches on the main stem, the higher the number of fruits per plant, because more nodes are accommodated for a given internodal length. A shorter distance between nodes accommodates more nodes on the main stem, which ultimately leads to higher fruit number and higher fruit production. Hence, positive heterosis is desirable for plant height and number of branches, while negative heterosis is desirable for internodal length, in order to accommodate more nodes and obtain higher fruit yield in okra. An appreciable number of the crosses displayed positive standard heterosis for plant height (up to 30.12%) and number of branches per plant (up to 26.97%), and desirable negative standard heterosis for internodal length (down to -19.79%). Ahmed et al. (1999), Rewale et al. (2003), Singh et al. (2004), Weerasekara et al. (2007) and Jindal et al. (2009) also reported similar projections for number of branches in okra. For internodal length, similar projections were also made by Rewale et al. (2003), Singh et al. (2004) and Jindal et al. (2009).
Days to 50% flowering and first fruit producing node are indicators of earliness in okra. Early flowering not only gives early pickings and better returns but also widens the fruiting period of the plant. Fruiting at lower nodes is helpful in increasing the number of fruits per plant as well as in getting early yields. Negative heterosis is highly desirable for these attributes of earliness. In the present study, the cross IIHR-562 x IIHR-444 exhibited high negative heterosis over the standard control for days to 50% flowering (-8.70%); out of the 45 hybrids, 7, 9 and 9 hybrids showed significant heterosis in the desirable (negative) direction over the mid parent, better parent and standard parent, respectively. The cross IIHR-567 x IIHR-107 displayed high negative heterosis over the standard control for first fruit producing node (-9.03%); among the 45 hybrids developed, 10 hybrids over the mid parent, 25 hybrids over the better parent and 6 hybrids over the standard parent showed significantly negative heterosis. Therefore, it is important to exploit heterosis for earliness in okra. Weerasekara et al. (2007) and Jaiprakashnarayan et al. (2008) also noticed heterosis in the desirable direction for days to 50% flowering in okra. The negative estimates of heterobeltiosis and economic heterosis for earliness revealed the presence of genes for earliness in okra. Mandal and Das (1991), Tippeswamy et al. (2005) and Jindal et al. (2009) also noticed desirable heterosis for first fruit producing node in okra.
Total number of fruits per plant and fruit length, width, and weight are considered to be directly associated with total yield per plant, for which positive heterosis is desirable. Fruit length exhibited significant heterosis of high magnitude in both directions over the mid parent, better parent, and standard parent. The maximum positive and significant heterosis over the mid parent (32.83%), better parent (24.30%), and standard parent (13.53%) was observed in the cross IIHR-478 x IIHR-567. Among the 45 hybrids developed, 18 hybrids over the mid parent, 10 over the better parent, and 5 over the standard parent exhibited positive and significant heterosis. Fruit diameter likewise exhibited significant heterosis of high magnitude in both directions. The cross IIHR-478 x IIHR-444 exhibited the maximum positively significant heterosis over the mid parent (16.81%), better parent (16.01%), and standard parent (11.90%). Out of 45 hybrids, 12, 7, and 4 hybrids showed positive and significant heterosis over the mid parent, better parent, and standard parent, respectively. Average fruit weight also exhibited significant heterosis of high magnitude in both directions, and positively significant heterosis is preferred for this trait. The cross IIHR-604 x IIHR-182 showed the maximum positive significant heterosis over the mid parent (43.79%) and better parent (31.76%), whereas the cross IIHR-478 x IIHR-567 showed the maximum significant heterosis over the standard parent (42.25%). Among the 45 hybrids, 25 over the mid parent, 13 over the better parent, and 26 over the standard parent exhibited significant positive heterosis. Similar results were reported by Ahmed et al. (1999), Weerasekara et al. (2007), and Jaiprakashnarayan et al. (2008) in okra. The heterosis for number of fruits per plant was significant in both directions over the mid parent and better parent, and only in the positive direction over the standard parent. The maximum positive significant heterosis was observed in the cross IIHR-875 x IIHR-478: 61.18% over the mid parent, 48.91% over the better parent, and 98.55% over the standard parent. The majority of crosses exhibited positive and significant heterosis: out of 45 hybrids, 29 over the mid parent, 19 over the better parent, and 42 over the standard parent. The heterosis for total yield per plant was significant in both directions over the mid parent and better parent, with the maximum values in the positive direction over the standard parent. The maximum positive significant heterosis was observed in the cross IIHR-875 x IIHR-478: 112.89% over the mid parent, 83.78% over the better parent, and 168.55% over the standard parent. The majority of crosses exhibited positive and significant heterosis: out of the 45 hybrids, 29 over the mid parent, 20 over the better parent, and 44 over the standard parent. Similar results were reported by Singh et al. (2012), Solankey and Singh (2010), Sheela et al. (1998), Kumbhani et al. (1993), and Shukla and Gautam (1990). This indicates the predominance of non-additive gene action, and these crosses can be commercially exploited to capture its benefits. The higher magnitude of heterosis observed for fruit yield in the present investigation is attributed to the wide genetic variability existing in the germplasm.
High standard heterosis for fruit yield was also reported in earlier studies (Wankhade et al., 1997; Shukla and Gautam, 1990; Sheela et al., 1998; Singh et al., 1975). The heterosis observed for total yield per plant is attributed to the heterosis exhibited for the growth, earliness, and yield parameters, as there is significant genotypic association between yield and yield parameters such as fruit length, average fruit weight, and number of fruits per plant. Heterosis observed for these component characters contributed greatly to the higher magnitude of heterosis observed for total yield. However, for the exploitation of heterosis, information on general combining ability (gca) should be supplemented with specific combining ability (sca) and hybrid performance. Griffing (1956) suggested the possibility of working with yield components, which are likely to be more simply inherited than yield itself. Grafius (1959) suggested that there is no separate gene system for yield per se and that yield is an end product of the multiplicative interaction between the yield components, whose contribution operates through a component compensation mechanism (Adams, 1967). Since then, component breeding rather than direct selection on yield has commonly been practiced. It is evident that the high heterosis for yield was built up by the yield components; hybrid vigour of even small magnitude for individual components may result in significant hybrid vigour for yield per se. This was confirmed by the present investigation, where no hybrid showed vigour for yield alone. The high heterosis for fruit yield observed in these crosses could therefore be due to the combined heterosis of their component characters, as these hybrids were not only heterotic for fruit yield but were also superior for one or another yield component. Thus, the observed high heterosis for total yield seems to be due to an increase in the total number of fruits per plant rather than an increase in the size and weight of fruits, which is a desirable requirement in okra improvement. The results obtained in the present investigation were encouraging, and a tremendous increase in yield was obtained in most of the hybrids. Based on the overall performance of the hybrids and parental lines, some of the lines could be used as parents of okra hybrids with high to moderate yield potential.
The significantly positive heterobeltiosis for total yield per plant could be due to a preponderance of fixable gene effects, as also reported by Elangovan et al. (1981) and Singh et al. (1996).
In the present study, it is apparent that the high heterosis for yield is probably due to the dominance nature of the genes involved. For some yield attributes, certain crosses were non-heterotic, which may be ascribed to the cancellation of positive and negative effects exhibited by the parents involved in a cross combination; this can also happen when dominance is not unidirectional, as pointed out by Gardner and Eberhart (1966) and Mather and Jinks (1982). Heterosis is thought to result from the combined action and interaction of allelic and non-allelic factors and is usually closely and positively correlated with heterozygosity (Falconer and Mackay, 1986). According to Swaminathan et al. (1972), heterobeltiosis of more than 20% over the better parent could offset the cost of hybrid seed; thus, the crosses showing more than 20% heterobeltiosis may be exploited for hybrid okra production. Although the range of average heterosis and heterobeltiosis manifested by the crosses for different characters was comparatively wide, it is of little significance unless it shows sufficient gain over the standard control. In the present study, the moderate extent of relative heterosis and heterobeltiosis observed for yield and yield components could be attributed to the often-cross-pollinated nature of okra.
Total yield per plant in the crosses IIHR-875 x IIHR-478, IIHR-478 x IIHR-567, and IIHR-604 x IIHR-347 displayed significant standard heterosis of 168.55%, 159.53%, and 133.05%, respectively (Table 4), and their mean performance was significantly higher than that of the standard control (Nunhems hybrid Shakti). These crosses can be exploited for commercial cultivation after further testing during the late kharif season of Karnataka.
In conclusion, on average, okra displays heterosis for yield and the component traits studied. However, for each trait, important differences exist among hybrids in the individual values of heterosis, and yield components should be considered when selecting to increase yield. The overall maximum positive significant heterosis for total yield per plant was observed in the cross IIHR-875 x IIHR-478, with relative heterosis of 112.89%, heterobeltiosis of 83.78%, and standard heterosis of 168.55%. Negatively heterotic crosses, such as IIHR-562 x IIHR-444 for days to 50% flowering (-8.70%) and IIHR-567 x IIHR-107 for the first fruiting node (-9.03%), are important for exploiting heterosis for earliness in okra. The F1 hybrid IIHR-875 x IIHR-478, with its high yield potential, is a candidate for commercial cultivation after further evaluation in the late kharif season of Karnataka. | 2019-04-03T13:10:14.354Z | 2019-01-10T00:00:00.000 | {
"year": 2019,
"sha1": "0fb44ea9381455b3e1b0f18d3f41c3c0c564d415",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/8-1-2019/Prakash%20Kerure,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "900ff63c45bd6ddd1cf21d9016bea34db3b98b79",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
233380746 | pes2o/s2orc | v3-fos-license | Dual-band MIMO coplanar waveguide-fed-slot antenna for 5G communications
This paper presents two new designs of MIMO dual band coplanar waveguide (CPW)-fed-slot antennas operating in the 5G frequency band (28 and 38 GHz). The first antenna is an XX MIMO antenna and the second antenna is an XY MIMO antenna. Simulated results for the S-parameters are presented for the two antennas using HFSS. Measured results are also presented for the return loss and gain with both results showing good agreement. The current distribution, group delay, envelope correlation coefficients (ECC), and diversity gain, are also presented for both antennas. The two antennas are fabricated on a substrate having dielectric constant εr = 10.7 and substrate thickness 0.635 mm. The size of the antenna is 4.4 mm x 4.1 mm x 0.635 mm.
Introduction
The year 2020 is widely considered to be the year of commercial launching of 5G worldwide. The spectra of 5G can be split into sub 6 GHz and mm wave bands such as 28, 38, 60 and 70 GHz. The mm wave bands are more desirable for bandwidth availability since the sub 6 GHz band is mostly occupied [1]. For 5G communication, Ka band (at 28/38 GHz) can be allocated for frequency division duplexing (FDD), where a dual-band single antenna system is preferred as a transceiver [2,3,4,5,6].
As a key component of the LTE wireless system, the multiple-input-multiple-output (MIMO) antenna, which utilizes multiple transmitting and receiving antennas, has attracted significant attention in the past few years for its ability to increase transmission capacity and reduce multipath fading [7,8,9]. MIMO antennas also provide spatial diversity, polarization diversity and/or pattern diversity [4]. Integrating MIMO technology with multiband antennas can further increase the channel capacity as compared to conventional MIMO systems for narrowband applications [8].
Dual frequency antennas are used in a variety of applications including satellite communications, global positioning systems, synthetic aperture radar and personal communications systems [10,11,12].
The use of coplanar waveguide in multi band antennas as antenna feed is important so that the antenna will be compatible with monolithic microwave integrated circuits. Coplanar waveguide has several advantages such as easy integration with active and passive elements, low frequency dispersion, the extra design freedom through the ability to vary the characteristic impedance and phase constant by changing the slot and strip widths, and avoiding the excessively thin, and therefore fragile, substrates as in microstrip line [13,14,15,16,17,18,19,20].
The design of CPW dual-band MIMO antennas operating in the mm wave band for 5G has recently attracted much attention. Wei Hu et al [21] designed a dual-band eight-element MIMO array using a multi-slot decoupling technique for 5G terminals with multiple PCB boards. Parchin et al [22] proposed a new broadband MIMO antenna system for sub-6 GHz 5G cellular communications that employs 4 pairs of compact CPW. Barani et al [23] proposed low-profile wideband conjoined open-slot antennas fed by grounded coplanar waveguides for 4 × 4 5G MIMO operation. Abed and Jawad [24] proposed a compact-size MIMO fractal slot antenna for 3G, LTE (4G), WLAN, WiMAX, ISM and 5G communications using a CPW feed. This paper proposes a MIMO coplanar waveguide (CPW)-fed double folded slot dual-band antenna operating in the 5G (28 and 38 GHz) bands. This antenna can be used for future 5G cellular communication systems, achieving the high data rates made possible by millimeter-wave communications with wide system bandwidth. The basic folded slot antenna cell operating in the frequency range 5 GHz to 7 GHz was designed using the technique in [20], in which a dual-band antenna is generated from a double folded slot, the second slot serving to reduce the impedance for matching, as shown in Figure 1 (a). In this technique, we made use of the studies in [12] and [13], which indicated that the impedance of a CPW folded slot antenna can be lowered so that it approaches that of the feed line for matching by increasing the width of the slot arm that is farther from the feed; no other dimension had to be changed. The proposed MIMO (XX) and MIMO (XY) antennas are generated by mirroring the basic antenna cell in two orthogonal directions, as shown in Figure 1 (b) and (c). The same method was used in [9] to generate the Pacman-shaped UWB MIMO antenna. In the following, we design the MIMO antennas to operate at 28 GHz and 38 GHz. Then the return loss and gain pattern are obtained using simulation and measurement. We also find the envelope correlation coefficient (ECC) [9,25,26,27], which determines how well the communication channels are isolated; in other words, it describes how much the radiation patterns of two adjacent antennas affect each other. We also find the group delay and directive gain.
The advantage of the proposed antenna is that it operates in the 5G high frequency range (28-38 GHz), with what this entails in terms of future applications in the wireless communications industry. It is an easy-to-design dual-band MIMO antenna which employs CPW as the feed, with the advantages of CPW in terms of its compatibility with monolithic microwave integrated circuits (MMIC). The size of the antenna is also small compared to other antennas. The antenna showed very good performance in terms of ECC, diversity gain (DG), group delay, return loss and radiation pattern.
Antenna measurement and analysis
The basic antenna to be considered is a dual-band antenna that operates at 28 and 38 GHz. It consists of a CPW line of 50 Ω impedance, with a strip width of 0.255 mm and a slot width of 0.129 mm, feeding a double folded slot antenna, as shown in Figure 1 (a); the second folded slot is used to reduce the impedance of the slot antenna for matching purposes (more details can be found in [20]). The proposed antenna is simple to design. The dimensions of the 50 Ω feed CPW line (strip = 0.255 mm, slot = 0.129 mm) can be obtained using the quasi-static formulas available in the software IE3D of Zeland Inc. [28]. The feed CPW is connected to two folded slot antennas as shown in Figure 1a. The total (outer) slot antenna loop resonates when its length is about λg (= 4.514 mm), where λg is the guided wavelength of the CPW at the lower operating frequency of 28 GHz. The smaller (inner) slot antenna resonates when its length is about λg (= 3.326 mm), where λg is the guided wavelength of the CPW at the higher operating frequency of 38 GHz.
This yields approximately h1 = 1.924 mm and h2 = 1.387 mm (see Figure 1a), which are chosen as the starting values to be entered in HFSS. These values ignore the coupling between the loops and the effect of changing the width of the outer slot; therefore, h1 and h2 were subsequently tuned in HFSS. The basic antenna of Figure 1 (a) is flipped horizontally and vertically to generate two different antenna types, a MIMO XX and a MIMO XY antenna, as shown in Figure 1 (b) and (c), respectively. Simulations of the antennas of Figure 1 were carried out using HFSS.
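The starting loop lengths follow directly from the guided wavelength. A minimal Python sketch, assuming the common quasi-static CPW approximation εeff ≈ (εr + 1)/2 (the paper's own εeff is not stated), reproduces values close to those quoted above:

```python
import math

C_MM_PER_S = 2.99792458e11  # speed of light in mm/s

def guided_wavelength_mm(f_hz, eps_r):
    """CPW guided wavelength using the quasi-static approximation
    eps_eff ~ (eps_r + 1) / 2 for a CPW on a thick substrate."""
    eps_eff = (eps_r + 1.0) / 2.0
    return C_MM_PER_S / (f_hz * math.sqrt(eps_eff))

# Starting loop lengths: one guided wavelength per slot loop.
print(guided_wavelength_mm(28e9, 10.7))  # ~4.43 mm (paper: 4.514 mm)
print(guided_wavelength_mm(38e9, 10.7))  # ~3.26 mm (paper: 3.326 mm)
```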
As for measurement, our network analyzer operates up to 20 GHz. For this reason, we had to scale the dimensions of Figure 1, including the substrate thickness, by a factor of 3; this is shown in Figure 2 (a, b, c). The measured substrate is 1.905 mm thick with εr = 10.7.
The measured and simulated return loss of the single antenna is given in Figure 3; the two sets of results compare very well.
The scattering-parameter measurements for the two MIMO antennas are shown in Figures 4 and 5. Figure 4 shows the measured and simulated return loss for the two MIMO designs. The current distribution of the MIMO antennas, calculated using HFSS, is shown in Figures 6 and 7 for the MIMO XX and MIMO XY configurations, respectively. It is clear that the larger slot loop is responsible for radiation at the lower frequency of 28 GHz and the inner slot loop is responsible for radiation at the upper frequency of 38 GHz.
The group delay was obtained using HFSS. It is shown in Figure 8 (a,b) for the MIMO XX and MIMO XY, respectively. The group delay is small and increases at the two operating frequencies of 28 GHz and 38 GHz.
The antenna radiation patterns were measured using the Desktop Antenna Measurement System (DAMS), a versatile multiple-axis system used for antenna radiation pattern measurements. This system features 360 degrees of azimuth with up to +/-90 degrees of tilt. Rotary tables with stepper motors, linear actuators and vector network analyzers are incorporated in this system to facilitate the measurement of the radiation characteristics of the antenna under test. The software used for automated antenna measurement is Antenna Measurement Studio by Diamond Engineering, which provides precision antenna measurements with data-processing capability. The measurement setup includes a stationary calibrated horn antenna (reference/transmitter antenna), the DAMS system and a vector network analyzer. The simulated and measured gain patterns versus frequency were plotted in the elevation (y-z) plane, as shown in Figure 9 (a, b, c), for the single antenna and the two MIMO antennas. Good agreement is obtained.
The envelope correlation coefficient (ECC) can be obtained from the 3D radiation pattern, but this involves numerical integrations and 3D radiation pattern measurements [25]. For a single-mode lossless two-element MIMO antenna, a simplified expression for the ECC in terms of the scattering parameters is [26,27]: ECC = |S11* S12 + S21* S22|^2 / [(1 - |S11|^2 - |S21|^2)(1 - |S12|^2 - |S22|^2)], where the asterisk denotes the complex conjugate. A value of 0.5 or less for the ECC is adequate for low correlation between the antenna elements [25]. Figure 10 (a,b) shows the ECC versus frequency for the two MIMO antennas. Diversity gain (DG) is defined as the difference between the combined signal from all the antennas of the diversity system and the signal from a single antenna, and evaluates the diversity performance of a MIMO antenna [29]; in simple words, the higher the DG, the better the improvement in diversity performance. The DG is obtained using the formula DG = 10 sqrt(1 - |ρe|^2) [29], where ρe is the cross-correlation between the far fields F1(θ,φ) and F2(θ,φ) obtained when antenna elements 1 and 2 are excited, respectively. The DG is shown in Figure 11 for the MIMO XX and MIMO XY antennas. The simulated DG indicates a good improvement in diversity due to the MIMO structures.
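As a numerical sanity check, both expressions are easy to evaluate. A minimal Python sketch follows; the single-frequency S-parameter values in it are hypothetical, not the measured data:

```python
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    """ECC of a lossless two-port MIMO antenna from its complex
    scattering parameters (Blanch-type S-parameter expression)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s12) ** 2 - abs(s22) ** 2)
    return num / den

def diversity_gain(ecc):
    """Apparent diversity gain of a two-element system, with ecc = |rho_e|^2."""
    return 10.0 * np.sqrt(1.0 - ecc)

# Hypothetical S-parameters at a single frequency point:
s11 = 0.10 * np.exp(1j * 0.3)
s22 = 0.12 * np.exp(1j * 0.7)
s12 = s21 = 0.05 * np.exp(-1j * 1.2)
ecc = ecc_from_s(s11, s12, s21, s22)
print(ecc, diversity_gain(ecc))  # ecc << 0.5, DG close to 10
```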
Mathematical modelling of the single antenna
There are two ways of looking at the proposed antenna structure of Figure 1a. The first considers two folded slot antennas; this was explained above and used for the initial prediction of the antenna lengths. The second considers that the antenna structure is based on two dipoles, which leads to dual-band operation. A regular dipole is directly driven by the CPW feed, while the other dipole is a folded dipole which is parasitically driven, as shown in Figure 12. The regular dipole is designed for a frequency of 38 GHz and the folded dipole for a frequency of 28 GHz.
Regular dipole
The mathematical modelling of the regular dipole is based on the length of its arms. The total length L of the regular dipole should be approximately half the guided wavelength of its fundamental mode [30]. The dipole width can be adjusted to tune the input impedance of the antenna to the CPW impedance (see Figure 13).
The total length of the arms of the regular dipole is denoted as L. The length L can be calculated using Eq. (4) as [31].
The main dipole is designed to resonate at a frequency f = 38 GHz; the dipole length is calculated as L = 1.2 mm, and the dipole width d = 0.351 mm is optimized using HFSS.
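A plausible form of Eq. (4), consistent with the half-guided-wavelength criterion above and reproducing L = 1.2 mm at 38 GHz when the effective permittivity is approximated by the substrate permittivity εr = 10.7, is:

```latex
L \approx \frac{\lambda_g}{2}
  = \frac{c}{2 f \sqrt{\varepsilon_r}}
  = \frac{2.998\times 10^{11}\ \mathrm{mm/s}}
         {2\,(38\times 10^{9}\ \mathrm{Hz})\,\sqrt{10.7}}
  \approx 1.2\ \mathrm{mm}
```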
Folded dipole
The input impedance of the folded dipole (shown in Figure 14) is given by Eq. (5), where Z_A is the input impedance of a strip dipole antenna of length L and width W1 in the antenna mode, and Z_t = jZ_c tan(k L'/2) is the input impedance in the transmission-line mode, with Z_c the characteristic impedance of the coplanar strip in a homogeneous medium of relative permittivity εr as expressed in Eq. (6), k the wave number, and α the current division factor [32,33].
The complete elliptic integral of the first kind K(k) is approximated as in [34], where k and e are calculated from the strip geometry. The current division factor for a non-uniform dipole radius, transformed into an equivalent radius for a very thin strip dipole, can be expressed as Eq. (8).
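A plausible form of Eq. (5), following the standard folded-dipole analysis (e.g., Balanis) and using the quantities Z_A, Z_t, and α defined above, is:

```latex
Z_{\mathrm{in}} \;=\;
  \frac{2\,Z_t\,(1+\alpha)^{2}\,Z_A}{2\,Z_t + (1+\alpha)^{2}\,Z_A},
\qquad
Z_t \;=\; j\,Z_c \tan\!\left(\frac{k L'}{2}\right)
```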
Conclusions
Two designs of coplanar waveguide fed slot antenna were proposed. Each design consists of a MIMO dual band antenna operating in the 5G mm frequency range (28 and 38 GHz). The simulated and measured results for the return loss and gain pattern are in good agreement. The simulated ECC indicates low correlation between the antenna elements for both the XX and XY antennas. The group delay is small and increases at the two operating frequencies. The DG has very good performance (close to unity) throughout the frequency band.
Declarations
Author contribution statement Amjad Omar: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Mousa Hussein: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Indu J. Rajmohan: Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data will be made available on request. | 2021-04-25T05:27:50.080Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "206f6e21d03661190c3432d7f5f3f55d82a95e34",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844021008823/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "206f6e21d03661190c3432d7f5f3f55d82a95e34",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213395402 | pes2o/s2orc | v3-fos-license | Mass Spectral Fragmentation of Pelargonium graveolens Essential Oil Using GC–MS Semi-Empirical Calculations and Biological Potential
The volatile constituents of the essential oil of local Pelargonium graveolens growing in Egypt were investigated by gas chromatography-mass spectrometry (GC-MS); the main constituents were citronellol (27.67%), cis-Menthone (10.23%), linalool (10.05%), eudesmol (9.40%), geraniol formate (6.87%), and rose oxide (5.77%), which represent the major components in the obtained GC total ion chromatogram. The structural determination of the main constituents based on their electron ionization mass spectra was investigated. The mass spectra of these compounds are essentially identical in the mass values of the fragment-ion peaks, with only minor differences in relative intensities. In the spectra of all studied compounds, the characteristic ions [M-H2O] and [M-CH3] were observed, together with fragment ions at m/z 69 and 83. Different quantum parameters were obtained using the Modified Neglect of Diatomic Overlap (MNDO) semi-empirical method: total energy, binding energy, heat of formation, ionization energy, the energy of the highest occupied molecular orbital (HOMO), the energy of the lowest unoccupied molecular orbital (LUMO), energy gap ∆, and dipole moment. The antibacterial and antifungal activities of P. graveolens essential oil and the identified compounds were tested against a wide collection of organisms. The individually identified compounds in the essential oil - citronellol, cis-Menthone, and linalool (except eudesmol) - showed activity comparable to antibiotics. The most active isolated compound was citronellol, and the lowest MIC was found against E. coli. The essential oil showed high antifungal effects, and this activity was attributed to cis-Menthone, eudesmol, and citronellol (excluding linalool). cis-Menthone was the most active compound against the selected fungi, followed by eudesmol. The study recommends local P. graveolens and the identified active compounds for further applications in the pharmaceutical industries.
Introduction
Essential oils are natural, complex, volatile mixtures produced by plants as secondary metabolites that may control pests, bacteria, fungi, and viruses [1][2][3]. The antifungal, antioxidant, antitumor, antiviral, and antibacterial activities of these compounds have been widely studied [4][5][6][7]. Pelargonium graveolens (Geraniaceae) is a species in the genus Pelargonium, which contains ~250 species originating from South Africa and is often called a geranium. The essential oil of P. graveolens is one of the most expensive essential oils used in perfumery, flavoring, and cosmetics [8].
Mass spectrometry (MS) techniques have played an important role in the development of the natural products industry over the past five decades. They provide a starting point for the identification or structure determination of most natural products, as well as their molecular weights [9]. MS techniques can provide a great deal of structural information from very little of the studied material, based on electron ionization (EI) mass spectra [10]. The fragmentation of an ionized molecule depends mainly on its internal energy [11]. The semi-empirical MNDO method is a quantum mechanical approach for determining the thermochemical properties of molecules, used in chemistry and physics to determine the electronic structure of molecules; it represents a highly successful way to calculate the structure of matter and to complement experimental investigations [12][13][14]. The chemical composition of the essential oil of different Pelargonium species, such as P. odoratissimum and P. graveolens, has been reported using gas chromatograph-mass spectrometer (GC-MS) analysis [15,16]. However, to the best of our knowledge, neither complete mass spectra nor structure elucidation of the essential oil constituents of P. graveolens using EI mass spectra have been reported. Further, the thermodynamic properties of these essential oil constituents obtained from MNDO calculations have not been reported before [16][17][18].
Bacterial and fungal infections may cause severe diseases in humans and animals [1,19,20]. Secondary metabolites, including essential oils, are natural products commonly used to control fungal and bacterial diseases [1,2,7,21,22]. P. graveolens essential oils have been examined in a few studies; however, a full picture of the activity and the contribution of each major essential oil constituent as an antimicrobial agent had not been investigated before [15][16][17][18]. Furthermore, the large number of species belonging to Pelargonium and ecotype variation have influenced the results of these studies.
The aim of the present study is to use GC-MS to identify the chemical composition of P. graveolens essential oil and to elucidate the molecular structures of the main essential oil constituents based on their electron ionization mass spectra. In addition, we applied theoretical MNDO calculations, including geometrical structure optimization and thermodynamic parameters. The obtained data aid in understanding the bioactivity of the studied components. Further, this study investigates the antimicrobial activities of the individual essential oil constituents against different bacteria and fungi.
Mass Spectrometric Observations about the Fragmentation of the Studied Compounds Under EI Conditions
The fragmentation of the main components of P. graveolens under electron ionization at 70 eV, recorded with a single quadrupole mass spectrometer, is reported based on their mass spectra, as shown in Scheme 1. The compounds investigated in this study fall into two groups. The first group comprises citronellol, linalool, and eudesmol, which carry a hydroxy (OH) group in their structures. The second group comprises menthone and geraniol formate, whose oxygen belongs to a carbonyl (C=O) group, while the last studied compound (rose oxide) has a hetero oxygen atom in a six-membered ring.
Fragmentation Pattern of Citronellol Compound
From the mass spectra of all main components of P. graveolens essential oil, it can be concluded that the molecular ions of these compounds have low abundance, appearing as small peaks, which indicates that the molecular ions are unstable at 70 eV. The first fragmentation pathway of the citronellol molecular ion is the formation of the fragment ion at m/z 138 (Scheme 1). This can be explained by the formation of the [M-H2O]+• ion through loss of H2O from the molecular ion. The [M-H2O]+• ion can then fragment in three ways: first, by loss of a CH3• radical to produce the fragment ion [M-H2O-CH3]+ at m/z 123; second, by loss of a C3H7• radical to produce [M-H2O-C3H7]+ at m/z 95; and third, by loss of a C4H9• radical to produce [M-H2O-C4H9]+ at m/z 81.
The second fragmentation pathway of the citronellol molecular ion is a simple cleavage that directly produces the fragment ion C5H9+ at m/z 69, which represents the base peak in the mass spectrum, as shown in Figure 2a.
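The neutral-loss arithmetic behind these assignments, and those for the compounds discussed below, is straightforward to verify. A minimal Python sketch using nominal (integer) masses:

```python
# Nominal (integer) masses of the neutral losses discussed here, in u:
LOSS = {"H2O": 18, "CH3": 15, "C3H7": 43, "C4H9": 57, "CO": 28, "C3H6": 42}

def fragment_mz(parent_mz, *losses):
    """m/z after successive neutral losses from a singly charged ion."""
    for name in losses:
        parent_mz -= LOSS[name]
    return parent_mz

M_CITRONELLOL = 156  # C10H20O, nominal mass
print(fragment_mz(M_CITRONELLOL, "H2O"))          # 138
print(fragment_mz(M_CITRONELLOL, "H2O", "CH3"))   # 123
print(fragment_mz(M_CITRONELLOL, "H2O", "C3H7"))  # 95
print(fragment_mz(M_CITRONELLOL, "H2O", "C4H9"))  # 81
```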
Fragmentation Pattern of Linalool Compound
The mass spectrum of linalool shows a very low-abundance molecular ion peak, which can be explained by the instability conferred by the hydroxyl group at the C3 atom. The first fragmentation pathway of the linalool molecular ion is the formation of the fragment ion at m/z 136, certainly due to the formation of the [M-H2O]+• ion by loss of H2O from the molecular ion. The [M-H2O]+• ion can fragment further by loss of a CH3• radical to produce the fragment ion [M-H2O-CH3]+ at m/z 121. This fragment ion can itself fragment in two ways: first, by loss of a C3H5• radical to produce the [M-H2O-CH3-C3H5]+ ion at m/z 80; second, by loss of C2H4 to produce the [M-H2O-CH3-C2H4]+ ion at m/z 93. The latter may also be formed directly from the [M-H2O]+• ion by loss of a C3H7• radical, which is reflected in the highest peak of the spectrum, as shown in Figure 2b. The second fragmentation pathway of the linalool molecular ion is a simple cleavage that directly produces the fragment ion C5H9+ at m/z 69, as shown in Scheme 1.
Fragmentation Pattern of Menthone Compound
The menthone mass spectrum is shown in Figure 2c. It is clear that C3H6 is eliminated from the menthone molecular ion via a McLafferty rearrangement to form the C7H12O+ ion at m/z 112, which represents the base peak in the spectrum. This ion fragments by loss of a CH3• radical to form the C6H9O+ ion at m/z 97; the latter fragments by loss of CO to form C5H9+ at m/z 69.
The second fragmentation process of the menthone molecular ion is the direct loss of a CH3• radical to form the C9H15O+ ion at m/z 139, which can fragment by loss of CO to produce the C8H15+ ion at m/z 111. On the other hand, the molecular ion can fragment directly to form the two fragment ions C5H8O+ and C5H9+ at m/z 84 and 69, respectively, as shown in Scheme 1.
Fragmentation Pattern of Eudesmol Compound
The most characteristic fragmentation pathway of the eudesmol molecular ion is the formation of the fragment ion C15H24+• at m/z 204 in the mass spectrum (Figure 2d), which is certainly due to the [M-H2O]+• ion, as shown in Scheme 1. This fragment ion undergoes fragmentation by two pathways: the first is the formation of the fragment ion C14H21+ at m/z 189 by loss of a CH3• radical; the second is the formation of the fragment ion C12H17+ at m/z 161 by loss of a C3H7• radical, which represents the base peak in the mass spectrum. The fragment ion C12H17+ at m/z 161 fragments further by loss of a C2H4 molecule to form the fragment ion C10H13+ at m/z 133. This fragment can fragment via three pathways: first, by loss of a C2H4 molecule to form the fragment ion C8H9+ at m/z 105; second, by loss of C3H6 to form the fragment ion C7H7+ at m/z 91; and third, by loss of C4H4 to form the fragment ion C6H9+ at m/z 81, as shown in Scheme 1.
Fragmentation Pattern of Geraniol Formate Compound
The electron ionization mass spectrum of geraniol formate shows a low relative abundance of the molecular ion peak, as shown in Figure 2e. The molecular ion of geraniol formate fragments by simple cleavage to form the fragment ion C5H9+ at m/z 69 directly, which represents the base peak in the spectrum. In addition, the molecular ion can fragment with a McLafferty rearrangement, by loss of H2CO2, to form the fragment ion C10H16+ (a monoterpene hydrocarbon) at m/z 136, which subsequently forms the fragment ion C7H9+ at m/z 93 by loss of a C3H7• radical, as shown in Scheme 1.
Fragmentation Pattern of Rose Oxide Compound
The mass spectrum of rose oxide shows the molecular ion C10H18O+• at m/z 154 with small relative intensity, as shown in Figure 2f. The first fragmentation process of the rose oxide molecular ion is the formation of the fragment ion C9H15O+ at m/z 139 by loss of a CH3• radical through simple cleavage, which represents the base peak in the mass spectrum. The second fragmentation process forms the fragment ions C5H9+ at m/z 69 and C3H7O+ at m/z 83, which arise through hydrogen-atom rearrangement processes, as shown in Scheme 1.
Computation Method
The geometries of the major essential oil constituents of P. graveolens were optimized using molecular mechanics and semi-empirical calculations implemented in the molecular modelling program HyperChem 7.0. Semi-empirical calculations were carried out using the MNDO routine with the Polak-Ribiere conjugate gradient algorithm. After the structures were optimized, the molecular properties were obtained, including heat of formation, ionization energy, highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies, dipole moment, atomic charges, total energy, binding energy, and nuclear energy of the studied molecules. The calculated thermochemical properties of the studied components are shown in Table 1. Note that all studied components (except rose oxide) have large negative heats of formation, indicating that they are thermodynamically stable molecules. The positive heat of formation of rose oxide could be attributed to the presence of a hetero oxygen atom in the six-membered ring. The preferred structures of these components were determined from the lowest energy that stabilizes each structure; MNDO yielded non-planar structures for all components (except rose oxide) as the more stable geometries, as shown in Figure 3. The total energies of the studied components, calculated by this quantum chemical method, range from -42657 kcal.mol−1 for rose oxide to -60183 kcal.mol−1 for eudesmol. Electron affinity (EA) is a measure of the ability of a molecule to accept an electron into the LUMO. From Table 1, note that citronellol, linalool, and geraniol formate have higher electron affinities than the other studied components. This could be attributed to their long-chain structures and the presence of single and double oxygen bonds, whereas cis-Menthone, eudesmol, and rose oxide have cyclic structures. Molecular orbitals (MOs), including the HOMO, the LUMO, and the energy gap (Egap), are important parameters in quantum chemistry. They determine the way a molecule interacts with other species and are therefore named Frontier Molecular Orbitals (FMOs). The HOMO can be considered the outermost orbital containing electrons, acting as an electron donor [25], while the LUMO is the innermost orbital with free places for accepting electrons. Molecules with a small frontier orbital gap are more polarizable and are associated with high chemical activity as well as low kinetic stability [26]. To gain insight into the energetic behaviour of the studied components, we performed MNDO calculations in the ground state; the results are shown in Table 2. The 2D plots of the HOMO and LUMO frontier orbitals are depicted in Figure 4, with the positive phase presented in red and the negative in green. The HOMO-LUMO energy gap characterizes the eventual charge transfer interaction within the molecule, and the larger the energy gap, the more stable the molecule, as shown in Table 2; cis-Menthone is the most stable molecule, with an energy gap of 9.7374 eV.
The ionization energy (IE), obtained from the highest occupied molecular orbital, is smaller for all studied molecules except cis-Menthone, which has the highest IE = 9.5 eV. This could be explained by the presence of a carbonyl double bond coupled to the six-membered ring as a ketone group. In addition, rose oxide has the smallest IE = 7.1 eV due to the presence of oxygen as a heteroatom in the six-membered ring, which carries a positive charge.
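These frontier-orbital quantities are related through Koopmans' theorem (IE ≈ -E_HOMO, EA ≈ -E_LUMO). A minimal Python sketch; the E_LUMO value below is inferred from the reported IE and energy gap for cis-Menthone, not read directly from Table 2:

```python
def koopmans(e_homo_ev, e_lumo_ev):
    """Koopmans'-theorem estimates from frontier-orbital energies (eV):
    IE ~ -E_HOMO, EA ~ -E_LUMO, gap = E_LUMO - E_HOMO."""
    return {
        "IE_eV": -e_homo_ev,
        "EA_eV": -e_lumo_ev,
        "gap_eV": e_lumo_ev - e_homo_ev,
    }

# cis-Menthone: E_HOMO = -9.5 eV from the reported IE; E_LUMO then
# follows from the reported gap of 9.7374 eV.
print(koopmans(-9.5, 0.2374))
# {'IE_eV': 9.5, 'EA_eV': -0.2374, 'gap_eV': 9.7374}
```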
The molecular dipole moment is an important electronic property that provides a generalized measure of bond properties and charge densities [27]. The dipole moments of molecules 1, 2, and 4 (1.4, 1.5, and 1.3 Debye) have nearly the same values, which can be explained by the symmetric charge distribution over the atoms of these molecules. A molecule with an electron-acceptor group has a higher dipole moment because of the improved charge distribution and the increased charge separation; this is the case for geraniol formate and cis-Menthone (Table 1), which have the higher dipole moments (3.9 and 2.4 Debye).
These high values can be attributed to the presence of single and double oxygen bonds in these structures as polar groups. Rose oxide has the lowest dipole moment (0.6 Debye) because its electronic configuration (C-O-C) is nearly homogeneous.
Possible Correlation Between the Activity and the Semi-Empirical Calculations
From the calculated heats of formation ∆HF(M) of the studied components (Table 1), it is obvious that all components (except rose oxide) have negative values, confirming that these components are relatively stable. The dipole moment of a molecule gives information on its polarity, and a large dipole moment may increase reactivity; this means that geraniol formate and cis-Menthone have a tendency to interact with other molecules, as shown in Table 1. The energies of the HOMO and LUMO are not activity descriptors in themselves but can be connected to the activity of the molecules under study. Higher HOMO energies indicate better electron-donating properties of a molecule, and a lower HOMO energy points to lower activity (e.g., antioxidant or antimicrobial) [28,29]. The values of the HOMO, LUMO, and energy gap are listed in Table 2. It is clear that cis-Menthone, citronellol, and linalool have better electron-donating properties and higher activities than the other studied compounds, which was confirmed by the antimicrobial assays and had not been studied before in P. graveolens.
Antibacterial Activities
P. graveolens essential oil showed high antibacterial effects against all tested bacteria (Table 3). The individually identified compounds in the essential oil - citronellol, cis-Menthone, and linalool (except eudesmol) - showed activity comparable to antibiotics. Eudesmol was almost inactive at low dosages. The most active isolated compound was citronellol, and the lowest MIC was found against E. coli (0.007 ± 0.0003 mg mL−1). E. coli, L. monocytogenes, and B. cereus were the most sensitive bacteria. In P. graveolens L'Her from Iran, the essential oil showed no activity against L. monocytogenes, which could be explained by the fact that the major constituents were different from those in our study (β-citronellol 36.4%) [18]. In Pelargonium roseum from South Africa, the major constituent identified was citronellol (39.97-43.67%), and the oil showed antibacterial effects comparable to antibiotics against P. aeruginosa and S. aureus [17]. A previous investigation of citronellol reported bioactivity against E. coli and S. aureus comparable to that reported here [30]. In Mentha longifolia L. essential oils (menthone 20.7-28.8%), an antimicrobial study showed MICs ranging from 0.195 to 3.12 × 10^3 μg mL−1 against most bacteria [31]. In another study, menthol (structurally related to menthone) showed moderate activity against E. coli and other bacteria [32]. In agreement with our results, no study has revealed obvious antibacterial effects of eudesmol against most bacteria. The bioactivity of the essential oil of P. graveolens against bacteria is therefore attributed mainly to the active compounds citronellol, cis-Menthone, and linalool.
Antifungal Activities
P. graveolens essential oil showed high antifungal effects against the selected fungi, as shown in Table 4. The MIC and MFC values were relatively low in the experiment. However, linalool showed the lowest activity against fungi compared with the other compounds. cis-Menthone was the most active compound against the selected fungi (MIC: from 0.07 ± 0.01 to 0.17 ± 0.01 mg mL−1). Eudesmol also showed relatively high antifungal activity, with MICs ranging from 0.21 ± 0.01 to 0.47 ± 0.02 mg mL−1. The antifungal activity of the essential oil of P. graveolens is attributed mainly to cis-Menthone, eudesmol, and citronellol. To our knowledge, this is the first study reaching this conclusion. In a previous investigation, linalool was active against oral C. albicans at MIC = 1.000 mg mL−1, which is comparable to this study [33]. Analyses of the antifungal activities of heartwood essential oils (mainly composed of eudesmol derivatives) from several trees of the Cupressaceae revealed antifungal effects against C. albicans comparable to reference drugs [19]. Using the inhibition-zone technique, P. roseum essential oils were active against most bacteria, and complete inhibition was observed for C. albicans [17].
Materials and Methods
Pelargonium graveolens leaf samples were collected from plants growing on the Zagazig University campus in July 2018 (24-30 °C). The plants were identified at the Botany Department, Faculty of Agriculture, Zagazig University, Egypt, and a voucher specimen was created.
Essential Oil Extraction
Dried leaves (1.5 kg) of P. graveolens were subjected to 3 hours of hydro-distillation in a Clevenger-type apparatus. The oil was separated and then dried over anhydrous sodium sulfate (Na2SO4). The oil was stored in the dark at 6 °C until further analysis.
Gas Chromatography-Mass-Spectrometry Analysis
A Focus GC-DSQ mass spectrometer (Thermo Scientific, Waltham, MA, USA) was used at the Egyptian Atomic Energy Authority, Nuclear Research Centre, Experimental Nuclear Physics Department, Atomic and Molecular Physics Unit. The instrument was fitted with an A3000 autosampler and a TR-5MS capillary column (0.25 μm film thickness, 30 m length, and 0.25 mm i.d.). The oven temperature was increased from 55 °C to 200 °C (5 °C min−1), with a final temperature of 300 °C. The carrier gas was helium (1 mL min−1). The components were identified tentatively by comparing retention times and mass spectra with those of the WILEY (9th Edition, version 1.02) and NIST 05 (version 2.0d) databases.
Antibacterial Activity
The antibacterial activities against Listeria monocytogenes (clinical isolate), Bacillus cereus (ATCC 14579), Micrococcus flavus (ATCC 10240), Escherichia coli (ATCC 35210), Staphylococcus aureus (ATCC 6538), and Pseudomonas aeruginosa (ATCC 27853) were assayed using the micro-dilution method [20,34]. A bacterial inoculum (1.0 × 10^4 CFU per well) was mixed in microtiter plates with serial dilutions of essential oil and incubated at 37 °C for one day in a rotary shaker. The minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) were determined by subculturing the serial dilutions of essential oil. The optical density was determined at a wavelength of 655 nm. Streptomycin was used as the antibiotic control at 0.01-10 mg mL−1. This array of bacteria causes several human diseases [35][36][37].
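As an illustration of the micro-dilution arithmetic, a two-fold serial dilution series can be generated as below; the starting concentration and the number of wells are hypothetical and are not taken from the assay protocol:

```python
def twofold_dilutions(c0_mg_ml, n_wells):
    """Concentrations (mg/mL) across an n-well two-fold dilution series."""
    return [c0_mg_ml / 2 ** i for i in range(n_wells)]

series = twofold_dilutions(10.0, 11)
print(series)  # 10.0, 5.0, 2.5, ..., ~0.0098 mg/mL
# The MIC is read as the lowest concentration with no visible growth,
# and the MBC from the subcultures showing no colony formation.
```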
Conclusions
This is the first study applying MNDO to the main constituents of P. graveolens essential oil and investigating the biological activities of these compounds. The experimental GC-MS data revealed that the major constituents of P. graveolens were citronellol (27.67%), cis-Menthone (10.23%), linalool (10.05%), eudesmol (9.40%), geraniol formate (6.87%), and rose oxide (5.77%). From the electron ionization mass spectra of these compounds, the fragmentation processes observed were the loss of H2O and of a CH3 radical; further fragmentation pathways were reported and discussed. These experimental data, together with the MNDO calculations, provide in-depth information about the chemical behavior of the studied molecules, which is important for many chemical and medical applications. P. graveolens essential oil showed high antibacterial effects against all tested bacteria. The individually identified compounds in the essential oil - citronellol, cis-Menthone, and linalool (except eudesmol) - showed activity comparable to antibiotics. The most active isolated compound was citronellol, and the lowest MIC was found against E. coli (0.007 ± 0.0003 mg mL−1). The essential oil also showed high antifungal effects. The antifungal activity of the essential oil of P. graveolens is attributed mainly to cis-Menthone, eudesmol, and citronellol (excluding linalool). To our knowledge, this is the first study reaching this conclusion. cis-Menthone was the most active compound against the selected fungi (MIC: from 0.07 ± 0.01 to 0.17 ± 0.01 mg mL−1). Eudesmol also showed relatively high antifungal activity, with MICs ranging from 0.21 ± 0.01 to 0.47 ± 0.02 mg mL−1. The study recommends local P. graveolens and the identified active compounds for further investigation to explore possible uses in the pharmaceutical industries. | 2020-01-23T13:57:37.379Z | 2020-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "ddd8f60bdebc10a53b9ff4e961a9532547922183",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/8/2/128/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ddd8f60bdebc10a53b9ff4e961a9532547922183",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
250337185 | pes2o/s2orc | v3-fos-license | SARS-CoV-2 Infection in Children and Adolescents Living With HIV in Madrid
Multicenter study designed to describe epidemiologic and clinical characteristics of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) positive cases registered among children and adolescents living with HIV (CALWH). SARS-CoV-2 infection was confirmed in 13.3% of CALWH, with all patients presenting mild symptoms, and the outcome was good in all patients. None of the HIV- and antiretroviral treatment-related variables studied were associated with greater infection risk or could be considered protective.
Since March 2020, when the new coronavirus SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) was declared a global pandemic, the virus has infected more than 250 million people all over the world, affecting vulnerable populations including children and adolescents living with HIV (CALWH). Coronavirus disease 2019 (COVID-19) in the general population is fairly well described, but the interaction of HIV infection with the severity and outcomes of COVID-19 remains little understood, 1,2 and data are sometimes contradictory.
Some evidence suggests that patients with advanced HIV disease (low CD4+ lymphocyte cell count), high viral load or those who are not on antiretroviral treatment (ART) are at higher risk of SARS-CoV-2 infection and associated complications. 1 However, other groups reported comparable rates of infection and complications in people living with HIV on ART, in good clinical and immunological conditions. 2 In a recent meta-analysis, Wang et al 3 found an increased risk of COVID-19 mortality in patients with HIV, but probably modulated by age, region and study design. Whether ART might play an antiviral role against SARS-CoV-2 is also a question to be answered.
Data are lacking regarding COVID-19 in CALWH. The incidence of SARS-CoV-2, risk of complication and rate of seroconversion among CALWH have not been reported. The aims of the study were to describe the epidemiological and clinical characteristics of the first SARS-CoV-2 positive cases registered among CALWH and to assess possible HIV-and ART-related risk or protective factors.
METHODS
A prospective multicenter study including CALWH followed up in the pediatric outpatient clinics of 5 hospitals in Madrid (Spain) between June 2020 and March 2021.
SARS-CoV-2 infection was considered confirmed when either a polymerase chain reaction (PCR) or rapid SARS-CoV-2 antigen test (RAT) in nasopharyngeal swab returned positive. PCR or RAT was performed throughout the study period on patients with symptoms and in those who reported contact with someone infected with SARS-CoV-2, following the indications of the Spanish Ministry for Health.
Blood samples for serological testing were collected after confirmed infection when patients attended routine outpatient appointments. Depending on the availability, various chemiluminescence serologic assays were used to determine SARS-CoV-2 IgG: COVID-19 VIRCLIA IgG-monotest, Vircell; ADVIA Centaur SARS-CoV-2 Total, Siemens; Alinity SARS-CoV-2 IgG II Quant, Abbott. All assays were performed according to the manufacturer's package insert.
Epidemiologic, immunovirological and ART data were collected from medical reports. Symptoms related to SARS-CoV-2 were actively collected during routine medical visits, by means of a specific questionnaire. Clinical and epidemiological characteristics, immunovirological data (undetectable plasma viral load: <50 copies/mL) and ART treatment (specifically, tenofovir alafenamide or tenofovir disoproxil fumarate exposure) were compared in patients with SARS-CoV-2 confirmed infection and those uninfected. The study was approved by the ethical committees of the participating hospitals. For children under 18, a parent/guardian signed an informed consent. Informed assent forms were collected when applicable. Patients over 18 consented to participate themselves. Clinical symptom data were collected retrospectively for some patients to complete the gap between the beginning of the pandemic and the approval of the prospective study. Each patient received an anonymous number code to maintain confidentiality.
Median and interquartile ranges were used to describe continuous variables, and numbers and percentages to express categorical variables. To compare the characteristics of patients with confirmed SARS-CoV-2 infection and those without, Fisher's exact tests were used for categorical variables, and the Mann-Whitney test was used for continuous variables. A P value < 0.05 was considered statistically significant. Windows SPSS.20 (Madrid, Spain) was used for statistical analysis.
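For illustration, both tests are available in SciPy; the counts and CD4+ values below are hypothetical and are not the study data:

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Categorical comparison (e.g., infection status by sex), 2x2 table:
#              infected  uninfected
table = [[5, 27],   # female
         [3, 25]]   # male
odds_ratio, p_categorical = fisher_exact(table)

# Continuous comparison (e.g., CD4+ counts, cells/uL):
cd4_infected = [582, 640, 671, 703, 818]
cd4_uninfected = [540, 600, 655, 700, 750, 900]
statistic, p_continuous = mannwhitneyu(cd4_infected, cd4_uninfected)

print(p_categorical, p_continuous)
```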
RESULTS
A total of 60 CALWH were studied during the study period. Among them, SARS-CoV-2 infection was confirmed in 8 (13.3%) patients: 7 diagnosed by PCR and 1 by RAT.
Median age of CALWH with SARS-CoV-2 infection was 19 years old (17-19.5 years), 62.5% were female. Three were Spanish (Caucasian) and 5 were born abroad (3 from Sub-Saharan area and 2 from Latin America). All were vertically infected, 5 patients were classified as CDC clinical stage A, 1 as stage B and 2 as stage C. By the time of SARS-CoV-2 infection, all were receiving ART. Plasma viral load was undetectable in 87.5% of patients, median CD4+ T-cell count was 671.5 cells/μL (582.5-817.5), and none had CD4+ T-cell count less than 500 cells/μL. SARS-CoV-2 symptoms were reported by 7 (87.5%) of the 8 patients (Table 1). The most common clinical manifestation was upper respiratory tract infection (62.5%). None presented with multisystem inflammatory syndrome in children or required hospital admission or SARS-CoV-2 specific treatment.
After confirmed infection, a SARS-CoV-2 IgG test was positive for 7 of 8 (87.5%) patients a median of 39 days (36.5-42.5 days) later. One asymptomatic patient with positive PCR, tested negative for SARS-CoV-2 IgG in 2 consecutive visits, at 1 and 6 months after the acute infection.
DISCUSSION
In our study of 60 CALWH in Madrid, the clinical presentation and outcomes of cases diagnosed with SARS-CoV-2 infection were comparable to those in the general pediatric population. 4 We found confirmed infection in 13.3% of patients, all of whom were asymptomatic or presented mild symptoms. None required admission or specific antiviral treatment. The seroconversion rate after acute infection was 87.5%, which does not appear to differ from that reported in healthy children, 5 although the numbers are small. All positive PCRs were performed before November 2021, so we assume that the infections were probably caused by the alpha and delta variants (no microbiological confirmation).
To our knowledge, this is one of the first series describing the incidence and clinical outcomes of SARS-CoV-2 infection in CALWH. Our results are reassuring, as the data suggest an incidence that seems comparable to that of the pediatric Spanish population. All data were actively collected according to a structured questionnaire, reducing the potential for recall bias. Symptoms related to SARS-CoV-2 were predominantly cough/rhinorrhea, followed by fever, similar to previously published data in healthy children 6,7 and with a similar rate of complications. Despite the deleterious effects of HIV on the immune system of vertically infected patients, including chronic inflammation and immunoactivation, 8 our results do not suggest that HIV infection since birth leads to a greater risk of SARS-CoV-2 morbidity in patients with good immunovirological control.
Some studies have found higher mortality among HIV-COVID-19 coinfected people, 3 and other groups have described low CD4+ cell counts as a risk factor for poor outcome. 9 In contrast, other studies found no relationship between COVID-19 incidence and outcomes and virological or immunological factors. 10,11 We found no differences regarding CDC clinical stage, CD4+ T-cell count or viral load between CALWH with and without confirmed COVID-19. Patients with lower CD4+ T-cell counts tended to present a higher risk of SARS-CoV-2 infection, but the differences did not reach statistical significance. However, the small sample size of our cohort may have limited our ability to detect any difference. In addition, all children were receiving ART and had good immunovirological control, limiting our ability to assess the possible influence of immunosuppression or ART on the outcome.
Patients with SARS-CoV-2 infection tended to be older in our series. This finding might be explained by the fact that adolescents probably have more social interaction and riskier behavior (meeting friends, breaking restrictive social rules, or being less aware of SARS-CoV-2 infection risks), whereas younger children would have been more consciously protected against virus exposure by their parents.

For this investigation, a comprehensive review of electronic health records was conducted to assess the demographic characteristics, social history, signs and symptoms at admission, laboratory test results, and treatment course of all patients in whom PeV was detected by the multiplex molecular panel during the cluster period.
Most patients became symptomatic in the community (22, 96%); 1 preterm infant became symptomatic while in the neonatal intensive care unit (NICU). One (4%) patient attended a child care facility, and 16 (70%) had siblings at home or were exposed to other children.
Leukopenia was detected in only 4 (17%) patients. All but one of the infants were admitted to the hospital; 4 (17%) infants developed severe disease that required treatment in the NICU. Brain magnetic resonance imaging was performed in the 4 severely ill NICU patients and detected restricted diffusion within the white matter, consistent with typical PeV meningoencephalitis, in all of these patients.
Antibiotics were initially prescribed for all 23 patients but were discontinued for 13 (57%) within 24 hours of the detection of PeV. The mean hospital stay was 4.5 days (range, 1-26 days). Twenty-one (91.3%) patients recovered without complications. One patient was scheduled for a 6-month follow-up for possible late-onset hearing loss and a hypercoagulation evaluation. One patient experienced persistent seizures and was anticipated to experience severe developmental delay.
Comment: The multiplex molecular panel was introduced at the children's hospital in May 2018. Nineteen cases were detected over 5 months in 2018, likely representing a baseline incidence of PeV CNS infections. Seven cases of PeV were detected in 2019-2021. The absence of a biennial peak in 2020 may reflect social isolation during the COVID-19 pandemic, suggesting that PeV transmission is closely associated with social activity. Twenty-nine cases, including the 23 cases described in this report, were detected at the children's hospital within a 6-week period in 2022. This peak in infections might be a result of relaxation of COVID-19 isolation measures, consistent with increased prevalence of other viruses (e.g., respiratory syncytial virus). When PeV is circulating, clinicians should consider testing for PeV in young infants, including those with normal CSF parameters. The rapid detection of PeV in CSF by multiplex molecular panels can limit antibiotic administration and improve patient management. | 2022-07-08T06:15:55.205Z | 2022-07-06T00:00:00.000 | {
"year": 2022,
"sha1": "8fd63582cca3fd1a95b4815ec5a361bb76869971",
"oa_license": null,
"oa_url": "https://journals.lww.com/pidj/Fulltext/9900/SARS_CoV_2_Infection_in_Children_and_Adolescents.119.aspx",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "03641c857b24bdd46983f3f6a1b9fd3e19916e39",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269039939 | pes2o/s2orc | v3-fos-license | Combined Anterior Cruciate Ligament Reconstruction (ACLR) and Lateral Extra-articular Tenodesis through the Modified Lemaire Technique versus Isolated ACLR: A Meta-analysis of Clinical Outcomes
Objective Lateral extra-articular tenodesis (LET) has been proposed to resolve rotatory instability following anterior cruciate ligament reconstruction (ACLR). The present meta-analysis aimed to compare the clinical outcomes of ACLR and ACLR with LET using the modified Lemaire technique. Materials and Methods We performed a meta-analysis following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. The literature search was performed on the PubMed, EBSCOHost, Scopus, ScienceDirect, and WileyOnline databases. The data extracted from the studies included were the study characteristics, the failure rate (graft or clinical failure) as the primary outcome, and the functional score as the secondary outcome. Comparisons were made between the patients who underwent isolated ACLR (ACLR group) and those submitted to ACLR and LET through the modified Lemaire technique (ACLR + LET group). Results A total of 5 studies including 797 patients were evaluated. The ACLR + LET group presented a lower risk of failure and lower rate of rerupture than the ACLR group (risk ratio [RR] = 0.44; 95% confidence interval [95%CI]: 0.26 to 0.75; I 2 = 9%; p = 0.003). The ACLR + LET group presented higher scores on the Knee Injury and Osteoarthritis Outcome Score (KOOS) regarding the following outcomes: pain, activities of daily living (ADL), sports, and quality of life (QOL), with mean differences of 0.20 (95%CI: 0.10 to 0.30; I 2 = 0%; p < 0.0001), -0.20 (95%CI: -0.26 to -0.13; I 2 = 0%; p < 0.00001), 0.20 (95%CI: 0.02 to 0.38; I 2 = 0%; p = 0.03), and 0.50 (95%CI: 0.29 to 0.71; I 2 = 0%; p < 0.00001), respectively, when compared with the ACLR group. Conclusion Adding LET through the modified Lemaire technique to ACLR may improve knee stability because of the lower rate of graft rerupture and the superiority in terms of clinical outcomes. Level of Evidence I.
Keywords ► anterior cruciate ligament reconstruction ► joint instability ► knee joint ► tenodesis ► treatment outcome
Introduction
Anterior cruciate ligament (ACL) ruptures are among the most commonly studied injuries in orthopedic research, and their incidence is estimated to range from 30 to 78 cases per 100 thousand people a year. 1 After ACL reconstruction (ACLR), 61% to 89% of athletes successfully return to sports, typically between 8 and 18 months after the reconstruction, depending on the level of play. 1 Under certain conditions, a rerupture can occur, which may be devastating. The reported rate of ACL rerupture ranges from 1% to 11%, and reruptures may be caused by traumatic reinjuries, biological graft failure, or technical surgical errors. 1,2 The management of ACL injury in patients at a higher risk of rerupture remains controversial. It has been shown that the risk factors for graft rupture include younger age (< 20 years), generalized hypermobility and physiologic knee hyperextension, and a return to high-risk (pivoting) sports. 3 Further, Saita et al. 4 showed that knee hyperextension and a small lateral condyle are associated with greater anterolateral rotatory instability, which is difficult to manage in patients who continue to show a positive pivot shift after isolated ACLR. In the literature, 3,5-7 the MacIntosh, Lemaire, and anterolateral ligament (ALL) reconstruction techniques have been shown to resolve anterolateral rotatory instability. Reconstruction of the ALL was found to reduce the graft failure rate in large series of patients at 2 years of follow-up. 8 The modified Lemaire technique has been shown to present a low complication rate and to cause a reduction in pivot-shift instability. 6 One of the reasons to favor lateral extra-articular tenodesis (LET) over ALL reconstruction is the evidence indicating that ALL reconstruction could overconstrain the lateral joint while not being as mechanically advantageous in resisting rotation. 9,10 The aim of LET is to decrease the rerupture rate by providing more stability to the knee joint. A cohort study by Cavaignac et al. 11 reported that ACLR with LET showed better graft maturity on magnetic resonance imaging (MRI) scans one year after the procedure. Mayr et al. 12 focused on the modified Lemaire technique, which has recently been used to perform LET, and showed that it may decrease the strain on the graft as well as residual rotational laxity, thus improving the clinical outcomes. Therefore, we conducted a meta-analysis to determine the impact of ACLR and LET through the modified Lemaire technique, compared with isolated ACLR, on patients with ACL rupture in terms of the rerupture rate and clinical outcomes. The objective of the present study was to determine the surgical outcome of ACLR with modified Lemaire LET for ACL rupture, as represented by the rerupture rate and clinical outcomes.
Search Strategy
We conducted a systematic review and meta-analysis based on the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. 13 The study protocol was registered in the Open Science Framework. The literature search was conducted in June 2022 on several databases, including PubMed, EBSCOHost, Scopus, ScienceDirect, and WileyOnline, focusing on the Population, Intervention, Control, and Outcome (PICO) strategy. The population consisted of patients with ACL tears, the intervention was ACLR and LET through the modified Lemaire technique, and isolated ACLR was the comparator. The outcomes assessed were the rerupture rate as the primary outcome, and the patient-reported outcome measures (PROMs) and functional scores as secondary outcomes.
Study Selection
The exclusion criteria were animal studies, revision cases of ACLR, concomitant posterior cruciate ligament (PCL) or meniscus reconstruction, underlying congenital condition or neoplasm, ACLR with ALL reconstruction, patients treated with pharmacologic treatment, nutrition treatment, physical therapy or isolated rehabilitation, and ACLR with LET not through the modified Lemaire technique.Only studies published in English within the last twenty years were included.
Quality Appraisal and Risk of Bias Assessment
Two authors (ED and LC) performed the identification and selection of studies, as well as data extraction. The quality assessment was performed by two other authors (MS, IJA). Differences in opinion between the two reviewers were resolved by reassessment and discussion with another author (EK). The selected studies were assessed using the Joanna Briggs Institute's tools for critical appraisal. 14
Data Extraction and Analysis
The data extracted from the included studies were characteristics such as author and year of publication, location, design, sample characteristics (age, gender, injury type), failure (graft or clinical failure), and outcomes (Knee Injury and Osteoarthritis Outcome Score [KOOS], functional outcome, and clinical outcome). The studies were assessed qualitatively and quantitatively using the Review Manager (RevMan, The Cochrane Collaboration, London, United Kingdom) software, version 5.4. The random-effects model was used to calculate the pooled ratio across studies, given the heterogeneity. The Cochrane I-squared (I 2 ) test was conducted to quantify the heterogeneity. The results of the studies are presented in a forest plot with the pooled risk ratio (RR).
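To make the pooling step concrete, the following is a minimal Python sketch of a random-effects (DerSimonian-Laird) pooled risk ratio with the Cochrane I 2 statistic, which is the general approach implemented in RevMan; the event counts below are hypothetical placeholders, not data from the included trials.

import math

# (events_treatment, n_treatment, events_control, n_control) per study;
# hypothetical counts for illustration only
studies = [(4, 100, 10, 98), (3, 150, 7, 152), (5, 120, 11, 125)]

log_rr, var = [], []
for a, n1, c, n2 in studies:
    log_rr.append(math.log((a / n1) / (c / n2)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)  # variance of log RR

w = [1 / v for v in var]                          # fixed-effect weights
fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

w_re = [1 / (v + tau2) for v in var]              # random-effects weights
pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print("RR = %.2f (95%%CI: %.2f to %.2f); I2 = %.0f%%"
      % (math.exp(pooled), math.exp(pooled - 1.96 * se),
         math.exp(pooled + 1.96 * se), i2))

Under this model, the between-study variance (tau2) is added to each study's sampling variance, so smaller studies receive relatively more weight than under a fixed-effect analysis.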
Results
In the initial screening, 163 studies were retrieved (►Fig. 1). Among the ten remaining studies, two did not have a primary outcome (success rate), 12,15 one included skeletally immature patients, 16 and one did not have an adequate control. 17 In the end, we found five studies [18][19][20][21][22] eligible for qualitative and quantitative analysis after the search strategies were applied. The appraisal of the studies using the Joanna Briggs Institute's critical appraisal tools showed that all of them were considered good in terms of methodological quality, with a low possibility of bias in their design, conduct, and analysis (►Table 1). ►Table 2 shows the characteristics of the studies, including the intraoperative details. ►Table 3 shows the outcome parameters measured for each study.
In the present study, we found that the RR for failure was lower in the ACLR + LET group with the modified Lemaire technique than in the ACLR group, with low heterogeneity among the studies (RR = 0.44; 95% confidence interval [95%CI]: 0.26 to 0.75; I 2 = 9%; p = 0.003) (►Fig. 2).
The meta-analysis showed a superiority of the ACLR + LET group with the modified Lemaire technique regarding the following outcomes on the KOOS: pain, activities of daily living (ADL), sports, and quality of life (QoL), with mean differences of 0.20 (95%CI: 0.10 to 0.30; p < 0.0001), -0.20 (95%CI: -0.26 to -0.13; p < 0.00001), 0.20 (95%CI: 0.02 to 0.38; p = 0.03), and 0.50 (95%CI: 0.29 to 0.71; p < 0.00001) respectively. However, there was no significant difference between the groups in the symptom scores on the KOOS, with a mean difference of 0.10 (95%CI: -0.03 to 0.2; p = 0.13). Neither were there differences between the groups regarding the scores on the Tegner Activity Scale (TAS) and Lysholm Knee Scoring Scale (LKSS), with mean differences of 0.19 (95%CI: -0.49 to 0.87; p = 0.58) and 3.45 (95%CI: -6.22 to 13.22; p = 0.48) respectively. However, there was a significant difference regarding the scores on the International Knee Documentation Committee (IKDC) Subjective Knee Form, with a mean difference of 0.70 (95%CI: 0.57 to 0.83; p < 0.00001). Low heterogeneity was found in the scores on the KOOS and IKDC Subjective Knee Form, but high heterogeneity was found in the TAS and LKSS scores (►Fig. 3).
Discussion
The most important findings of the current research were that, when compared with the ACLR group, the ACLR + LET with modified Lemaire group presented a lower failure rate and significant superiority regarding the functional outcome based on the mean differences in the pain, ADL, sports, and QoL domains.
When compared with the ACLR group, the ACLR + LET with modified Lemaire group was found to present a lower failure rate (RR = 0.44; I 2 = 9%; p = 0.003). The ACLR + LET with modified Lemaire group showed a significant superiority regarding the functional outcome based on the mean differences in the scores on the KOOS domains of pain, ADL, sports, and QoL (p < 0.0001; p < 0.00001; p = 0.03; and p < 0.00001 respectively) and in the scores on the IKDC Subjective Knee Form (p < 0.00001).
Rotational stability was not recovered with isolated ACLR in a certain population. 23 Therefore, both intra- and extra-articular procedures were necessary to improve ACL stability, thus improving the ability to perform sports in this population. It is known that LET is one of the extra-articular procedures that preserves knee stability. Na et al. 23 compared isolated ACLR to ACLR combined with anterolateral extra-articular procedures, and they noticed that both techniques improved pivot-shift grades and graft failure rates. However, in the ACLR + LET group, there was an increased risk of knee stiffness and adverse events. 23 These findings explain the significantly better KOOS and IKDC scores in the group submitted to ACLR + LET with the modified Lemaire technique.
Various LET procedures, namely the Lemaire, MacIntosh, and ALL reconstruction techniques, are the choices to manage rotatory instability. However, in a kinematic study published by Inderhaug et al. 10 in 2017, the authors found that ALL reconstruction is an underconstrained procedure. Compared with ALL reconstruction, the modified Lemaire technique has been shown to present a low complication rate and to cause a reduction in pivot-shift instability. The modified Lemaire technique also showed good graft survival and PROMs in a high-risk population. 1 This may suggest that LET is an effective technique to restore joint stability to a knee with additional features of laxity. 2,10 In a meta-analysis, Onggo et al. 24 compared ACLR and ACLR + LET performed through any method, including studies with a minimum of two years of follow-up. They found improved stability (RR = 0.59; 95%CI: 0.39 to 0.88) and improved clinical outcomes in the ACLR + LET group, shown by mean differences in the IKDC and Lysholm scores of 2.31 (95%CI: 0.54 to 4.09) and 2.71 (95%CI: 0.68 to 4.75) respectively. In addition, there was less likelihood of graft rerupture in the ACLR + LET group, with an RR of 0.31 (95%CI: 0.17 to 0.58). 24 In a single-armed systematic review involving 851 patients who underwent ACLR + LET, Grassi et al. 25 showed favorable results in terms of KOOS scores, with 74% of the patients returning to their previous sports activities, as well as complication and failure rates of 8.0% and 3.6% respectively.
The combination of ACLR and LET has also been considered safe for patients. Feller et al. 26 reported that, at the 12-month follow-up, a contact-related graft rupture occurred in one patient, accounting for 4% of the total. Two additional ACL injuries in the opposite knee were observed, making up 9% of the cases, with 1 of them being an ACL graft rupture at 11 months postoperatively and another occurring at 22 months. Furthermore, a separate incident of contralateral ACL graft rupture took place at the 26-month follow-up. 26 Concerns were raised about the potential for excessive restriction of the lateral compartment of the knee and the subsequent development of lateral compartment osteoarthritis in relation to LET. However, a meta-analysis by Devitt et al. 27 provided strong evidence that the addition of LET reduces the movement of the lateral compartment. Biomechanical studies support these clinical findings, showing that both anatomic ALL reconstruction and LET procedures can overly restrict the lateral compartment. On the contrary, a recent systematic review indicated that adding LET to ACLR does not increase long-term osteoarthritis rates. While there is insufficient evidence to determine whether adding LET to primary ACLR improves various outcomes, there is strong evidence that LET effectively reduces laxity in the lateral compartment, as demonstrated by stress radiography. 28,29 Regarding biomechanics, there is still controversy about ACLR + LET with the modified Lemaire technique. A laboratory study 10 with a fresh frozen cadaver found that this technique might have overconstrained knee kinematics. However, a pilot study by Di Benedetto et al. 30 on 16 patients aged 21 to 37 years who underwent ACLR + LET revealed reacquisition of sagittal knee stability and gait dynamics to the preoperative level. These findings are also supported by a meta-analysis by Feng et al., 31 who reported that, in 1,745 patients, ACLR + LET provided a reduced pivot shift, with an odds ratio of 0.48 (95%CI: 0.31 to 0.74), and a better graft failure rate, with an odds ratio of 0.34 (95%CI: 0.20 to 0.55).
As a limitation of the present study, there is still a lack of raw data to enable a more comprehensive functional outcome analysis. Therefore, future studies with large samples might be needed to find better evidence regarding the effectiveness of ACLR combined with LET through the modified Lemaire technique.
Conclusion
The combination of LET through the modified Lemaire technique and ACLR showed reliable results in minimizing the rate of graft rerupture, as well as superiority in terms of clinical outcomes compared with isolated ACLR, owing to its role in improving knee stability.
Fig. 2 Risk ratio for failure in the ACLR + LET through the modified Lemaire technique group and the ACLR group.
Fig. 3 Forest plot of the secondary outcomes of the included studies.
Table 2 Characteristics of the included studies | 2024-04-12T05:18:46.695Z | 2023-03-27T00:00:00.000 | {
"year": 2024,
"sha1": "e15b8fd912e74c9d421033f4ec7f0f237cce4f1a",
"oa_license": "CCBY",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0044-1785492.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e15b8fd912e74c9d421033f4ec7f0f237cce4f1a",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254917035 | pes2o/s2orc | v3-fos-license | Relationship between serum chloride and prognosis in non-ischaemic dilated cardiomyopathy: a large retrospective cohort study
Objectives Serum chloride has a unique homeostatic role in modulating neurohormonal pathways. Some studies have reported that hypochloremia has potential prognostic value in cardiovascular diseases; thus, we aimed to investigate the association of baseline serum chloride with clinical outcomes in elderly patients with non-ischaemic dilated cardiomyopathy (NIDCM). Design Retrospective study. Setting and participants A total of 1088 patients (age ≥60 years) diagnosed with NIDCM were enrolled from January 2010 to December 2019. Results Logistic regression analyses showed that serum chloride was significantly associated with in-hospital death. Receiver operating characteristic (ROC) curve analyses showed that serum chloride had excellent prognostic ability for in-hospital and long-term death (area under the curve (AUC)=0.690 and AUC=0.710, respectively). Kaplan-Meier survival analysis showed that the patients with hypochloremia had worse prognoses than those without hypochloremia (log-rank χ2=56.69, p<0.001). After adjusting for age, serum calcium, serum sodium, left ventricular ejection fraction, lg NT-proBNP and use of diuretics, serum chloride remained an independent predictor of long-term death (HR 0.934, 95% CI 0.913 to 0.954, p<0.001). Conclusions Serum chloride concentration was a prognostic indicator in elderly patients with NIDCM, and hypochloremia was significantly associated with both in-hospital and long-term poor outcomes.
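As a rough illustration of the survival analyses described in the abstract above, the following Python sketch uses the lifelines library; the file name, column names, and the hypochloremia cut-off of <98 mmol/L are assumptions made for the example, not the study's actual dataset or definition.

import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("nidcm_cohort.csv")    # hypothetical file and columns
low = df[df["chloride"] < 98]            # assumed hypochloremia cut-off
normal = df[df["chloride"] >= 98]

# Kaplan-Meier-style group comparison via the log-rank test
result = logrank_test(low["time"], normal["time"],
                      event_observed_A=low["death"],
                      event_observed_B=normal["death"])
print("log-rank chi2 = %.2f, p = %.4f" % (result.test_statistic, result.p_value))

# Cox model adjusted for the covariates listed in the abstract
cph = CoxPHFitter()
cph.fit(df[["time", "death", "chloride", "age", "calcium", "sodium",
            "lvef", "lg_ntprobnp", "diuretics"]],
        duration_col="time", event_col="death")
cph.print_summary()    # hazard ratio per 1 mmol/L increase in serum chloride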
Discussion section:
The role of chloride in heart failure syndrome has developed markedly, not only as a prognostic marker but also in its pathophysiologic roles, particularly in relation to its physiologic (e.g., fluid distribution, hemodynamics), neurohormonal (i.e., the central role of the RAA system), and therapeutic aspects. The title asks how hypochloremia influences the prognosis of heart failure patients, but the commentary on heart failure pathophysiology is insufficient. Summarizing the recent advancement of chloride-related heart failure pathophysiology well, incorporating suitable literature, would strengthen the discussion section of this study. It is therefore appropriate to include short comments on the advancement of the role of chloride in heart failure pathophysiology, described above, by citing the core literature on this aspect, as follows:
Kataoka H (2017). Proposal for heart failure progression based on the 'chloride theory': worsening heart failure with increased vs. non-increased serum chloride concentration. ESC Heart Failure 4: 623-631.
Kataoka H (2020). Review article. Proposal for new classification and practical use of diuretics according to their effects on the serum chloride concentration: rationale based on the chloride theory. Cardiology and Therapy 9(2): 227-244. DOI: 10.1007/s40119-020-00172-9.
Kataoka H (2021). Review article. Chloride in heart failure syndrome: its pathophysiologic role and therapeutic implication. Cardiology and Therapy 10(2): 407-428. DOI: 10.1007/s40119-021-00238-2.
Kataoka H (2022). Mechanistic insights into chloride-related heart failure progression according to the plasma volume status. ESC Heart Fail 9: 2044-2048.
Kataoka H (2021). Clinical significance of spot urinary chloride concentration measurements in patients with acute heart failure: investigation on the basis of the 'tubulo-glomerular feedback' mechanism. Cardio Open 6: 123-131.
REVIEWER
Ru Liu, Chinese Academy of Medical Sciences, Department of Cardiology. REVIEW RETURNED: 08-Oct-2022
GENERAL COMMENTS
In this paper, the authors from Prof. Jiang's team aimed to explain how serum chloride influences the prognosis in elderly patients with nonischemic dilated cardiomyopathy, using a large sample of data including a total of 1,088 patients (age ≥ 60 years) diagnosed with nonischemic dilated cardiomyopathy (NIDCM) enrolled from January 2010 to December 2019. A large sample spanning a decade is really rare, and I suppose the nature of this study should be "prospective observational study", not "retrospective". They found that, in the Cox regression model, serum chloride remained a significant predictor of long-term mortality after adjusting for age, serum calcium, serum sodium, LVEF, lg NT-proBNP, and use of diuretics. Secondly, I don't agree with the definition: "the primary endpoint of this study was in-hospital death. The secondary endpoint was all-cause death during follow-up". In-hospital and long-term mortality should be considered the primary endpoints. I suggest adding one or more secondary endpoints, such as readmission due to deterioration of heart failure. Thirdly, the Discussion section lacks a strong logical hierarchy and should be restated; in particular, it should highlight the consistency and inconsistency between the results of this study and previous studies, and analyze the reasons. It is important to make clear that this is a real-world observational study of a specific population with clinical problems that cannot be covered and addressed by RCTs. This means that even if the baseline is somewhat biased, the data are still meaningful. This point should be reflected in the Discussion.
Although there were limitations and room for improvement, this real-world, large-sample data set of NIDCM still has weight and value. I suggest major revision.
Reviewer 1
Reviewer comment 1: Introduction section: Page 5, line 50: It is appropriate to cite suitable literature that investigated or reviewed the association between hypochloremia and prognosis, such as the review article by Rivera et al. (Rivera FB et
Authors' response: We thank the reviewer for their suggestion. We reviewed the current research on serum chloride in HF, added more appropriate references, and modified the description in our revised manuscript.
Reviewer comment 2: Methods section: Page 6, line 25: Please provide a short description of the criteria for NIDCM proposed by the scientific committee of the AHA.
Authors' response: We thank the reviewer for highlighting this point and have added a description of NIDCM in the revised manuscript.
Methods, page 5, line 86-88 According to the scientific statement established by the American Heart Association, NIDCM is defined as ventricular dilatation and systolic dysfunction excluding vascular diseases such as coronary heart disease and myocardial infarction [8].
Reviewer comment 3: Result section: 1) How was the heart failure status at baseline presentation recruited to the present study? Namely, were the study patients acutely decompensated heart failure status, or stable chronic heart failure status, or both of them?
Authors' response: We thank the reviewer for raising this issue. In this study, the mean LVEF of participants was lower than 35%, indicating that most of them had HFrEF. Patients with NIDCM, especially older adults, are frequently admitted because of HF symptoms, including chronic HF and acute decompensated HF. We have mentioned this point in our revised manuscript.
Results, page 6, line 126-128 Second, the mean LVEF was less than 35% in both groups and lower in the hypochloremic group, in which patients had higher values of serum creatinine on admission (Table 1). Discussion, page 10, line 190-192 Patients with NIDCM are frequently admitted because of HF symptoms, including chronic HF and acute decompensated HF, whereas those enrolled in this study presented a mean LVEF lower than 35%, which suggests a higher risk of SCD [26,27].
Authors' response: We thank the reviewer for the comment. According to the standard of our hospital laboratory, the unit of serum chloride is mmol/L, as also shown in the literature. We have modified the symbol in this sentence.
Reviewer comment 5: Discussion section: The role of chloride in heart failure syndrome has developed markedly, not only as a prognostic marker but also in its pathophysiologic roles, particularly in relation to its physiologic (e.g., fluid distribution, hemodynamics), neurohormonal (i.e., the central role of the RAA system), and therapeutic aspects. The title asks how hypochloremia influences the prognosis of heart failure patients, but the commentary on heart failure pathophysiology is insufficient. Summarizing the recent advancement of chloride-related heart failure pathophysiology well, incorporating suitable literature, would strengthen the discussion section of this study. It is therefore appropriate to include short comments on the advancement of the role of chloride in heart failure pathophysiology, described above, by citing the core literature on this aspect, as follows.
Authors' response: We thank the reviewer for this suggestion and strongly agree with the comment. In our revised manuscript, we have re-discussed the relationship between hypochloremia and the pathophysiology of HF, especially the activation of neurohormonal systems and the influence of diuretic therapy, and cited appropriate references.
Discussion, page 10-11, line 192-204, 210-215 When the cardiac ejection fraction decreases, compensatory homeostatic responses to a fall in cardiac output are activated, such as the activation of the sympathetic nervous system (SNS) and the renin-angiotensin-aldosterone system (RAAS); however, chronic activation of these neurohormonal systems exerts deleterious effects on the heart [28]. It has been reported that hypochloremia is related to higher renin secretion and thus enhances RAAS activity, resulting in worsening HF [29,30]. Meanwhile, the stimulation of angiotensin II and aldosterone also promotes the excretion of chloride, contributing to the development of hypochloremia [31]. This means that the relationship between hypochloremia and RAAS activity is complex and closely related to HF pathophysiology.
In chronic HF, hypochloremia might be dilutional in nature and result from an increased release of arginine vasopressin, which promotes free-water reabsorption in the renal collecting ducts, while increased angiotensin II activation can stimulate aldosterone secretion, resulting in fluid retention [25]. On the other hand, hypochloremia could also be depletional because of diuretic-induced salt wasting, especially when chloride is lower relative to sodium [14,25]. The use of loop and thiazide diuretics can effectively reduce the plasma volume by depleting serum chloride; however, these diuretics may induce hypochloremia that could lead to diuretic resistance [29]. Previous studies indicated that acetazolamide, sodium-glucose cotransporter 2 inhibitors, and vasopressin receptor antagonists can potentially increase the serum chloride concentration while decreasing plasma volume, but further randomized controlled trials are required to verify the efficiency of these therapies [10].
Reviewer 2
Reviewer comment 1: I suppose the nature of this study should be "prospective observational study", not "retrospective".
Authors' response: We thank the reviewer for the consideration of this point. In 2021, we retrospectively collected medical information from patients with NIDCM admitted to our hospital from January 2010 to December 2019, without prospective observation.
Reviewer comment 2: Firstly, there were several variables showing significant differences in the baseline analysis between the two groups. Please explain why those variables were chosen in your multivariable model. What is your principle? Please explain and list all included variables at the bottom of the table.
Authors' response: We thank the reviewer for their attention to this point and apologize that this information was missing from our original manuscript. In the revised manuscript, we describe the principle by which we chose the confounders included in the multivariable model and list all included variables at the bottom of the table.
Discussion, page 9-10, line 174-182 Plasma levels of NT-proBNP and LVEF have proven to be powerful prognostic biomarkers of cardiac disease [21,22]; after adjusting for them, serum chloride remained independently associated with clinical outcome in this study. In addition, patients with DCM often receive diuretic treatment, especially those with volume overload, which may promote the depletion of chloride and sodium [23]. Serum calcium is recognized as an important electrolyte for maintaining cardiac function. Our study showed that serum chloride concentrations were still independently associated with in-hospital and long-term death after multivariable adjustment for potential confounders, including serum sodium and calcium levels and use of diuretics, while serum sodium levels were no longer related to prognosis.
Reviewer comment 3: Secondly, I don't agree with the definition: "the primary endpoint of this study was in-hospital death. The secondary endpoint was all-cause death during follow-up". In-hospital and long-term mortality should be considered as the primary endpoints. I suggest adding one or more secondary endpoints like readmission due to deterioration of heart failure.
Authors' response: We thank the reviewer for their comment and suggestion. We have revised the description of the observational endpoints and apologize that the other clinical events mentioned by the reviewer are not available in our database. | 2022-12-21T16:10:52.833Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "fb93a3c5031c5b5b30c1759649d00dcf7b44d530",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e5df820d0602c189a8fbcfd57fa235b62e3d1f68",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14398736 | pes2o/s2orc | v3-fos-license | Regional Relationship between Macular Retinal Thickness and Corresponding Central Visual Field Sensitivity in Glaucoma Patients
Purpose. To investigate the relationship between macular retinal thickness (MRT) and central visual field sensitivity (VFS) in patients with glaucoma. Methods. This retrospective study enrolled patients diagnosed with open-angle glaucoma. All study patients underwent the Humphrey 10-2 visual field (VF) test and a Spectralis spectral-domain optical coherence tomography (SD-OCT) exam for MRT measurement. Results. Sixty-eight eyes of 68 patients were examined. The correlation coefficients between VFS and MRT were 0.331 (P = 0.006) and 0.491 (P < 0.001) in the superior and inferior hemispheres, respectively. The average MRT in the eyes with abnormal 10-2 VF hemifields was significantly thinner than that in the eyes without abnormal hemifields in both hemispheres (P = 0.005 and P < 0.001 in the superior and inferior hemispheres, respectively). The average MRT values with an optimal sensitivity-specificity balance for discriminating the abnormal VF hemifield from the normal hemifield were 273.5 μm and 255.5 μm in the superior and inferior hemispheres, respectively. The area under the receiver operating characteristic curve was 0.701 in the superior hemisphere and 0.784 in the inferior hemisphere (both P < 0.05). Conclusions. MRT measured through SD-OCT was significantly correlated with central VFS. Lower MRT values might be a warning sign for central VF defects in glaucoma patients.
Introduction
Glaucoma is among the leading causes of blindness worldwide. It is a group of ocular diseases characterized by optic neuropathy associated with progressive thinning of the neuroretinal rim and loss of the retinal nerve fiber layer (RNFL) together with a particular pattern of visual field (VF) loss. Compared with standard automated perimetry, which is a functional and subjective test with greater intertest variability, optical coherence tomography (OCT) provides a highly qualitative, objective, and reproducible structural assessment of the optic nerve, RNFL, and macular thickness [1]. A previous study has reported that the correlation between peripapillary RNFL (pRNFL) thickness measured through OCT and visual function is high [2]. In addition, glaucomatous damage to the RNFL can precede future VF damage by up to 5 years [3]. OCT has been recognized as the most useful diagnostic tool in detecting early glaucoma among structural tests.
The macula contains more than 30% of the retinal ganglion cells and is vital for visual function [4]. There is growing evidence that early glaucoma can affect the macula and cause paracentral VF deficits [5,6]. However, the routine VF test using the Humphrey field analyzer (HFA) with the Swedish interactive threshold algorithm (SITA) 24-2 or 30-2 programs has test points spaced 6 degrees apart, with only 4 test points placed within the central 8 degrees of the visual field. Paracentral scotomas in certain patients can be overlooked in the routine 24-2 or 30-2 VF tests because of relatively poor central VF sampling [7][8][9]. By contrast, the 10-2 VF program has 68 test points within the central 10 degrees and thus provides more detailed information in the central VF. However, performing both the 24-2 and 10-2 VF testing for every glaucoma case is time consuming. A more objective and efficient method for evaluating the macula damage associated with glaucoma is necessary.
Since glaucomatous damage of the macula can compromise central visual function, even in the early stage, measuring macular retinal thickness (MRT) through OCT in early glaucoma patients is imperative and provides several advantages: the macular area contains the highest densities of retinal ganglion cells, and measurements of MRT exhibit less variability than those of the peripapillary RNFL do. However, the nature of macular damage in early glaucoma and its relationship with the central VF are poorly understood. Rolle et al. showed that there is a significant structure-function correlation between MRT and central VF sensitivity (VFS) [10]. However, they extracted 16 central VF test points from the 24-2 VF for central VFS, which provided only gross information on the central VF. Because of the clinical importance of the central VF, more detailed information about central visual function should be evaluated, and the correlation between structure and central VFS should be elucidated. Therefore, in this study, we compared the VFS of the divided zones in the Humphrey 10-2 VF with the corresponding zones of MRT by using posterior pole asymmetry analysis in Spectralis spectral-domain OCT (SD-OCT). The purpose of our study was to determine the localized structure-function relationship in each corresponding area of the macula. It is crucial to clarify the structure-function relationship in the central VF to obtain diagnostic information on central VF defect detection based on MRT in the divided zones.
Materials and Methods
2.1. Participants. This retrospective study was conducted between August 2014 and July 2015. Participants were enrolled from the Glaucoma Clinic of the Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan. The study enrolled patients diagnosed with primary open-angle glaucoma or normal tension glaucoma. Patients with glaucomatous optic neuropathy on fundus examination and a mean deviation better than −20 dB on the 30-2 VF testing were included. The Institutional Review Board and Ethics Committee of Chang Gung Memorial Hospital approved this study, which adhered to the tenets of the Declaration of Helsinki.
All patients underwent comprehensive ophthalmic evaluation, including best-corrected visual acuity (BCVA) assessment, refraction, slit-lamp biomicroscopy, intraocular pressure measurement, central corneal thickness measurement, axial length (AL) measurement (AL-Scan Optical Biometer; Nidek, Japan), optic nerve head (ONH) evaluation and fundus examination, digital color fundus photography (Digital Non-Mydriatic Retinal Camera; Canon, Tokyo, Japan), VF testing using the Humphrey 30-2 SITA standard strategy (Carl Zeiss Meditec; Jena, Germany) and 10-2 SITA standard strategy, and retinal thickness measurement over the posterior pole through Spectralis SD-OCT examination (Heidelberg Retinal Engineering, Dossenheim, Germany). The worse eye from each patient was selected. All examinations were conducted within 6 months of the SD-OCT examination. The exclusion criteria were a BCVA lower than 20/40 in Snellen equivalents; a spherical equivalent refractive error outside the range of −10.00 to +6.00 diopters; an age younger than 20 or older than 80 years; previous intraocular surgery; ocular diseases other than cataract and glaucoma; diseases that could affect macular thickness, such as macular pucker, macular edema, drusen, and diabetic retinopathy; and unreliable 30-2 and 10-2 VF test results (fixation loss of >33%, false negative error or false positive error of >33%).
Glaucoma was diagnosed when the optic disc exhibited glaucomatous changes, such as localized or diffuse neuroretinal rim thinning of the ONH, a vertical cup-to-disc ratio asymmetry greater than 0.2, or RNFL defects corresponding to the glaucomatous VF defects. Glaucomatous VF defects were defined on the basis of the Humphrey 30-2 VF testing and confirmed through at least 2 reliable examinations for which 1 or more of the following criteria were met: a cluster of 3 or more nonedge points with a probability of less than 5%, including 1 point or more with a probability of less than 1%, on the pattern deviation map in at least 1 hemifield; a pattern standard deviation with a probability of less than 5%; and glaucoma hemifield test results outside the normal limits [11]. Central VF defects on the 30-2 VF were defined as the involvement of at least one of the central 12 cardinal points corresponding to the region in the central 10 degrees tested by the 10-2 VF, with a threshold depressed by an amount significant at P < 5%. The 10-2 VFs were classified as abnormal by applying the cluster rule: a cluster of 3 or more contiguous points (5%, 5%, and 1% or 5%, 2%, and 2%) within a hemifield on either total deviation or pattern deviation maps [8]. Retinal thickness over the posterior pole was measured with an OCT volume scan centered on the fovea; this area was divided into an 8 × 8 grid consisting of 3° × 3° squares. The average retinal thicknesses of the superior and inferior hemispheres for each grid, as well as the total retinal thickness, were calculated. Only high-quality scans with signal strengths of more than 15 dB were used for analysis.
Statistical Analysis.
To analyze the structure-function relationship, we divided the posterior pole retinal thickness map and the 10-2 VF threshold map into 16 corresponding zones (Figure 1). The posterior pole retinal thickness map was divided into 8 zones in each hemisphere. The average retinal thickness values of the 4 adjacent square cells in each zone were used for statistical analysis. VF sensitivities were converted to the linear scale using the Lambert factor, on the basis of the formula dB = 10 × log10(1/L), and the average threshold values in each zone were used for statistical analysis. Each divided area of the posterior pole retinal thickness map was labeled from 1 to 8 for each hemisphere. The numbers were ordered from the temporal to the nasal retina and from the peripheral to the central retina. VFS values and SD-OCT data were all registered in right-eye orientation. The correlation between the average MRT and VFS in each corresponding zone, as well as the average hemisphere and total average values, was evaluated. Superior MRT (S) was matched with inferior VFS (i); inferior MRT (L) was matched with superior VFS (s).
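As a small worked example of this conversion, the Python sketch below (with hypothetical threshold values) converts dB thresholds to the linear 1/Lambert scale via dB = 10 × log10(1/L), averages them within a zone, and back-transforms the mean to dB; whether the study reported zone averages in linear or dB units is not stated, so the back-transform is illustrative only.

import math

zone_db = [28.0, 30.0, 27.0, 31.0]               # hypothetical thresholds (dB)
linear = [10 ** (db / 10) for db in zone_db]      # 1/L = 10^(dB/10)
mean_linear = sum(linear) / len(linear)
mean_db = 10 * math.log10(mean_linear)            # back-transform for reporting
print("zone mean sensitivity = %.1f dB" % mean_db)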
Pearson correlation was used to express the relationship between MRT and VFS. We also divided all patients into 2 groups: eyes with abnormal 10-2 VFs within a hemifield and eyes without abnormal 10-2 VFs within a hemifield. The Mann-Whitney test was used to compare MRT between the 2 groups. The areas under the receiver operating characteristic (ROC) curves (AUROCs) were calculated to assess the power of MRT to discriminate 10-2 VF involvement. The best cut-off values of MRT for predicting 10-2 VF involvement with the optimal sensitivity-specificity balance were derived from the Youden index [12]. All statistical analyses were performed using SPSS software version 19.0 (SPSS, Inc., Chicago, Illinois, USA). Data were expressed as the mean ± standard deviation. A P value less than 0.05 was considered statistically significant.
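To illustrate the cut-off analysis, the following Python sketch computes an AUROC for MRT as a discriminator of abnormal 10-2 hemifields and selects the cut-off maximizing the Youden index; the arrays are hypothetical placeholders, not study measurements. Because thinner retinas predict abnormal fields, the thickness is negated so that higher scores indicate abnormality.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

mrt = np.array([281.0, 262.5, 255.0, 258.0, 248.5, 270.0])  # microns (hypothetical)
abnormal_vf = np.array([0, 1, 1, 0, 1, 0])                   # 1 = abnormal hemifield

auc = roc_auc_score(abnormal_vf, -mrt)           # negate: thinner -> more abnormal
fpr, tpr, thresholds = roc_curve(abnormal_vf, -mrt)

youden = tpr - fpr                                # J = sensitivity + specificity - 1
best = int(np.argmax(youden))
cutoff = -thresholds[best]                        # undo the negation
print("AUROC = %.3f; cut-off = %.1f um (sensitivity %.2f, specificity %.2f)"
      % (auc, cutoff, tpr[best], 1 - fpr[best]))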
Results
Sixty-eight eyes of 68 patients were examined in this study. Table 1 summarizes the demographics and clinical characteristics. The mean deviation of the 30-2 VF was −6.93 dB. Table 2 and Figure 2 show the structure-function correlations of the total MRT, hemisphere MRT, and MRT of the 16 divided zones, with the corresponding VFSs. The correlation coefficients between VFS and MRT were 0.331 and 0.491 in the superior and inferior hemispheres, respectively, and 0.079-0.526 in the divided zones. (Figure 1: Each divided zone was labeled from 1 to 8 for each hemisphere; the average MRT of the superior temporal zone (S1) was matched with the VFS in the inferior nasal zone (i1), and the average MRT of the inferior temporal zone (L1) with the VFS in the superior nasal zone (s1).) Significant correlations between MRT and VFS were found in both hemispheres and in most of the corresponding divided zones, particularly in the parafoveal areas (S1-S4, S6, S7, L2, L3, and L5-L7). The areas with higher correlation coefficients were located in the inferior parafoveal area (L6 and L7) and the inferior and temporal areas (L2, L3, and L5) in the inferior hemisphere, and the superior and nasal areas (S2-S4) in the superior hemisphere. Table 3 demonstrates the agreement between the 30-2 VF and the 10-2 VF for central visual involvement. In particular, 10 (33.3%) of the 30 eyes for the inferior hemifield and 10 (20.0%) of the 50 eyes for the superior hemifield were classified as abnormal on the 10-2 VF but normal on the 30-2 VF.
The MRT values of the eyes classified as normal on the 30-2 VF but abnormal on the 10-2 VF were significantly thinner than those of the eyes classified as normal on both the 30-2 VF and the 10-2 VF (267.20 ± 10.72 μm versus 278.19 ± 16.76 μm, P = 0.044 for the superior hemisphere, and 265.10 ± 14.76 μm versus 277.55 ± 12.76 μm, P = 0.013 for the inferior hemisphere). Table 4 illustrates the differences in average MRT in the divided zones and hemispheres between the eyes with and those without abnormal 10-2 VF hemifields. The average MRT in the eyes with abnormal 10-2 VF hemifields was significantly thinner than that of the eyes without abnormal hemifields in both hemispheres and in most of the divided zones, except S1, S5, S6, L1, and L8, which comprised the more temporal peripheral areas. The AUROC and best cut-off values derived from the Youden index with optimal sensitivity-specificity balances for the MRT values are listed in Table 5. To discriminate normal from abnormal VFs, the sensitivity and specificity with a cut-off value of 273.5 μm in the superior hemisphere were 83.3% and 52.6%, respectively; the sensitivity and specificity with a cut-off value of 255.5 μm in the inferior hemisphere were 56.0% and 94.4%, respectively. The discriminating power for central VF involvement was generally fair (AUROC range: 0.607-0.819), except in zones S1, S5, and L8. The diagnostic power of MRT was best in the inferior temporal parafoveal area (AUROC = 0.819).
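As a quick arithmetic check on Table 5, the positive and negative likelihood ratios follow directly from sensitivity and specificity; the short Python sketch below applies the standard formulas to the superior-hemisphere cut-off reported above (the resulting values are computed here for illustration, not copied from the table).

sens, spec = 0.833, 0.526                 # superior hemisphere, cut-off 273.5 um
pos_lr = sens / (1 - spec)                # +LR = sensitivity / (1 - specificity)
neg_lr = (1 - sens) / spec                # -LR = (1 - sensitivity) / specificity
print("+LR = %.2f, -LR = %.2f" % (pos_lr, neg_lr))   # about 1.76 and 0.32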
Discussion
In the present study, we demonstrated the structure-function relationship between MRT and central VFS. To the best of our knowledge, this is the first study to evaluate the regional correlation between MRT and central VFS by using the Spectralis SD-OCT device and the Humphrey 10-2 VF test. The MRT values were shown to be significantly correlated with central VFS. Lower MRT values might be a warning sign for central VF deficits in early to moderate glaucoma. The essential pathologic process in glaucoma is the loss of retinal ganglion cells and their axons, leading to a reduction in the thickness of the nerve fiber layer [13]. The loss of retinal ganglion cells and the nerve fiber layer can occur in the posterior pole, where these cells comprise 30%-35% of the retinal thickness. Macular thickness is a surrogate indicator of retinal ganglion cell thickness for glaucoma diagnosis [14]. Zeimer et al. first described losses in retinal thickness at the posterior pole of patients with early glaucoma by using a retinal topographer (Retinal Thickness Analyzer; Talia Technology Ltd., Neve Ilan, Israel) [15]. After the introduction of time-domain OCT, Greenfield et al. also reported reduced macular thickness in early- and moderate-stage glaucoma, and the changes in macular thickness were shown to correlate highly with visual function [14]. The newer generation of OCT, namely SD-OCT, enables an increased speed of image acquisition and improvements in eye tracking and signal-to-noise ratio, providing higher-resolution images and revealing larger areas of the macular region. By using SD-OCT with 3D OCT, Nakatani et al. demonstrated that macular thickness was significantly correlated with VFS in early glaucoma [16]. In addition, they found that the macular parameters of SD-OCT had higher reproducibility than those of the pRNFL. Because pRNFL measurement is prone to be affected by misalignments and disc anomalies, such as tilted discs and peripapillary atrophy, and because fixation is relatively easier for macular scans, macular thickness measurements may be more reproducible than pRNFL parameter measurements for glaucoma diagnosis. Ohkubo et al., by using SD-OCT with 3D OCT, further considered the thickness of the RNFL, the ganglion cell layer (GCL), the GCL and inner plexiform layer (IPL), and the RNFL, GCL, and IPL (GCC) in the macular area. They found that the correlation with VFS was best for the GCL or GCL + IPL within the central 5.8 degrees [17]. Ohkubo et al. claimed that the GCC is the most sensitive predictor for detecting macular damage.
In agreement with the aforementioned previous studies, we observed positive correlations between MRT and central VFS in most of the divided zones, particularly in the parafoveal area. The zones with lower correlations were located more peripherally or closer to the optic disc. The thickness of the areas close to the disc may have been confounded by peripapillary atrophy. The use of the 10-2 VF for central VF testing is unique in our study because it more accurately represents visual function in the macula. There is growing evidence that early glaucomatous damage involves the macula and causes corresponding central field change [5,6]. However, because most previous studies used the 24-2 VF for central field testing and extracted only 16 central VF test points for structure-function correlation, central field defects may have been underdetected as a result of poor spatial sampling [18]. By using the 10-2 VF, this study provided integrated VFS in the central VF, thus achieving higher accuracy in defining the structure-function relationship. This study shows significant structure-function correlations in most of the divided zones, particularly in the parafoveal area, with correlation coefficients comparable to those of a previous study using the 24-2 VF [10]. Furthermore, this study not only demonstrates that central VF deficits might be overlooked by the routine 30-2 VF in up to 33% of eyes but also reveals the important role of MRT in detecting central VF deficits that might be missed by the 30-2 VF. The areas with significantly different MRT between normal and abnormal VFS were L2-L4 and L5-L7 in the inferior hemisphere and S2-S4, S7, and S8 in the superior hemisphere. Hood et al., by using frequency-domain OCT, proposed a schematic model, termed the macular vulnerability zone, to describe the glaucomatous damage of the macula [5,6,9]. In their model, the inferior macula was more susceptible to glaucomatous arcuate damage, and the macular vulnerability zone was narrower close to the disc and wider close to the temporal parafoveal zone. Furthermore, the vulnerable zone in the superior hemisphere was farther from the macula compared with that in the lower hemisphere. This model helps explain why, in the present study, greater differences in macular thickness between normal and abnormal VF hemispheres were found in the inferior and temporal parafoveal area and the outer zones of the superior hemisphere.
The AUROC for MRT was greater in the inferior hemisphere than in the superior hemisphere, particularly in the inferior temporal parafoveal area; this is in agreement with previous studies. By using the device employed in the present study (i.e., the Spectralis OCT), Rolle et al. divided the posterior pole into 4 quadrants and identified the highest discriminating power in the inferior nasal quadrant (AUROC = 0.82) [10]. Dave et al. also found the highest AUROC for the average inferior macular thickness (AUROC = 0.833) [19]. However, in these 2 studies, the 24-2 VF was used for central field sensitivity. Nakatani et al., by using SD-OCT with 3D OCT, found significant differences in all macular parameters between glaucoma patients and healthy participants, with the highest AUROC for the temporal outer macular thickness (AUROC = 0.79) [16].
There were several limitations to our study. First, the exact correspondence of central VF points and macular thickness measurement points remains uncertain, and the OCT measurement area is slightly wider than that of the 10-2 VF. Although the VF and MRT maps had similar visual angles, the reciprocal matching points of the 2 measurements may have differed in absolute position [20]. Moreover, considering the morphologic displacement of retinal ganglion cells in the foveal area, correction for retinal ganglion cell displacement may be required to improve the focal structure-function correlation [17,21]. However, there is currently no commercial OCT device that accounts for the displacement of retinal ganglion cells. In addition, taking the retinal ganglion cell displacement into account, the real anatomical areas in OCT corresponding to the stimulus locations of the VF points would be wider than the extent in VF degrees. It is thus reasonable to match the area of the 10-2 VF with the posterior pole macular thickness in Spectralis OCT, as shown in the present study. Second, only ethnically Chinese patients from Taiwan were analyzed in this study; differences may exist among ethnic groups. Another limitation is that a healthy group was not enrolled for evaluating the diagnostic power of MRT. Instead, we compared the MRT of patients with abnormal hemifields with that of patients without abnormal hemifields. Therefore, subtle structural changes may have remained to some extent in patients without abnormal hemifields, and the AUROC values in our study are slightly lower than those in a previous study [10]. In addition, the reliability criteria for VF testing were less strict than the reliability parameters of Humphrey Instruments Inc. However, some authors have recommended that relaxing the fixation loss criterion to a cut-off of less than 33% might increase the percentage of fields graded reliable with minimal effect on the sensitivity or specificity of the test [22]. The Spectralis SD-OCT device provides certain advantages: TruTrack active eye tracking and Heidelberg noise reduction enable the acquisition of accurate, reproducible, high-quality maps, and the posterior pole analysis covers an area as wide as 8 × 8 mm in the macula. Furthermore, this device measures the total retinal thickness at the macular area, including layers that are not affected by glaucoma, rather than measuring the retinal ganglion cell complex, as in other OCT devices. Therefore, the Spectralis SD-OCT device may be relatively less sensitive to glaucomatous change but is less prone to segmentation error and allows for higher reproducibility.
In conclusion, our study revealed significant correlations between regional MRT and central VFS. The reduction of retinal thickness at the macular area was associated with the loss of the central VF in early- and moderate-stage glaucoma patients. Lower MRT values might be a warning sign for central VF defects, which are easily missed when clinicians perform only standard perimetry with the 24-2 or 30-2 VF.
Disclosure
The study has been presented in part as a poster at the 31st Asia-Pacific Academy of Ophthalmology Congress held in conjunction with the 57th Annual Meeting of the Chinese Ophthalmological Society of Chinese Taipei.
The use of emergency medical services for palliative situations in Western Cape Province, South Africa: A retrospective, descriptive analysis of patient records
The World Health Organization (WHO) has noted an increasing global demand for palliative care owing to ageing populations and consequent increasing rates of non-communicable disease. [1,2] Despite this growing demand, there has been an inadequate corresponding supply of palliative care services. [1] Estimates indicate that 56.8 million people require palliative care annually, while only 14% receive such care. [3] This imbalance is particularly acute in low- to middle-income countries (LMICs), where up to 80% of patients requiring palliative care reside. [3,4] These increased LMIC palliative needs result from greater disease burdens, resource limitations and underdeveloped palliative care provision. [3,4] To correct this imbalance, integration between palliative services and other disciplines has been recommended. [5] One such developing area of integration is between palliative care and emergency medical services (EMS). [6,7] The limited existing data suggest that up to 10% of EMS call-outs may involve palliative situations and, given this intersection, palliative care should be integrated within EMS systems. [8]

Taking LMIC challenges, recommendations for cross-disciplinary palliative care integration and the intersection between EMS and palliative situations into account, the South African (SA) setting is pertinent. SA falls into the LMIC category and suffers a 'quadruple burden of disease' due to communicable diseases, particularly HIV and AIDS, high maternal and paediatric mortality rates, non-communicable disease and injury. [9] The ensuing chronic, life-limiting illnesses have resulted in an increased need for palliative care in the country, as noted by the SA Ministry of Health. [10] Using mortality data alone, an estimated 0.52% (n=286 000) of the SA population require palliative care annually. [11] Palliative care literature in the country has stated, 'to meet this need, additional services within the public health sector, including community and home-based care will need to be developed.' [11]

SA previously supported a World Health Assembly (WHA) resolution to strengthen palliative care systems, making palliative care development a priority in the country. [12,13] Accordingly, palliative care has been included as an essential service in the new National Health Insurance (NHI) proposal, and is considered a human right. [13] Furthermore, progress has recently been made in SA palliative care integration, with cross-disciplinary training in some locations being provided to nurses, doctors, correctional service facility workers and traditional healers. [11,12] One area in SA where palliative care remains non-integrated is within EMS systems. [14] Currently, palliative care does not form part of SA EMS training, protocols or patient management, nor do palliative care systems make formal use of EMS to assist in palliative care provision. [14] This results in poor management of palliative situations by EMS and represents an opportunity for enhanced palliative care provision within the country. [14] Moreover,
there is a dearth of SA-specific research concerning EMS and palliative care. Therefore, the characteristics of patients managed in palliative situations by SA EMS are unclear, as is the extent of intersection between the two. The aim of this study, therefore, was to examine EMS use for palliative situations in the Western Cape (WC) Province of SA by describing the frequency of intersection, patient characteristics and outcomes. For the purposes of this study, 'palliative situation' refers to any incident involving the care of a patient with palliative needs.
Design
An observational, descriptive, retrospective patient record review was employed. This study was compiled according to the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) extension of the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) checklist. [15]
Setting
The WC province of SA has a population of 7.3 million, [16] accounting for ~12% of the total SA population. [17] Like the rest of the country, the WC maintains two distinct healthcare systems: private and state. [18] State healthcare is supplied by the government to all citizens, while private healthcare is accessible only to those with healthcare insurance. Currently, only 17% of the SA population (24% in the WC) hold healthcare insurance. [19] State hospitals are divided into varying levels: district (level 1), regional (level 2) and tertiary (level 3). [20] District hospitals are frequently the entry point into the healthcare system, as regional and tertiary facilities are often geographically distant. They offer 24-hour emergency departments (EDs) and basic outpatient, inpatient, diagnostic and therapeutic services. Patients requiring care beyond the capabilities of district hospitals are referred to regional or tertiary hospitals, which are larger and capable of more complex, specialist diagnostic and therapeutic procedures. Patient records from two state hospitals, one district and one tertiary, within the WC were used in the present study. Both facilities have established palliative care services.
Within the EMS sector, both private and state, out-of-hospital emergency care is provided using a paramedic-led rather than physician-led system. [14] Currently, formal training at a higher education (HE) institute is required to register as an EMS provider. However, this was a relatively recent change and many providers with basic short-course qualifications remain registered. [21] The HE courses range from 1 (basic) to 4 (advanced) years in duration. Owing to the relatively low number and unequal distribution of advanced EMS providers in the country, ambulances are largely staffed with basic providers, while advanced providers frequently operate alone in rapid response vehicles.
Sample and sampling
All patient records of those who arrived at the district and tertiary hospitals between 1 January 2020 and 31 December 2020 via EMS conveyance leading to palliative care provision were included in the study. Illegible patient records, duplicates and those missing data pertaining to the mode of hospital arrival were excluded. Patient records of those who were not both conveyed by EMS and recipients of palliative care were likewise excluded.
Patient variables were extracted from a combination of EMS, hospital palliative service and ED records, all of which were included within individual patient files at both hospitals. Patient files at the district hospital were available in a digital database to which CHG was granted access. Patient files at the tertiary hospital were physically stored in the facility's records department; however, the palliative service and ED maintained digital databases. BS and KC were granted access to both the physical and digital records. They linked patients across the platforms with a heuristic that included unique hospital numbers, folder numbers, patient names and dates of birth.
Data were collected from November 2022 to February 2023 by CHG at the district hospital and BS and KC at the tertiary hospital. This was performed according to the recommendations of Gilbert et al. [22] to improve accuracy and minimise inconsistencies:
• Training: data collectors were trained in the study aims, objectives and data extraction tool prior to the study.
• Case selection: well-defined protocols and inclusion and exclusion criteria were developed and applied to the patient records.
• Definitions: all variables analysed were precisely defined.
• Extraction tool: a standardised extraction tool was used to guide data collection and uniformly handle data that were conflicting, ambiguous or missing.
• Meetings: throughout the data collection process, frequent meetings were held among the research team to ensure consistency in data handling.
• Monitoring: LG, WS and CHG closely monitored data-extracting performance.
• Testing of inter-rater agreement: CHG re-extracted data from a random sample of tertiary hospital patient care records (10%), blinded to the extracted data of BS and KC. Findings were compared, and an inter-rater reliability (IRR) of 1.0, calculated using Cohen's kappa (κ), was achieved.
Furthermore, the data extraction tool was piloted by CHG and BS prior to the study to enhance consistency.
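The inter-rater check in the last bullet can be made concrete in a few lines. This is a hedged sketch, not the study's script: the extracted values below are invented, and scikit-learn's `cohen_kappa_score` is simply one standard way to compute κ.

```python
# Compare two extractors' categorical codings of the same sampled records.
from sklearn.metrics import cohen_kappa_score

extractor_original = ["dyspnoea", "pain", "cancer", "dyspnoea", "other"]
extractor_recheck  = ["dyspnoea", "pain", "cancer", "dyspnoea", "other"]

kappa = cohen_kappa_score(extractor_original, extractor_recheck)
print(f"Cohen's kappa = {kappa:.2f}")   # identical ratings give kappa = 1.0
```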
Data were recorded and cleaned using Excel (Microsoft Corp., USA).
No missing-data imputation techniques were employed.
Data analysis
An a priori data extraction tool developed by the research team, based on previous international studies, [23,24] was used to extract the following variables:
• patient characteristics: age, sex, primary home caregiver, chief complaint, diagnosis
• outcome: length of stay, disposition.
Patient age was recorded in years. Sex was classified as male or female. Primary caregiver referred to the person(s) who mostly cared for the patient at home (e.g. family members or home palliative services).

Chief complaint referred to a patient's primary symptom upon EMS arrival. Diagnosis was the primary documented reason in the hospitals for the patient receiving palliative care. This was linked to the categories into which the palliative centres divided diagnoses: cancer, cardiovascular, respiratory, renal, hepatic, neurological, frailty/dementia, HIV/AIDS and other.

Length of stay was calculated from the time of hospital arrival to the time of final disposition, and was recorded in days. Disposition referred to patient outcome in terms of the following: discharged home, discharged to hospice, death or other.
Summary descriptive statistics (medians, ranges) were used to describe the numerical data: patient age, frequencies and length of stay. The remaining variables were analysed as categorical data. Data were analysed using SPSS Statistics for Windows version 28.0 (IBM Corp., USA).
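A minimal sketch of the summary statistics described above. The records are fabricated for illustration; the actual analysis was run in SPSS, so this only shows the same medians, ranges and category proportions computed in Python.

```python
# Illustrative descriptive statistics on a tiny fabricated record set.
import pandas as pd

records = pd.DataFrame({
    "age":         [60, 72, 55, 81, 47],
    "sex":         ["M", "F", "M", "M", "F"],
    "los_days":    [6, 2, 14, 1, 9],                      # length of stay
    "disposition": ["home", "home", "death", "home", "hospice"],
})

print("median age:", records["age"].median(),
      "range:", (records["age"].min(), records["age"].max()))
print("median length of stay (days):", records["los_days"].median())
print(records["disposition"].value_counts(normalize=True))
```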
Ethical approval
Ethical approval, including a waiver of informed consent, was gained from the University of Cape Town Faculty of Health Sciences Human Research Ethics Committee (ref. no. 589/2021). Institutional approvals were gained from both the district and the tertiary hospital.
Results
In total, 1 207 unique patients received palliative care at both hospitals from 1 January 2020 to 31 December 2020. Of these, 395 (33%) made use of EMS for hospital conveyance and were included in the study. During the course of the year, these patients were transported on 494 occasions, resulting in an average of 41 EMS transports of patients with palliative needs per month. Fig. 1 demonstrates case selection.
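The headline figures can be checked directly from the counts reported above; this snippet uses only numbers stated in the text.

```python
# Reproduce the headline proportions from the reported counts.
total_palliative_patients = 1207   # unique patients at both hospitals, 2020
ems_conveyed_patients = 395        # of whom conveyed by EMS
ems_transports = 494               # EMS transports over the year

print(f"conveyed by EMS: {ems_conveyed_patients / total_palliative_patients:.0%}")  # ~33%
print(f"mean transports per month: {ems_transports / 12:.0f}")                      # ~41
```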
Table 1 summarises the characteristics of the palliative situations managed by EMS. The characteristics were calculated based on number of EMS transports, unless otherwise specified, as the aim of the study was to examine EMS use for all palliative situations, which included repeat patient presentations. The median (range) patient age was 60 (20 - 93) years, and most transports involved male patients (54%, n=265). Family members were the primary caregivers in most instances (89%, n=440). Dyspnoea was the most common chief complaint (36%, n=178), and cancer was the most frequent diagnosis (32%, n=159). The median length of hospital stay was 6 days, with most (60%, n=295) patients ultimately discharged home.
Discussion
This study aimed to examine EMS use for palliative situations in the WC Province of SA by describing the frequency of intersection, patient characteristics and outcomes. This was to assist in filling the knowledge gap of EMS and palliative care intersection within SA. To our knowledge, this is the largest study to gather quantitative data on the subject in the country.
A previous qualitative study in SA gathered the perspectives of EMS providers on palliative care and found that these providers reported frequently encountering palliative situations. [14] Because of this intersection, the providers viewed EMS and palliative care integration positively, elaborating on the role of EMS in palliative situations. [14] The current study supports these findings as it indicates substantial and frequent intersection between EMS and palliative situations in the WC, with one-third of patients who received palliative care at the two hospitals in 2020 conveyed by EMS on 494 occasions throughout the year. High-income countries (HICs) have, likewise, found substantial intersection between EMS and palliative situations. A German study found palliative situations may represent up to 10% of EMS caseload. [25] In Australia, a study found palliative situations comprised 0.5% (n=4 348) of annual EMS caseload. [26] Within LMICs, and SA in particular, it has been argued that these percentages are likely higher owing to their increased burdens of disease. [14] A further contributing factor within SA may be a lack of personal transport options for patients, resulting in increased reliance on EMS to meet this need. Future research could quantify annual EMS palliative situation caseload in SA, including proportions of conveyance and non-conveyance.
Frequently documented chief complaints of patients with palliative needs for which EMS are called include dyspnoea, pain, convulsions and severe anxiety. [8,27,28] We found dyspnoea (36%, n=178) and pain (16%, n=80) to be the most common chief complaints, while convulsions accounted for only a small percentage of cases (2%, n=9). While no cases of anxiety as a chief complaint were found, it may be that where such cases occurred, EMS providers were able to provide relief and these patients were not conveyed. Our findings are in line with several HIC studies in which dyspnoea and pain were likewise the most frequent chief complaints of palliative situations where EMS were called. [23,24,26] Although this may indicate that EMS encounter similar palliative situations in both LMIC and HIC settings, variance in symptom aetiology and patient socioeconomic status is likely present, representing an area for further study.
Significantly, the management of dyspnoea and pain falls well within the scope of practice of EMS providers, who are trained in the management of these symptoms, including in SA. However, EMS providers are trained to manage dyspnoea and pain in emergency situations rather than palliative contexts. For example, while morphine, an essential palliative medication, [29] is included in the scope of practice for advanced EMS providers in SA, it is not used in the management of severe dyspnoea in palliative situations. EMS providers are not trained in this application, and this indication does not explicitly form part of their scope of practice. Wiese et al. [30] have recommended opioids be used for this purpose, demonstrating that dyspnoea in this population is significantly relieved with opioid administration by EMS providers. To achieve this benefit, EMS providers would require training not only in opioid administration for dyspnoea, but also in identifying which situations require a palliative approach to care, including family support and the use of non-pharmacological approaches.
A further benefit of EMS and palliative care integration described in the literature is the provision of home-based care without medical facility conveyance. [31] Carter et al. [24] demonstrated that provision of home-based palliative care by EMS improved patient and family quality of life, satisfaction and confidence. This in turn has the potential to reduce unnecessary hospital conveyance and overall healthcare system costs. [32][33][34] Such integration may, likewise, enhance respect for patient autonomy, as the majority of patients with palliative needs may wish to die in the comfort of their homes rather than in a medical facility. [35] Within SA, a study in Soweto found home to be the preferred place of death in 67% (n=126) of advanced cancer patients. [36] These benefits would be valuable within the WC context, as most hospital transports in our study ultimately resulted in discharge home (60%, n=295), many after only a brief 0 - 3-day stay in hospital (27%, n=129). With EMS and palliative care integration, many of these hospital transports may have been avoided.
Of concern in our study, 36% (n=142) of patients with palliative needs conveyed by EMS died in hospital, with an even higher proportion present in the tertiary hospital (39%). Clinicians in district hospitals may be more likely to discharge patients home owing to greater knowledge of local home-care services and limited space in their facilities, whereas clinicians in tertiary hospitals may be more intervention focused and slower to refer patients to palliative care services. Several factors may have contributed to these high percentages of hospital deaths. Not all patients and their family members desire death in their home, and some may have chosen hospital conveyance regardless of outcome. [36,37] Inadequate home-care resources in the area may contribute to these decisions, as a local medical facility may be the only place to receive care. Our data support this, with only 4 (1%) cases of palliative home care services identified as the primary source of care. Alternatively, insufficient patient, family or EMS knowledge of available home-care resources may likewise impact decision-making. From the EMS perspective, there are system constraints that compel providers to convey patients to a medical facility, regardless of their wishes, particularly once medications have been administered or other care rendered. [14] While the advanced EMS provider scope of practice within SA allows for the performance of an on-scene discharge to avoid unnecessary conveyance and keep patients at home, it is not currently used, as no policies or guidelines for its use exist. Whatever factors are involved, integration between EMS and palliative care would improve respect for patient autonomy, enhance home-care provision and avoid in-hospital deaths where patients wish to stay home. Such integration could make use of the existing SA EMS scope of practice, for example, opioid administration and on-scene discharge.
Healthcare in SA, as in other LMICs, suffers from budget and resource constraints in addition to its quadruple burden of disease. [9] Consequently, medical facilities within the country, particularly state hospitals, regularly operate with patient numbers well above their capacity while being under-resourced. Avoiding unnecessary admissions and in-hospital deaths, and decreasing costs through EMS and palliative care integration, may represent an effective intervention to assist in alleviating these problems. In the USA, such integration has been successfully implemented with the use of specialist mobile hospice units working alongside EMS. [38] Given the resource constraints of SA and other LMICs, the development of new systems may be too costly, and the integration of two existing systems (EMS and palliative), which frequently intersect, represents a logical and more efficient use of scarce resources. Simply improving communication between the two systems through telephonic consultations may be a cost-effective intervention. [7,31] Alternative models of care, such as the community paramedic in Canada [24] or extended care paramedic in Australia, [2] which integrate palliative care, may also be more feasible within SA, and should be researched.
Limitations
This study is limited by its retrospective design. Furthermore, as this study was performed with patient records at two state hospitals in a single province of SA, its external validity is restricted both within SA and internationally. However, within SA it is likely that provinces share a similar intersection between EMS and palliative situations, as the quadruple burden of disease is ubiquitous. In addition, the real intersection across all provinces is likely greater, as our study only focused on two state hospitals and primarily involved state EMS. The private healthcare sector and those patients not conveyed by EMS, or with unmet palliative needs, were not observed. Owing to the COVID-19 pandemic, data collected in the year 2020 may be atypical, though only 10% of cases in this study involved a COVID-19 diagnosis. While the pandemic may have increased the number of patients with palliative needs conveyed by EMS, it is more likely the associated national 'lockdowns' resulted in fewer overall cases, as many patients avoided medical facilities.
Conclusion
SA suffers from a quadruple burden of disease resulting in an increased need for palliative care. While this need has been prioritised and palliative care is recognised as a human right, there are insufficient resources to adequately meet the demand, necessitating palliative integration with other services. While progress has been made integrating palliative care in SA with nurses, doctors and other allied healthcare workers, no integration exists with EMS systems. From our findings, EMS in SA frequently encounter palliative situations for symptoms that may be managed within their scope of practice. Therefore, it appears that EMS have an important role to fulfil in the care of patients with palliative needs. Integrating EMS and palliative care may result in improved palliative care provision, respect for patient autonomy, decreased rates of unnecessary hospital admission and in-hospital death, and reduced overall healthcare system costs. These benefits are particularly germane to the SA context and, therefore, EMS and palliative care integration would be beneficial to the country.
Declaration. This study was performed as part of CHG's PhD in Emergency Medicine.
IS1294 Reorganizes Plasmids in a Multidrug-Resistant Escherichia coli Strain
ABSTRACT The aims of this study were to elucidate the role of IS1294 in plasmid reorganization and to analyze biological characteristics of cointegrates derived from different daughter plasmids. The genetic profiles of plasmids in Escherichia coli strain C21 and its transconjugants were characterized by conjugation, S1 nuclease pulsed-field gel electrophoresis (S1-PFGE), Southern hybridization, whole-genome sequencing (WGS) analysis, and PCR. The traits of cointegrates were characterized by conjugation and stability assays. blaCTX-M-55-bearing IncI2 pC21-1 and nonresistant IncI1 pC21-3, as conjugative helper plasmids, were fused with nonconjugative rmtB-bearing IncN-X1 pC21-2, generating cointegrates pC21-F1 and pC21-F2. Similarly, pC21-1 and pC21-3 were fused with nonconjugative IncF33:A−:B− pHB37-2 from another E. coli strain to generate cointegrates pC21-F3 and pC21-F4 under experimental conditions. Four cointegrates were further conjugated into the E. coli strain J53 recipient at high conjugation frequencies, ranging from 2.8 × 10⁻³ to 3.2 × 10⁻². The formation of pC21-F1 and pC21-F4 was the result of host- and IS1294-mediated reactions and occurred at high fusion frequencies of 9.9 × 10⁻⁴ and 2.1 × 10⁻⁴, respectively. Knockout of RecA resulted in a 100-fold decrease in the frequency of plasmid reorganization. The phenomenon of cointegrate pC21-F2 and its daughter plasmids coexisting in transconjugants was detected for the first time in plasmid stability experiments. IS26-orf-oqxAB was excised from cointegrate pC21-F2 through a circular intermediate at a very low frequency, which was experimentally observed. To the best of our knowledge, this is the first report of IS1294-mediated fusion between plasmids with different replicons. This study provides insight into the formation and evolution of cointegrate plasmids under different drug selection pressures, which can promote the dissemination of MDR plasmids.

IMPORTANCE The increasing resistance to β-lactams and aminoglycoside antibiotics, mainly due to extended-spectrum β-lactamases (ESBLs) and 16S rRNA methylase genes, is becoming a serious problem in Gram-negative bacteria. Plasmids, as the vehicles for resistance gene capture and horizontal gene transfer, serve a key role in terms of antibiotic resistance emergence and transmission. IS26, present in many antibiotic-resistant plasmids from Gram-negative bacteria, plays a critical role in the spread, clustering, and reorganization of resistance determinant-encoding plasmids and in plasmid reorganization through replicative transposition mechanisms and homologous recombination. However, the role of IS1294, present in many MDR plasmids, in the formation of cointegrates remains unclear. Here, we investigated experimentally the intermolecular recombination of IS1294, which occurred with high frequencies and led to the formation of conjugative MDR cointegrates and facilitated the cotransfer of blaCTX-M-55 and rmtB, and we further uncovered the significance of IS1294 in the formation of cointegrates and the common features of IS1294-driven cointegration of plasmids.
KEYWORDS 16S rRNA methylase, cointegrate, IS1294, recombination, extended-spectrum β-lactamases, ESBLs

The emergence and dissemination of antibiotic resistance is a major clinical problem that poses a serious threat to public health (1). Antibiotic resistance genes are associated with mobile genetic elements like plasmids, transposons, and integrons (2). Among them, plasmids play a key role as vehicles for resistance gene capture and subsequent dissemination (3). Plasmid interaction is important for the maintenance and conjugal transfer of plasmids, particularly the mobilization of nonconjugative plasmids (4). The fusion of nonconjugative plasmids and conjugative helper plasmids is often related to different recombination events, namely, homologous recombination and replicative transposition, facilitating the dispersal of resistance genes and the evolution of multidrug resistance (MDR) plasmids and extending the resistance profiles of cointegrate plasmids, which has raised wide concerns (5)(6)(7)(8)(9)(10)(11).
Insertion sequences IS26 and IS1294 are present in many antibiotic-resistant isolates and play critical roles in the diversity of the variable region of F33:A−:B− plasmids carrying bla CTX-M-55 or bla CTX-M-65 (12). Three well-characterized fusion plasmids mediated by IS26 have been reported in clinical strains, namely, pSL131_IncA/C_IncX3, pD72C, and pSE380T (5)(6)(7). IS1294, a member of the IS91 family, is an atypical insertion sequence that lacks terminal inverted repeats, does not generate target site duplication, and transposes using rolling-circle replication (13). The IS1294-mediated formation of cointegrate plasmids is rarely reported. In our previous study, the bla CTX-M-55- and rmtB-bearing sequence type 156 (ST156) Escherichia coli strain C21 from a chicken in China was characterized, and the ISEcp1 element located upstream from bla CTX-M-55 was found to be disrupted by IS1294 (14). Here, two plasmids, used as conjugative helper plasmids, were fused with the nonconjugative rmtB-carrying plasmid in strain C21 at high fusion frequencies, generating two conjugative cointegrates that could be further transferred into recipient E. coli strain J53 at high conjugation frequencies.
Consequently, the role of IS1294 in the formation of cointegrate plasmids was experimentally verified.
Sequence analysis of plasmids in C21. The bla CTX-M-55 -positive pC21-1 harbored an IncI2 replicon and typical IncI2-associated genetic modules responsible for plasmid replication, transfer, maintenance, and stability functions. Sequence analysis revealed that pC21-1 shared high degrees of genetic identity (99 to 100% identity at 97 to 99% coverage) with several known bla CTX-M -bearing IncI2 plasmids, including pHNY2, pHN1122-1, pHNAH46-1, and pHNLDH19, in E. coli strains isolated from different sources (Fig. S1A), and the ISEcp1 located upstream from bla CTX-M-55 in pC21-1 differed from the IncI2 plasmids mentioned above by the insertion of an IS1294 (Fig. 1A).
The multireplicon IncN-X1 plasmid pC21-2, with repE and pir genes, which are responsible for the replication initiation of IncN and IncX1, harbored the resistance genes rmtB, oqxAB, bla TEM-1b, floR, tet(A), strAB, sul1, sul2, aac(3)-IId, and aph(3′)-IIa and a class 1 integron cassette array, dfrA12-orfF-aadA2, as well as mobile elements, including one IS1294 and five intact IS26 copies with no direct repeats (DRs) (Table 1 and Fig. 1A). The fusion of segments in pC21-2 containing replication regions from the conjugative IncX1 plasmid pOLA52 in a swine E. coli strain and the classical IncN plasmid R46 in Salmonella enterica serovar Typhimurium (15, 16) might be mediated by IS26 through homologous recombination (Fig. 1A and Fig. S1A). A BLASTN search revealed that pC21-2 exhibited high homology to the bla NDM-1-positive IncN-X1 plasmid p1108-NDM, with 99.9% identity at 81% coverage; however, the main multidrug resistance regions of pC21-2 were almost identical with those of IncI1/ST136 pEC008 (accession number KY748190) (Fig. S1B) (17). The pC21-2 plasmid, without a transfer region, was not self-transmissible, which was determined by conjugation assays showing that no transconjugant was obtained after numerous attempts using the transformant TC21-2 carrying pC21-2 as the donor. pC21-3, without an antimicrobial resistance gene, belonged to IncI1/ST134 except for one nucleotide substitution (G→A) in the conjugative transfer gene trbA. BLAST analysis showed that pC21-3 exhibited 98.2 and 98.7% identity at 93 and 95% coverage with two conjugative helper plasmids, the nonresistant pSa27-HP (accession number MH884654) and the CTX-M-130-producing pSa44-CRO (accession number MH430883), recovered from Salmonella strains (8, 9). pC21-4, a phage-like IncY plasmid without any antimicrobial resistance gene, had a single pO111 plasmid replicon and exhibited high homology to p1108-IncY in E. coli (accession number MG825379), with 99% identity at 93% coverage.

FIG 1 The proposed mechanism of plasmid fusion. (A) Linear sequence comparison of two fusion plasmids, pC21-F1 and pC21-F2, with daughter plasmids pC21-1, pC21-2, and pC21-3. Colored arrows represent open reading frames, with blue, cyan, red, yellow, maroon, and gray arrows representing replicon genes, transfer-associated genes, resistance genes, mobile elements, stability-associated genes, and hypothetical proteins, respectively. The shaded areas indicate 100% identity. (B) The proposed model for the IS1294-mediated formation of fusion plasmids. Plasmid names are shown in red on a gray background. Arrowheads indicate orientation. The cointegrates were brought about by intermolecular homologous recombination. Cointegrates pC21-F1 and pC21-F2 could subsequently be resolved into two plasmids identical to the original donor plasmids except for the excision of IS26-orf-oqxAB. Yellow arrows represent IS elements, and gray arrows represent hypothetical proteins.
Identification of fusion plasmids. In previous work, we showed that two important resistance determinants, bla CTX-M-55 and rmtB, were present in separate plasmids in strain C21 and could be cotransferred into the recipient strain (14). In this work, three representative transconjugants, TC21-1, TC21-F1, and TC21-F2, were screened successfully by conjugation experiments using different antibiotics (Table 1). S1 nuclease pulsed-field gel electrophoresis (S1-PFGE) and Southern blot hybridization confirmed that bla CTX-M-55 and rmtB were located on the ~60-kb pC21-1 plasmid and the ~60-kb pC21-2 plasmid, respectively, in the parental strain C21. However, bla CTX-M-55 coexisted with rmtB on a single ~120-kb plasmid, pC21-F1, in TC21-F1, and rmtB was located on a single ~140-kb plasmid, pC21-F2, in TC21-F2. The pC21-F1 and pC21-F2 plasmids were larger than any plasmid in the original strain C21 (Fig. S2A). In view of the plasmid sizes, we proposed that pC21-F1 might be the recombinant product of pC21-1 (63,878 bp) and pC21-2 (62,933 bp) and pC21-F2 might be the recombinant product of pC21-2 (62,933 bp) and pC21-3 (87,627 bp). To further probe the sources of fusion plasmids, the complete sequences of plasmids in the transconjugant strains were obtained by WGS, combining the Illumina short-read and PacBio long-read sequencing data.
Based on the sequence analysis detailed above and the observed structure, we proposed the model of cointegrate formation shown in Fig. 1B. In the model, the IS1294 element in non-self-transmissible pC21-2 (IncN-X1) attacked another IS1294 in the conjugative pC21-1 or pC21-3, resulting in the occurrence of cointegrates. Linearized pC21-1 or pC21-3 was incorporated into pC21-2, creating the cointegrates pC21-F1 and pC21-F2, and then two same-orientation IS1294 elements surrounded the insertion fragment. The sequences spanning the cointegrate junctions were confirmed using primers P1-P2 and P3-P4 for pC21-F2 and P2-P5 and P3-P6 for pC21-F1, and the sequences of the PCR amplicons corresponded to the result of WGS (Fig. 2A and B). A dynamic process occurred between cointegrates and daughter plasmids in transconjugants, as identified by PCR and sequencing; several amplicons from primer combinations detecting the flanking sequences of IS1294 were obtained (Fig. 2B). IS1294, lacking terminal inverted repeats, does not generate DRs of the target site and transposes by rolling-circle replication (13). Although DRs of IS1294 surrounding the insertion fragments were not detected in this study, IS1294-mediated intermolecular recombination was likely to be related to the formation of cointegrates.
The fusion frequency of cointegrate pC21-F1 from pC21-1 and pC21-2 was 9.9 × 10⁻⁴ transconjugants per cefotaxime-resistant transconjugant (Table S3). The fusion frequency of pC21-F2 could not be determined because pC21-3 did not have an antibiotic resistance marker (Table 1). However, the number of transconjugants carrying pC21-F2 from the parental strain was significantly higher than that of transconjugants carrying pC21-F1 from the parental strain in conjugation assays. Based on these data, we speculated that the fusion frequency of pC21-F2 was higher than that of pC21-F1, which was further confirmed by a conjugation assay using E. coli C21 as the donor and E. coli C600 as the recipient. The results of the assay showed that 80 randomly selected transconjugants screened by rifampin and amikacin carried the rmtB gene but not bla CTX-M-55. The fusion frequency of pC21-F4 (IncI2-F33:A−:B−) was 2.1 × 10⁻⁴ transconjugants per cefotaxime-resistant transconjugant (Table S3). Comparative assays were performed in wild-type and recombination-deficient (ΔrecA) donor strains, with the results showing that host- and IS1294-mediated reactions were involved in the formation of cointegrate plasmids and that knockout of recA resulted in a 100-fold decrease (from 3.0 × 10⁻⁴ to 4.8 × 10⁻⁶) in the frequency of plasmid reorganization (Table S4).
Stability assays in vitro showed that <10% losses of fusion plasmids pC21-F1 and pC21-F2 in transconjugants occurred from day 1 to day 15, which suggested that fusion plasmids were stable in E. coli for at least 15 days of passage in an antibiotic-free environment (Fig. S3). A total of 40 amikacin- and cefotaxime-susceptible colonies from TC21-F1 were detected among 1,800 colonies screened at 0, 3, 6, 9, 12, and 15 days (100 colonies screened at six time points in each of three independent experiments). S1-PFGE showed that randomly selected colonies with the resistant phenotype originating from TC21-F1 harbored a single fusion plasmid, pC21-F1 (data not shown), suggesting that the fusion plasmid pC21-F1 was not easily lost and cleaved. However, 123 amikacin-susceptible colonies from TC21-F2 were detected among 1,800 colonies screened, and 2 of 14 colonies carried the daughter plasmids at 12 and 15 days (Fig. S4). As shown in the electrophoretic bands of lane 1 presented in Fig. S4B, the fusion plasmid pC21-F2 and its daughter plasmids coexisted in transconjugant TC21-F2 at 12 days, suggesting that the cointegrate and daughter plasmids may be in a dynamic process.
DISCUSSION
In an exploration of the evolutionary process of F33:A−:B− plasmids, Wang et al. found that several IS26 and IS1294 elements were interspersed in MDR regions of F33:A−:B− plasmids carrying bla CTX-M-55 or bla CTX-M-65, causing diversity in the variable regions of the plasmids (12). The IS26-mediated formation of fusion plasmids in transconjugants has been well described in the hybrid resistance plasmid pD72C and the virulence and resistance plasmid pSE380T (5,7). However, the IS26-mediated fusion plasmid pSL131_IncA/C_IncX3 was identified in the parental strain, and its daughter plasmid pSL131T_IncX3 carrying bla NDM-1 was detected in the corresponding transconjugant (6). The ISPa40-mediated fusion plasmid pSa44-CIP-CRO was also illustrated in the parental strain, and two corresponding transconjugants selected in eosin methylene blue agar supplemented with different agents harbored the fusion plasmid pSa44-CIP-CRO and its daughter plasmid pSa44-CRO (8). In the present study, three transconjugants were obtained from the parental strain C21 under selective pressure by different agents; one of them carried a daughter plasmid, and the other two carried different fusion plasmids mediated by IS1294. Two cointegrates, pC21-F1 and pC21-F2, were not observed in the parental strain by S1-PFGE and complete sequencing; however, a dynamic process occurred between cointegrate and daughter plasmids in the transconjugants. The different states in the cointegrate plasmids between the original strain and transconjugants may be due to the abundance of cointegrate plasmids. Although the cointegrate plasmids may be in low abundance in the original strain harboring daughter plasmids, they were in high abundance after antibiotic drug selection. Taken together, the findings indicated that the cointegrate plasmid was easily selected and disseminated under pressure by different agents. Furthermore, the cointegrate plasmids mediated by IS elements were ubiquitous, and the replicon typing of the daughter plasmids from fusion plasmids was diverse. Studies have demonstrated that the cointegrates were formed between two DNA molecules in a process mediated by IS26 through a replicative transposition mechanism (7,18,19). However, in the present study, IS1294-mediated intermolecular recombination was involved in the formation of cointegrates.
The differences in the abundance of fusion plasmids pC21-F1 and pC21-F2 between the parental strain and transconjugants were consistent with their conjugation frequencies. A 4 × 10⁵-fold increase in the conjugation frequency of pC21-F1 from transconjugant TC21-F1 to recipient E. coli J53 was noted when compared with the conjugation frequency of pC21-F1 from the parental strain C21 to recipient E. coli C600 (from 7.1 × 10⁻⁹ to 2.8 × 10⁻³), and a 1.2 × 10⁵-fold increase in the conjugation frequency for pC21-F2 was noted (from 2.6 × 10⁻⁷ to 3.2 × 10⁻²). Similar conjugation frequency results were obtained for cointegrate plasmids pC21-F3 and pC21-F4 (Table S2). These findings indicated that pC21-1 and pC21-3 may act as conjugative helper plasmids, providing nonconjugative plasmids pC21-2 (IncN-X1) and pHB37-2 (IncF33:A−:B−) with self-transmission capacity through the formation of cointegrates. In addition, their activity may lead to the rapid transmission of resistance genes in nonconjugative plasmids under selection by antibiotics, as well as promoting the evolution of MDR plasmids.
The average fusion frequencies were 9.9 × 10⁻⁴ and 2.1 × 10⁻⁴, respectively, for cointegrates pC21-F1 and pC21-F4, which resulted from host-mediated homologous recombination and IS1294-mediated intermolecular reactions. Comparison analysis performed in the wild-type and recombination-deficient (ΔrecA) donor strains showed that knockout of recA resulted in a 100-fold decrease (from 3.0 × 10⁻⁴ to 4.8 × 10⁻⁶) in the fusion frequency of cointegrate pC21-F1, which suggested that IS1294-mediated reactions, with an average transposition frequency of 4.8 × 10⁻⁶ for pC21-F1, and intrinsic homologous recombination played major roles in plasmid reorganization. The frequency of cointegrate formation mediated by IS26 between pRMH762 and the construct R388::IS26 was 1.8 × 10⁻⁴ per R388::IS26 transconjugant in transposition experiments (18). The high fusion efficiency mediated by IS1294 or IS26 highlighted the important role of IS1294 and IS26 in the generation of cointegrate plasmids and the dissemination of resistance genes.
In summary, this study characterized the complete genetic features of four plasmids and elucidated the mechanism underlying the reorganization of fusion plasmids. To the best of our knowledge, this is the first description of the role of IS1294 in the formation of fusion plasmids derived from three plasmids in an original strain. This study provided insight into the formation and evolution of cointegrates under the selective pressure of one or more antimicrobials, which poses a serious threat to public health. Therefore, more prudent use of antimicrobial agents in clinical practice, particularly the use of antibiotic combinations, is important to avoid the occurrence, dissemination, and further evolution of MDR fusion plasmids.
MATERIALS AND METHODS
Bacterial strain. Multidrug-resistant ST156 E. coli strain C21, carrying two important resistance determinants, bla CTX-M-55 and rmtB, in separate plasmids was characterized from a chicken in China in September 2009 as described in our previous study (14).
Conjugation, transformation, S1-PFGE, and Southern hybridization. E. coli C21 as the donor and E. coli C600 (resistant to rifampin) as the recipient were used in conjugation experiments. Three representative transconjugants were screened on MacConkey agar supplemented with rifampin (450 mg/liter), cefotaxime (2 mg/liter), and/or amikacin (20 mg/liter). The conjugation frequencies were calculated as the number of transconjugants per donor. The plasmids in the donor strain C21 were transformed into E. coli DH5α by electroporation; the rmtB-bearing transformant TC21-2 was selected on LB agar supplemented with amikacin (20 mg/liter), and transformant TC21-3, harboring a single pC21-3 without any antibiotic resistance genes, was selected on antibiotic-free LB agar. Plasmid profiles in the donor strain, transconjugants, and transformants were subjected to S1-PFGE and Southern blot hybridization with bla CTX-M-55, rmtB, and trbA for the IncI1 plasmid as probes.
WGS and bioinformatics analysis. To explore the genetic basis of plasmid size alteration in the donor and transconjugant strains, total genomic DNA was extracted from C21 and the plasmids in transconjugants TC21-F1 and TC21-F2 using the Omega bacterial DNA kit (Omega Bio-Tek, USA) and the Qiagen plasmid midi kit (Qiagen, Hilden, Germany) and subjected to whole-genome sequencing (WGS) using the Illumina NovaSeq 6000 and PacBio RSII single-molecule real-time (SMRT) platforms. The long-read data were assembled de novo using the hierarchical genome assembly process (HGAP) with the SMRT Analysis version 2.3.0 software package for the PacBio RSII platform, in combination with complementary short reads (21). The plasmid sequences were initially annotated using the Rapid Annotation using Subsystem Technology (RAST version 2.0) server (http://rast.nmpdr.org) and curated manually using the BLASTn and BLASTp algorithms (http://blast.ncbi.nlm.nih.gov/blast). The plasmid replicon genotype and resistance genes were identified by using the CGE server (https://cge.cbs.dtu.dk/services/). The comparative analysis and plasmid maps were generated using Easyfig and BRIG (22,23).
Identification of circular intermediates carrying oqxAB. Reverse PCR was performed to detect the potential circular form of the IS26-flanked transposon carrying oqxAB in the parental strain C21, transconjugants, and transformants. PCR with TaKaRa Taq DNA polymerase was carried out with an initial denaturation at 94°C for 5 min, followed by 30 cycles of amplification (denaturation at 94°C for 30 s, annealing at 57°C for 30 s, and extension at 72°C for 2 min) and a final extension at 72°C for 10 min. To further assess the excision of IS26-orf-oqxAB-IS26, a conjugation assay was performed under rifampin and amikacin selection, and oqxAB was identified in 80 randomly selected transconjugants by PCR using the oqxAB-F/R primers listed in Table S1.
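For clarity, the thermocycling profile just described can be written out as a simple data structure; this is only a representation of the published protocol, not vendor software.

```python
# The reverse-PCR thermocycling profile described above
# (temperatures in Celsius, times in seconds).
pcr_program = {
    "initial_denaturation": (94, 300),
    "cycles": 30,
    "per_cycle": [
        ("denaturation", 94, 30),
        ("annealing",    57, 30),
        ("extension",    72, 120),
    ],
    "final_extension": (72, 600),
}

total_s = (
    pcr_program["initial_denaturation"][1]
    + pcr_program["cycles"] * sum(s for (_, _, s) in pcr_program["per_cycle"])
    + pcr_program["final_extension"][1]
)
print(f"approximate run time: {total_s / 60:.0f} min (excluding ramping)")
```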
Recombination and conjugation frequencies of fusion plasmids. To investigate the ability to form the fusion plasmid pC21-F1 from conjugative bla CTX-M-55 -positive pC21-1 (IncI2) and nonconjugative rmtB-positive pC21-2 (IncN-X1), recombination frequencies were identified by conjugation assay using strain C21 as the donor and E. coli C600 as the recipient. The recombination frequency was calculated as the number of transconjugants carrying fusion plasmid pC21-F1 per cefotaxime-resistant transconjugant. The recombination frequency for the fusion plasmid pC21-F2 from conjugative pC21-3 (IncI1) and pC21-2 could not be determined because of the lack of a selective marker for pC21-3.
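A small sketch of the frequency arithmetic used throughout this section. The colony counts below are illustrative placeholders, chosen only to reproduce frequencies of the reported order of magnitude; they are not the study's raw data.

```python
def frequency(numerator_cfu: float, denominator_cfu: float) -> float:
    """Events per reference colony (e.g. transconjugants per donor)."""
    return numerator_cfu / denominator_cfu

# Conjugation frequency: transconjugants per donor (hypothetical counts).
print(f"{frequency(3.2e4, 1.0e7):.1e}")   # -> 3.2e-03

# Recombination (fusion) frequency: cointegrate-carrying transconjugants
# per cefotaxime-resistant transconjugant (hypothetical counts).
print(f"{frequency(99, 1.0e5):.1e}")      # -> 9.9e-04
```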
To explore the role of IS1294 in plasmid reorganization, a comparative analysis between wild-type and recombination-deficient (ΔrecA) donor strains was performed. Both pC21-2 and pC21-1 were transformed into E. coli C600 and recombination-deficient (ΔrecA) E. coli C600, respectively, generating two corresponding transformants, and then the transformants as the donor and E. coli J53 (ΔrecA) as the recipient were used in conjugation experiments. Transconjugants carrying cointegrate pC21-F1 were selected on LB agar plates supplemented with cefotaxime (2 mg/liter) and amikacin (20 mg/liter). All the transformants and transconjugants were confirmed by the presence of bla CTX-M-55, rmtB, and fusion points by PCR and Sanger sequencing. Recombination frequencies were calculated as the number of transconjugants carrying pC21-F1 per cefotaxime-resistant transconjugant.
To assess the self-transferability of the fusion plasmids pC21-F1 and pC21-F2 in transconjugants, conjugation assays were further performed using E. coli C600 transconjugants TC21-F1 and TC21-F2 as the donor and azide-resistant E. coli J53 as the recipient, and conjugation frequencies were calculated as the number of transconjugants per donor. All the transformants and transconjugants were verified by PCR and antimicrobial susceptibility testing. Plasmid profiles in the transconjugant and transformant strains were subjected to S1-PFGE and Southern blot hybridization, and the fusion points were detected by PCR and sequencing. The sequences and approximate positions of the primers are shown in Table S1 and Fig. 2A and C.
Plasmid stability. The stability of fusion plasmids pC21-F1 and pC21-F2 was assessed as described previously (24). In brief, transconjugants TC21-F1 and TC21-F2 were propagated by serial transfer for 15 days of passage. The culture broths were serially diluted in 0.85% saline and plated onto antibiotic-free LB agar at 0, 3, 6, 9, 12, and 15 days. A total of 100 colonies were randomly chosen and plated onto LB agar supplemented with amikacin and cefotaxime for TC21-F1 and with amikacin for TC21-F2, and then PCR was performed to confirm the presence of bla CTX-M-55 and rmtB in TC21-F1 colonies and rmtB and IncI1 replicon types for TC21-F2 colonies. The numbers of colonies were calculated at six time points in each of three independent experiments. The plasmid profiles of 14 randomly selected colonies from TC21-F1 or TC21-F2 were further identified using S1-PFGE. In all instances, the patch counts were consistent with the colony counts.
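The stability readout reduces to simple retention percentages per sampled day; a hedged sketch with invented counts follows (the real counts are those summarized in the Results and Fig. S3/S4).

```python
# Retention of a fusion plasmid over serial passage: at each sampled day,
# 100 random colonies are patched onto selective agar. Counts are invented.
days = [0, 3, 6, 9, 12, 15]
resistant_of_100 = [100, 99, 98, 97, 95, 94]   # colonies keeping the plasmid

for d, r in zip(days, resistant_of_100):
    print(f"day {d:2d}: plasmid retained in {r}% of colonies (loss {100 - r}%)")
```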
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.7 MB.
ACKNOWLEDGMENTS
This work was supported by grants from the Foundation of Henan Educational Committee (grant number 21A230014) and the National Natural Science Foundation of China (grant number 31702295).
We declare no conflicts of interest.
Validity and failure of some entropy inequalities for CAR systems
Basic properties of von Neumann entropy such as the triangle inequality and what we call MONO-SSA are studied for CAR systems. We show that both inequalities hold for any even state. We construct a certain class of noneven states giving counter examples of those inequalities. It is not always possible to extend a set of prepared states on disjoint regions to some joint state on the whole region for CAR systems. However, for every even state, we have its 'symmetric purification' by which the validity of those inequalities is shown. Some (realized) noneven states have peculiar state correlations among subsystems and induce the failure of those inequalities.
Introduction
Let H be a Hilbert space and D be a density matrix on H, i.e. a positive trace class operator on H whose trace is unity. The von Neumann entropy is given by

S(D) := −Tr D log D,   (1)

where Tr denotes the trace which takes the value 1 on each minimal projection. Let ̺ be a normal state of B(H), the set of all bounded linear operators on H. Then ̺ has its density matrix D_̺, and its von Neumann entropy S(̺) is given by (1) with D = D_̺. It has been known that von Neumann entropy is useful for description and characterization of state correlation for composite systems. Among others, the following inequality called strong subadditivity (SSA) is remarkable:

S(ϕ_{I∪J}) + S(ϕ_{I∩J}) ≤ S(ϕ_I) + S(ϕ_J),   (2)

where I, J, I ∩ J and I ∪ J denote the indexes of subsystems and ϕ_I denotes the restriction of a state ϕ to the subsystem indexed by I, and so on. Such entropy inequalities have been studied for quantum systems, see e.g. Refs. 4,5,9,11,13,15, and also their references. However, the composite systems considered there were mostly tensor product of matrix algebras to which we refer as the tensor product systems. We investigate some well known entropy inequalities, the triangle inequality and MONO-SSA, for CAR systems. This study is relevant to our previous works on state correlations such as quantum entanglement 7 and separability 8 for CAR systems. In a certain sense, the conditions of validity and failure of such entropy inequalities which we are going to establish will explain the similarities and differences in the possible forms of state correlations between CAR and tensor product systems.
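The quantities just defined are easy to probe numerically in the tensor-product (matrix-algebra) setting, which is all this sketch covers; the CAR grading plays no role here, and the helper functions are our own.

```python
# Numerical check of (1) and (2) for a random three-qubit tensor-product state.
import numpy as np

rng = np.random.default_rng(1)

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log rho), via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                       # convention: 0 log 0 = 0
    return float(-(w * np.log(w)).sum())

def random_state(dim):
    """A random full-rank density matrix."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def ptrace(rho, dims, keep):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    for k, ax in enumerate(sorted(i for i in range(n) if i not in keep)):
        t = np.trace(t, axis1=ax - k, axis2=ax - k + n - k)
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

dims = [2, 2, 2]
rho = random_state(8)

S_ABC = entropy(rho)                        # S(phi_{I u J})
S_AB  = entropy(ptrace(rho, dims, [0, 1]))  # S(phi_I), I = {A, B}
S_BC  = entropy(ptrace(rho, dims, [1, 2]))  # S(phi_J), J = {B, C}
S_B   = entropy(ptrace(rho, dims, [1]))     # S(phi_{I n J})

# SSA (2): always satisfied for tensor product systems.
print(S_ABC + S_B <= S_AB + S_BC + 1e-9)
```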
Let L be an arbitrary discrete set. The canonical anticommutation relations (CAR) are {a * i , a j } = δ i,j 1, {a * i , a * j } = {a i , a j } = 0, where i, j ∈ L and {A, B} = AB + BA (anticommutator). For each subset I of L, A(I) denotes the subsystem on I given as the C * -algebra generated by all a * i and a i with i ∈ I. For I ⊂ J, A(I) is naturally imbedded in A(J) as its subalgebra.
We have already shown that SSA (2) holds for the CAR systems. For convenience, we sketch its proof given in Ref. 6. The key ingredient is the commuting square property of conditional expectations,

E_I E_J = E_{I∩J},

where E denotes the conditional expectation with respect to the tracial state onto the subsystem with a specified index. From this property SSA follows for every state (without any assumption on the state, like its evenness) by a well-known proof method using the monotonicity of relative entropy under the action of conditional expectations. We move to entropy inequalities for which CAR makes a difference. The following is usually referred to as the triangle inequality:

|S(ϕ_I) − S(ϕ_J)| ≤ S(ϕ_{I∪J}),   (3)

where I and J are disjoint. While this is satisfied for the tensor product systems, 1 it is not valid in general for the CAR systems; there is a counter example. 7 We next introduce our main target,

S(ϕ_{I∪K}) + S(ϕ_{J∪K}) ≥ S(ϕ_I) + S(ϕ_J),   (4)

where I, J, and K are disjoint. We may call (4) "MONO-SSA", because it is equivalent to SSA (2) for the tensor product systems at least, and it obviously implies the monotonicity of the following function with respect to the inclusion of the index K:

S(ϕ_{I∪K}) + S(ϕ_{J∪K}).   (5)

Our question is whether MONO-SSA holds for the CAR systems, if not, under what condition it is satisfied. The MONO-SSA for the tensor product systems is shown by what is called purification implying the equivalence of MONO-SSA and SSA for those systems (see 3.3 of Ref. 11). We note that the purification is a sort of state extension, and is not automatic for the CAR systems. We shall review the basic concept of state extension.
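Under the weak-monotonicity reading of (4) given above, the equivalence with SSA for tensor product systems cited from Ref. 11 follows from the standard purification argument, sketched here in our own notation (the auxiliary index R is ours):

```latex
% Standard tensor-product argument: purification turns (4) into (2).
\begin{align*}
&\text{Let } \varphi \text{ be a state on } I \cup J \cup K
  \text{ and } \omega \text{ a purification of } \varphi
  \text{ on } I \cup J \cup K \cup R. \\
&\text{Purity of } \omega \text{ gives }
  S(\omega_{I \cup K}) = S(\omega_{J \cup R}), \qquad
  S(\omega_{I}) = S(\omega_{J \cup K \cup R}). \\
&\text{Hence (4) is equivalent to }
  S(\omega_{J \cup R}) + S(\omega_{J \cup K})
  \ge S(\omega_{J \cup K \cup R}) + S(\omega_{J}), \\
&\text{which is SSA (2) applied to the regions }
  J \cup R \text{ and } J \cup K .
\end{align*}
```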
In the description of a quantum composite system, the total system is given by a C * -algebra A, and its subsystems are described by C * -subalgebras A i of A indexed by i = 1, 2, · · · . Let ϕ be a state of A. We denote its restrictions to A i by ϕ i . Surely ϕ i is a state of A i . Conversely, suppose that a set of states ϕ i of A i , i = 1, 2, · · · , are given. Then a state ϕ of A is called an extension of {ϕ i } if its restriction to each A i coincides with ϕ i .
For tensor product systems, there always exists a state extension for any given prepared states {ϕ_i} on disjoint regions, at least their product state extension ϕ = ϕ_1 ⊗ · · · ⊗ ϕ_i ⊗ · · ·, and generically other extensions. On the contrary, it is not always the case for CAR systems. When two (or more than two) prepared states on disjoint regions are not even, there may be no state extension. We have shown that if all of them are noneven pure states, then there exists no state extension. 7 We explain the above-mentioned purification in terms of state extension. We are given a state ̺_1 of A(I). We then prepare some state ̺_2 on some A(J) with J ∩ I = ∅ such that it has the same nonzero eigenvalues and their multiplicities as ̺_1 for their density matrices. We want to construct their pure state extension to A(I ∪ J). We use the term "symmetric purification" to refer to this procedure where the "symmetric" may indicate the above specified property of ̺_2. For the tensor product systems, symmetric purification exists for every ̺_1. On the contrary for the CAR systems, though we can easily make a pure state extension of ̺_1, its pair ̺_2 cannot be always chosen among those states which have the same nonzero eigenvalues and their multiplicities as ̺_1. In the above and what follows, we shall identify states with their density matrices when there is no fear of confusion.
We will show that MONO-SSA is not satisfied in general in § 3. However it is shown to hold for every even state in § 2. TABLE 1 shows the truth (✓) and the falsity (×) of the entropy inequalities. We fix our notation. The even-odd grading Θ is determined by

Θ(a_i) = −a_i, Θ(a_i*) = −a_i*, i ∈ L.

The even and odd parts of A(I) are given by

A(I)_± := {A ∈ A(I) : Θ(A) = ±A}.

For an element A ∈ A(I) we have the decomposition

A = A_+ + A_−, A_± := (A ± Θ(A))/2 ∈ A(I)_±.

For a finite subset I, define

v_I := ∏_{i∈I} (a_i* a_i − a_i a_i*).

By a simple computation, v_I is a self-adjoint unitary operator in A(I)_+ implementing Θ, namely

v_I A v_I = Θ(A), A ∈ A(I).

For a finite subset I, every even pure state of A(I) is given by an eigenvector of v_I as its vector state.
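The operators just introduced can be realized concretely: the following sketch uses a Jordan-Wigner matrix representation (one standard choice, not specific to this paper) to verify the CAR and the stated properties of v_I for a three-mode system.

```python
# Jordan-Wigner check of the CAR and of v_I for |L| = 3, I = {0, 1}.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
am = np.array([[0.0, 1.0], [0.0, 0.0]])      # 2x2 annihilator: am|1> = |0>

N_MODES = 3
dim = 2 ** N_MODES

def a(i):
    """Annihilation operator for mode i (with Jordan-Wigner string of Z's)."""
    ops = [Z] * i + [am] + [I2] * (N_MODES - i - 1)
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

# CAR: {a_i, a_j*} = delta_ij 1 and {a_i, a_j} = 0.
for i in range(N_MODES):
    for j in range(N_MODES):
        ad = a(j).conj().T
        assert np.allclose(a(i) @ ad + ad @ a(i), (i == j) * np.eye(dim))
        assert np.allclose(a(i) @ a(j) + a(j) @ a(i), 0.0)

# v_I as the product of (a_i* a_i - a_i a_i*) over i in I.
vI = np.eye(dim)
for i in (0, 1):
    vI = vI @ (a(i).conj().T @ a(i) - a(i) @ a(i).conj().T)

assert np.allclose(vI, vI.conj().T)            # self-adjoint
assert np.allclose(vI @ vI, np.eye(dim))       # unitary involution
for i in (0, 1):
    assert np.allclose(vI @ a(i) @ vI, -a(i))  # implements Theta on A(I)
assert np.allclose(vI @ a(2) @ vI, a(2))       # acts trivially outside I
print("CAR relations and v_I properties verified")
```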
The following is a simple consequence of the CAR given e.g. in § 4.5 of Ref.

Lemma 1. Let I and J be mutually disjoint finite subsets. Define

A(I′ | I ∪ J) := A(J)_+ + v_I A(J)_−.

Then A(I′ | I ∪ J) is a subalgebra of A(I ∪ J) isomorphic to A(J), it commutes with A(I), and A(I) and A(I′ | I ∪ J) together generate A(I ∪ J). Hence, writing A(I) = B(H_1) and A(I′ | I ∪ J) = B(H_2), we have A(I ∪ J) = B(H_1) ⊗ B(H_2) = B(H_{1,2}). Moreover,

A(I′ | I ∪ J)_+ = B(H_2)_+ = A(J)_+,   (8)

and this coincides with the set of all invariant elements under Ad(v_J) in B(H_2).
In this note we restrict our discussion to finite-dimensional systems so as to exclude from the outset the cases where our statements themselves on von Neumann entropy do not make sense; for infinite-dimensional systems a density matrix does not exist in general for a given state. (However in the proof of Proposition 8 we shall mention possible infinite-dimensional extensions of some results.)
Symmetric Purification for Even States
Symmetric purification is a useful mathematical technique having a lot of applications. For example, we can derive MONO-SSA from SSA for the tensor product systems by using it.
We now discuss symmetric purification for the CAR systems. We shall show its existence for even states.
Lemma 2. Let I and J be mutually disjoint finite subsets. Let ̺ be an even pure state of A(I∪J), and let ̺ 1 and ̺ 2 be its restrictions to A(I) and A(J). Then the density matrix of ̺ 1 has the same nonzero eigenvalues and their multiplicities as those of ̺ 2 . In particular, S(̺ 1 ) = S(̺ 2 ).
Proof. By the identification preceding (8), we write A(I ∪ J) = B(H1,2), H1,2 = H1 ⊗ H2, with A(I) = B(H1) and A(I′ | I ∪ J) = B(H2).
Since ̺ is a pure state of A(I ∪ J), its density matrix (with respect to the non-normalized trace Tr of B(H1,2)) is a one-dimensional projection operator of B(H1,2), and hence there exists a unit vector ξ ∈ H1,2 such that D̺η = (ξ, η)ξ for any η ∈ H1,2. By using the Schmidt decomposition, 12 we have the decomposed form

ξ = Σi λi ξ1i ⊗ ξ2i, λi > 0,   (10)

where {ξ1i} and {ξ2i} are orthonormal sets of vectors of H1 and H2, respectively. For ν = 1, 2, let P(ξνi) denote the projection operator onto the one-dimensional subspace of Hν containing ξνi. We denote the restriction of ̺ to B(H2) by ̺̂2. By (10), the density matrices of ̺1 and ̺̂2 have the following symmetric forms:

D̺1 = Σi λi² P(ξ1i), D̺̂2 = Σi λi² P(ξ2i).   (11)

Since ̺ is an even state, its restriction ̺2 to A(J) is even and hence its density matrix D̺2 belongs to A(J)+.
On the other hand, the even state ̺ is invariant under the action of Ad(vI∪J). Applying the conditional expectation onto B(H2) with respect to the tracial state of B(H1,2) to this invariance, we obtain vJ D̺̂2 vJ = D̺̂2. By (8), B(H2)+ is equal to A(J)+, and also to the set of all invariant elements under Ad(vJ) in B(H2). Therefore both D̺2 and D̺̂2 belong to B(H2)+. Accordingly, D̺2 is equal to D̺̂2 as the density of the state ̺ restricted to B(H2)+, and hence

D̺2 = D̺̂2.   (12)

From (11) and (12), it follows that ̺1 and ̺2 have the same nonzero eigenvalues and multiplicities, equal to {λi²}. Thus S(̺1) = S(̺2).

For a subset I of L, |I| denotes the number of sites in I.
Lemma 3. Let I and J be mutually disjoint finite subsets with |J| ≥ |I|. Then every state ̺1 of A(I) has a pure state extension ̺ to A(I ∪ J). Moreover, if ̺1 is even, then the above ̺ can be taken to be even.
Proof. We use the same notation as in the proof of the preceding lemma and write A(I ∪ J) = B(H1,2), A(I) = B(H1), and A(I′ | I ∪ J) = B(H2). Let ̺1 = Σi λi² P(ξ1i), where λi > 0, {ξ1i} is an orthonormal set of H1, and P(ξ1i) is the projection operator onto the one-dimensional subspace of H1 containing ξ1i. Since |J| ≥ |I| and hence dim H2 ≥ dim H1, we can take an orthonormal set of vectors {ξ2i} of H2 having the same cardinality as {ξ1i}. Define a unit vector ξ ∈ H1,2 by the same formula as (10) and let ̺ be its vector state, namely the state whose density matrix is the projection operator onto the one-dimensional subspace of H1,2 containing ξ. This ̺ is a pure state extension of ̺1 to A(I ∪ J) by its definition.
Assume now that ̺ 1 is even, and hence its density matrix is in A(I) + . For each eigenvalue, the associated spectral projection is also even and commutes with v I , and its range is invariant under v I . Therefore we can choose an orthonormal basis of the range of the projection which consists of eigenvectors of v I . We take {ξ 1i } to be a set of eigenvectors of v I .
Since v J belongs to B(H 2 ) + (= A(J) + ), there exists an orthonormal basis of H 2 consisting of eigenvectors of v J . Due to the assumption |J| ≥ |I|, we can take a set of different eigenvectors {ξ 2i } of v J such that for each i its eigenvalue, +1 or −1, is equal to that of ξ 1i for v I . Define a unit vector ξ by (10) using these {ξ 1i } and {ξ 2i }. Since this ξ is an eigenvector of v I∪J by its definition, its vector state ̺ is even.
Combining the above two lemmas we obtain the following.
Proposition 4. Let I be a finite subset and ̺ 1 be an even state of A(I). Let J be a finite subset such that J∩I = ∅ and |J| ≥ |I|. Then there exists an even pure state ̺ on A(I ∪ J) such that its restriction to A(I) is equal to ̺ 1 and the density matrix of its restricted state ̺ 2 ≡ ̺| A(J) has the same nonzero eigenvalues and their multiplicities as those of ̺ 1 .
We may call the above state extension from ̺ 1 to ̺ the symmetric purification. Thanks to this, we obtain the following two theorems.
Theorem 5. Let I, J and K be mutually disjoint finite subsets. For every even state ϕ, MONO-SSA is satisfied.
Proof. The equivalence of MONO-SSA and SSA for even states follows from Proposition 4 in the same way as in the tensor product case (3.3 of Ref. 11). Similarly, by using Proposition 4 we immediately obtain the triangle inequality for even states in much the same way as (3.1) of Ref. 1; we omit its proof.
Theorem 6. Let I and J be mutually disjoint finite subsets. For every even state ϕ, the triangle inequality holds.
Violation of MONO-SSA
In this section we give a certain class of noneven states.

[FIG. 1: the model state is pure on I ∪ K; its restriction to K is pure and noneven, while its restriction to I is the tracial state.]

We shall give a sketch of our model, indicated by FIG. 1. We can take a pure state ̺I∪K on I ∪ K whose restriction ̺K is a pure state, while ̺I is non-pure, say the tracial state. Such a ̺I∪K does not satisfy the triangle inequality, because the entropies on I and on K are different, whereas the entropy on I ∪ K is zero. It can be said that the pure state ̺I∪K has asymmetric restrictions, in our terminology. This asymmetry is due to the large amount of oddness of ̺K, whose precise meaning will be given soon. (Note, however, that for the infinite-dimensional case the GNS representations π̺K and π̺KΘ should be unitarily equivalent; see Proposition 8 (i).) We take an arbitrary even state ̺J on J. The desired state ̺I∪K∪J on I ∪ K ∪ J is given by the product state extension of ̺I∪K and ̺J, which will be denoted by ̺I∪K • ̺J.
We recall the definition of the transition probability. 14 For two states ϕ and ψ of A(I) (where |I| is finite or infinite), take any representation π of A(I) on a Hilbert space H containing vectors Φ and Ψ such that

ϕ(A) = (Φ, π(A)Φ), ψ(A) = (Ψ, π(A)Ψ) for all A ∈ A(I).

The transition probability between ϕ and ψ is given by

P(ϕ, ψ) ≡ sup |(Φ, Ψ)|²,

where the supremum is taken over all H, π, Φ, and Ψ as described above. For a state ϕ of A(I), we define

pΘ(ϕ) ≡ P(ϕ, ϕΘ),

where ϕΘ denotes the state ϕΘ(A) = ϕ(Θ(A)), A ∈ A(I). Intuitively, pΘ(ϕ) quantifies the amount of oddness of the state ϕ: if pΘ(ϕ) = 0 or nearly so, the difference between ϕ and ϕΘ is large, and we may say that the oddness of ϕ is large; if ϕ is even, pΘ(ϕ) obviously takes the maximum value 1.
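For a full matrix algebra B(H) the supremum above is known (Uhlmann) to equal the squared fidelity of the density matrices, which gives a way to compute pΘ numerically. The following sketch (our illustration; the single-mode model and variable names are assumptions) shows pΘ = 1 for an even state and pΘ ≈ 0 for the vector state of an eigenvector of an odd self-adjoint element, anticipating Remark 2 below.

```python
import numpy as np
from scipy.linalg import sqrtm

def transition_probability(D_phi, D_psi):
    """P(phi, psi) as squared fidelity of the density matrices (full matrix algebra)."""
    r = sqrtm(D_phi)
    return float(np.trace(sqrtm(r @ D_psi @ r)).real ** 2)

def p_theta(D_phi, v):
    """Theta modeled by conjugation with the parity unitary v."""
    return transition_probability(D_phi, v @ D_phi @ v.conj().T)

v = np.diag([1.0, -1.0])                     # parity unitary on a single mode
even = np.diag([0.7, 0.3])                   # commutes with v: an even state
odd_vec = np.array([1.0, 1.0]) / np.sqrt(2)  # eigenvector of sigma_x = a + a*, odd
odd = np.outer(odd_vec, odd_vec)

print(p_theta(even, v))   # 1.0 : even states attain the maximum
print(p_theta(odd, v))    # ~0  : "maximally odd" pure state
```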
The following is Lemma 3.1 of Ref. 2.
Lemma 7.
If ̺1 is a pure state of A(K) and π̺1 and π̺1Θ are unitarily equivalent, then there exists a self-adjoint unitary u1 ∈ π̺1(A(K)+)′′ satisfying

π̺1(Θ(A)) = u1 π̺1(A) u1, A ∈ A(K).   (19)

The next proposition is the basis of our construction. It is a generalization of Ref. 7. The first paragraph is in principle excerpted from Theorem 4 (4) and (5) of Ref. 2; the second paragraph is needed for the argument about entropy.

Proposition 8. Let K and I be mutually disjoint subsets. Assume that ̺1 is a (noneven) pure state of A(K) satisfying pΘ(̺1) = 0, and that ̺2 is an even state of A(I). There exists a joint extension of ̺1 and ̺2 other than their product state extension if and only if ̺1 and ̺2 satisfy the following pair of conditions:

(i) π̺1 and π̺1Θ are unitarily equivalent.

(ii) There exists a state ̺̃2 of A(I) such that ̺̃2 ≠ ̺̃2Θ and

̺2 = (̺̃2 + ̺̃2Θ)/2.   (20)

For each such ̺̃2, there exists a joint extension ψ̺̃2 of ̺1 and ̺2 to A(K ∪ I) which satisfies

ψ̺̃2(A1(A2+ + A2−)) = ̺̄1(π̺1(A1)) ̺̃2(A2+) + ̺̄1(π̺1(A1)u1) ̺̃2(A2−),   (21)

for A1 ∈ A(K) and A2± ∈ A(I)±, where ̺̄1 is the GNS-extension of ̺1 to π̺1(A(K))′′. If K and I are finite subsets, then the entropy of ̺̃2 is equal to that of ψ̺̃2.
Proof. We shall show only the sufficiency of the pair of conditions (i) and (ii); for the necessity of (i) see 5.2 in Ref. 2, and for that of (ii) see (d) in the proof of its Theorem 4 (4). Let (H̺1, π̺1, Ω̺1) be a GNS triplet for ̺1 and (H̺̃2, π̺̃2, Ω̺̃2) be that for ̺̃2. Define π on the Hilbert space H ≡ H̺1 ⊗ H̺̃2 by

π(A1A2) ≡ π̺1(A1) ⊗ π̺̃2(A2+) + π̺1(A1)u1 ⊗ π̺̃2(A2−)

for A1 ∈ A(K), A2 = A2+ + A2−, A2± ∈ A(I)±. Let 11 be the identity operator of H̺1 and 12 be that of H̺̃2. We can check that the operators π(A(K ∪ I)) satisfy the CAR by using (19), and hence π extends to a representation of A(K ∪ I). We define the state ψ̺̃2 on A(K ∪ I) as

ψ̺̃2(A) ≡ (Ω, π(A)Ω), Ω ≡ Ω̺1 ⊗ Ω̺̃2,   (22)

for A ∈ A(K ∪ I). The von Neumann algebra π(A(K ∪ I))′′ is generated by π̺1(A(K))′′ ⊗ 12, 11 ⊗ π̺̃2(A(I)+)′′, and the weak closure of 11 ⊗ π̺̃2(A(I)−), where we have noted u1 ∈ π̺1(A(K)+)′′ = B(H̺1). Therefore

π(A(K ∪ I))′′ = B(H̺1) ⊗ π̺̃2(A(I))′′.   (25)

From this it follows that the vector Ω is cyclic for the representation π of A(K ∪ I) in H. Hence (H, π, Ω) gives a GNS triplet for the state ψ̺̃2 on A(K ∪ I).
By (21) we have ψ̺̃2(A1) = ̺1(A1) for A1 ∈ A(K). We will then show that ψ̺̃2 restricts to ̺2 on A(I). Under the condition of Lemma 7 (which is our case), we have

pΘ(̺1) = |(Ω̺1, u1 Ω̺1)|²,

because the transition probability between the vector states of the algebra B(H̺1) = π̺1(A(K))′′ is equal to the (usual) transition probability of their vectors. By the assumption pΘ(̺1) = 0, this implies

(Ω̺1, u1 Ω̺1) = 0.

Hence ψ̺̃2(A2−) = ̺̄1(u1) ̺̃2(A2−) = 0 for A2− ∈ A(I)−, while by (20), ψ̺̃2(A2+) = ̺̃2(A2+) = ̺2(A2+) for A2+ ∈ A(I)+. We have now shown that ψ̺̃2 is an extension of ̺1 and ̺2.

We will show the second paragraph. By (25) and the commutant theorem, π(A(K ∪ I))′ = 11 ⊗ π̺̃2(A(I))′, where the commutant is taken in each GNS space. Thus we have the isomorphism

a ∈ π̺̃2(A(I))′ ↦ 11 ⊗ a ∈ π(A(K ∪ I))′   (33)

from π̺̃2(A(I))′ onto π(A(K ∪ I))′. Furthermore, by (22) we obtain

(Ω, (11 ⊗ a) Ω) = (Ω̺̃2, a Ω̺̃2), a ∈ π̺̃2(A(I))′.   (34)

From the assumption that K and I are finite subsets (which we have not used so far), π̺̃2(A(I))′ and π(A(K ∪ I))′ are both finite-dimensional type I factors.
Now we note the following basic fact about GNS representations for states of finite-dimensional type I factors which can be considered as the counterpart of Lemma 2 for the usual case, namely for a pair of isomorphic systems coupled by tensor product. Let ω be a state of a finite-dimensional type I factor A and (H ω , π ω , Ω ω ) denote a GNS triplet of ω. The GNS vector Ω ω of ω induces a state on the commutant π ω (A) ′ whose expectation value for a ∈ π ω (A) ′ is given by (Ω ω , aΩ ω ). We call this state on π ω (A) ′ "ω on the commutant". (In our terminology, the pure state with respect to Ω ω on B(H ω ) = π ω (A) ⊗ π ω (A) ′ gives a symmetric purification of ω. Also ω on the commutant is symmetric to ω.) Then the entropy of ω on A (equivalently that on π ω (A)) is equal to the entropy of ω on the commutant by the same reason described in Lemma 2. We note that this holds for a general C * -algebra if the GNS representation of a given state generates a type I von Neumann algebra with a discrete center. Similarly the extension of Lemma 2 is possible under the above condition on the state.
From the above fact with (33) and (34), we deduce the equality of the entropies of ̺̃2 and of ψ̺̃2.

Remark 1: Note that this ψ̺̃2 is a state extension of ̺1 and ̺2, not one of ̺1 and ̺̃2. The possibility of a state extension of ̺1 and ̺̃2 is negated by Theorem 4 (3) of Ref. 3.
Remark 2:
We can easily make examples of states satisfying the assumptions of this proposition. Take a finite subset K and an odd self-adjoint element A in the algebra A(K), which we identify with B(H) for a finite-dimensional Hilbert space H. Let η ∈ H be a normalized eigenvector of A with a nonzero eigenvalue and let ωη denote the associated vector state. Then η ⊥ vKη, and ωηΘ becomes the vector state with respect to vKη. Hence pΘ(ωη) = 0. This ωη obviously satisfies (i). For the existence of ̺̃2 satisfying (ii), take for example the analogously constructed ωη on A(I) for ̺̃2, ωηΘ for ̺̃2Θ, and their affine sum (20) for ̺2.

Theorem 9. Let K, I, and J be mutually disjoint subsets. Let ̺K = ̺1 and ̺I = ̺2, where ̺1 and ̺2 are the states on A(K) and on A(I) given in Proposition 8. Let ̺K∪I be the state extension of ̺K and ̺I to A(K ∪ I) given by ψ̺̃2 in the form of (21). Let ̺J be an arbitrary even state of A(J). Then for such ̺K∪I and ̺J there exists a (unique) product state extension ̺K∪I • ̺J on A(K ∪ I ∪ J), and if K, I, and J are all finite subsets, this state violates MONO-SSA.

Proof. Since ̺2 = (̺̃2 + ̺̃2Θ)/2 with ̺̃2 ≠ ̺̃2Θ, the strict concavity of von Neumann entropy (see Remark 3) together with S(̺̃2Θ) = S(̺̃2) gives

S(̺2) > S(̺̃2).

By the product property of ̺K∪J = ̺K • ̺J, the density matrix of ̺K∪J is the product of the density matrices of ̺K and ̺J, which mutually commute because D̺J ∈ A(J)+. Hence by a direct computation we have

S(̺K∪J) = S(̺K) + S(̺J).

Since ̺K is assumed to be pure and hence S(̺K) = 0, we have S(̺K∪J) = S(̺J) and, by Proposition 8, S(̺K∪I) = S(ψ̺̃2) = S(̺̃2). Therefore

S(̺I) + S(̺J) = S(̺2) + S(̺J) > S(̺̃2) + S(̺J) = S(̺K∪I) + S(̺K∪J),

which is the asserted violation of MONO-SSA.
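The entropy bookkeeping behind this violation can be illustrated with plain numbers. The toy sketch below (ours; it plugs the entropy values reconstructed above into MONO-SSA rather than simulating a CAR system) shows how strict concavity, S(̺2) > S(̺̃2), forces S(̺I) + S(̺J) to exceed S(̺I∪K) + S(̺J∪K).

```python
import numpy as np

def S(p):
    """Entropy of a probability vector (spectrum of a density matrix)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

tilde = np.array([0.9, 0.1])           # spectrum of tilde rho_2, != its Theta-image
rho2  = (tilde + tilde[::-1]) / 2      # Theta modeled as swapping the two weights
S_J   = 0.4                            # entropy of an arbitrary even state on J

# values from the proof: S(rho_{I u K}) = S(tilde), S(rho_{J u K}) = 0 + S_J
lhs = S(rho2) + S_J                    # S(rho_I) + S(rho_J)
rhs = S(tilde) + S_J                   # S(rho_{I u K}) + S(rho_{J u K})
print(lhs > rhs)                       # True: MONO-SSA fails for this noneven model
```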
Remark 3:
Let H be a finite-dimensional Hilbert space. For states ϕ and ψ on the algebra B(H) and for 0 ≤ λ ≤ 1, the following von Neumann entropy inequalities are well known:

S(λDϕ + (1 − λ)Dψ) ≥ λS(Dϕ) + (1 − λ)S(Dψ),   (42)

S(λDϕ + (1 − λ)Dψ) ≤ λS(Dϕ) + (1 − λ)S(Dψ) − λ log λ − (1 − λ) log(1 − λ).   (43)

We refer to Proposition 6.2.25 of Ref. 4 for their proofs. We now show the strict concavity of von Neumann entropy, which was used in the proof of Theorem 9: namely, for 0 < λ < 1 the equality in (42) holds if and only if ϕ = ψ. We employ the proof method given in the above-mentioned reference. Let K be a two-dimensional Hilbert space and P denote a one-dimensional projection of B(K). We denote A1 ≡ B(H), A2 ≡ B(K), and A1,2 ≡ B(H ⊗ K). Let ω denote the state on A1,2 whose density matrix Dω is given by λDϕ ⊗ P + (1 − λ)Dψ ⊗ (1 − P). | 2014-10-01T00:00:00.000Z | 2004-05-14T00:00:00.000 | {
"year": 2004,
"sha1": "95d5bc8e8b6159fa95110ca47fb2d6a1b77cea86",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math-ph/0405042",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "95d5bc8e8b6159fa95110ca47fb2d6a1b77cea86",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
73591871 | pes2o/s2orc | v3-fos-license | Isoenzyme activity in maize hybrid seeds harvested with different moisture contents and treated
The analysis of isoenzyme activity is an important tool for monitoring and characterizing the physiological quality of seeds and for understanding their deterioration. The purpose of this work was to study isoenzyme expression in relation to the quality of maize hybrid seeds harvested at different moisture levels and subjected to chemical treatment. A completely randomized experimental design was used with four replicates, in a 3x2 factorial arrangement with three moisture levels (45%, 40% and 35%) and two seed-treatment conditions (with and without treatment). Seeds from the maize hybrids semi-hard BM 810 and dented BM 3061 were used. Seeds were gathered manually on the ears. Chemical treatment was performed with the commercial products Maxim® + K-obiol® + Actellic®. Seed quality was assessed by moisture content, incidence of mechanical damage, first count of germination, germination, emergence, emergence speed index, mean emergence time, accelerated aging, and electrical conductivity. Isoenzyme expression was assessed by means of the systems superoxide dismutase (SOD), catalase (CAT), esterase (EST), alcohol dehydrogenase (ADH), malate dehydrogenase (MDH), peroxidase (PO) and α-amylase. Isoenzyme expression differs depending on the moisture level at harvest, the maize hybrid and seed quality. Seed treatment does not interfere with isoenzyme expression.
Introduction
Maize seed companies improve their cultivation systems every year in order to obtain high-quality seeds. This search drives heavy investment in technologies that enable better use of time through early harvesting.
Early harvest of maize seeds ensures higher quality owing to less exposure to adverse environmental conditions, allows better use of planting areas and the possibility of vacating them earlier, and enables planning of the drying process, providing better utilization of the production and processing infrastructure (Ferreira et al., 2013).
However, seeds harvested close to physiological maturity, the period during which they have maximum vigor, show high water content; this demands improved postharvest techniques so that seed quality is not reduced, since at the molecular level a number of mechanisms contribute to deterioration.
The enzymes involved in deterioration, such as esterase (EST), malate dehydrogenase (MDH), alcohol dehydrogenase (ADH), catalase (CAT) and peroxidase (PO), have great potential as molecular markers to monitor and characterize seed physiological quality (Veiga et al., 2010), besides providing an understanding of the causes of reduced vigor and viability (Galvão et al., 2014).
Delayed harvest leads to increased deterioration, as evidenced by the reduction in peroxidase activity, an enzyme that provides antioxidant protection to seeds (Tunes et al., 2014), and by the increased activity of ADH, which increases anaerobic respiration (Caixeta et al., 2014).
Therefore, with the early harvest and the high moisture content of the seeds, there may be a significant increase in respiratory rate and deterioration if the processes subsequent to harvesting are not properly conducted (Galvão et al., 2014).
According to Caixeta et al. (2014), in seeds processed and stored for eight months there is increased activity of the malate dehydrogenase enzyme, because the deterioration process is more advanced.
The loss of peroxidase activity due to increased deterioration can make seeds more sensitive to the effects of oxygen and free radicals on membrane unsaturated fatty acids, causing degeneration of these membranes and compromising seed vigor (Martins et al., 2011).
The joint analysis of several enzyme systems enables the verification of changes that occur inside the seeds when subjected to some kind of treatment that influences quality and productivity (Tunes et al., 2014).
Studies of isoenzyme expression in maize seeds harvested with different moisture contents, and of changes in enzyme activity due to chemical seed treatment, are scarce. Thus, the aim of this research was to study the activity of the isoenzymes superoxide dismutase (SOD), catalase (CAT), esterase (EST), alcohol dehydrogenase (ADH), malate dehydrogenase (MDH), peroxidase (PO) and α-amylase in relation to the quality of hybrid maize seeds, treated and untreated, harvested with different moisture contents.
Material and Methods
The hybrid seeds used were BM 810 and BM 3061, classified as semi-hard and toothed respectively, produced by the company Biomatrix in the municipality of Paracatu, in the northwest of the Brazilian state of Minas Gerais (17° 13′ 19″ S, 46° 52′ 30″ W, 1,008 m altitude).
At random points in the defined experimental area, maize ears were harvested manually when the seeds had 45%, 40% and 35% water content. The multi-grain apparatus Grain Analysis Computer 2100 was used to ascertain seed moisture. After harvest, the ears underwent mechanical husking in a CWA-brand husking machine at 312 rpm, followed by drying in a stationary dryer at 35 °C until the seeds reached 22% water content and then at 42 °C until they reached 12% water content. After drying, the seeds were mechanically threshed and treated with Maxim® + K-obiol® + Actellic®, using 13.75 mL of Maxim XL, 0.45 mL of K-obiol, 0.45 mL of Actellic, 2.5 mL of dye and 100 mL of water (formulated for 100 kg of seeds). The treatments are specified in Table 1. The seeds were submitted to manual classification in sieves at the Central Seed Laboratory of the Department of Agriculture of the Federal University of Lavras, MG, Brazil. For hybrid BM 810, the seeds used were those retained in the oblong sieve 18/64, and for hybrid BM 3061, those retained in the circular sieve 20/64; these sieves were selected because they retained the largest amount of seeds.
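A small sketch (ours; doses as stated above) scaling the treatment slurry, formulated per 100 kg of seeds, to an arbitrary lot:

```python
# slurry composition per 100 kg of seeds, in mL, as described above
DOSES_PER_100KG_ML = {
    "Maxim XL": 13.75,
    "K-obiol": 0.45,
    "Actellic": 0.45,
    "dye": 2.5,
    "water": 100.0,
}

def slurry_for(lot_kg):
    """Volumes (mL) of each product needed to treat a seed lot of `lot_kg` kilograms."""
    return {product: ml * lot_kg / 100.0 for product, ml in DOSES_PER_100KG_ML.items()}

print(slurry_for(25))   # volumes for a hypothetical 25 kg seed lot
```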
The determinations performed to assess seed quality were as follows. Moisture content: by the oven method described in the Rules for Seed Testing, with results expressed as mean percentage per treatment (Table 1) (Brasil, 2009). Incidence of mechanical damage: seeds were immersed in a 0.1% amaranth® dye solution for 2 minutes, then rinsed under running water and assessed according to the methodology described by Oliveira et al. (1998). Germination: according to Brasil (2009), using 50 seeds for each of the four replications, with assessment on the fourth (first count of germination) and seventh day after sowing, computing the percentage of normal seedlings. Accelerated aging: gerbox-type plastic boxes adapted with a hanging aluminum screen were used; 40 mL of water was added to each box, with a single layer of seeds over the entire screen. The boxes were kept in a B.O.D. (Biochemical Oxygen Demand)-type germination chamber at 42 °C for 96 hours (Marcos-Filho, 1999), after which the seeds were submitted to the germination test. Seedling emergence: seeds were sown in plastic trays containing soil + sand (2:1) as substrate, moistened to 60% of the water-holding capacity. The trays were kept in a chamber at 25 °C with a 12-hour photoperiod, with daily assessments of the emergence of normal seedlings and a final count at 14 days after sowing. The final emergence percentage (%E), the mean emergence time (MET) and the emergence speed index (ESI) were recorded (Maguire, 1962). Electrical conductivity: performed according to Vieira and Krzyzanowski (1999) with a Digimed CD-21 conductivity meter, with results expressed in μS·cm⁻¹·g⁻¹; for treated seeds, the conductivity of a blank test without seeds (only the treatment product diluted in water) was subtracted so as to exclude the interference of seed-treatment products in the values obtained.
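As an illustration of the indices just listed (our sketch, with hypothetical counts; Maguire's ESI and the mean emergence time follow the standard formulas ESI = Σ nᵢ/dᵢ and MET = Σ nᵢdᵢ / Σ nᵢ, where nᵢ seedlings emerge on day dᵢ):

```python
def esi(daily_counts):
    """Maguire's (1962) emergence speed index: sum of n_i / d_i."""
    return sum(n / d for d, n in daily_counts.items())

def met(daily_counts):
    """Mean emergence time in days: sum(n_i * d_i) / sum(n_i)."""
    total = sum(daily_counts.values())
    return sum(n * d for d, n in daily_counts.items()) / total

# day -> number of newly emerged normal seedlings (illustrative values)
counts = {5: 10, 6: 22, 7: 12, 8: 4}
print(f"ESI = {esi(counts):.2f}, MET = {met(counts):.2f} days")

def conductivity(reading_uS, blank_uS, sample_mass_g):
    """EC in uS cm^-1 g^-1, subtracting the no-seed blank used for treated seeds."""
    return (reading_uS - blank_uS) / sample_mass_g

print(conductivity(reading_uS=65.0, blank_uS=5.0, sample_mass_g=25.0))
```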
The following enzyme activities were also analyzed: α-amylase, catalase, esterase, peroxidase, superoxide dismutase, malate dehydrogenase and alcohol dehydrogenase. For each system, two samples of 50 seeds from each treatment were used. The seeds were macerated in the presence of the antioxidant PVP and liquid nitrogen and subsequently stored at −86 °C. For the extraction of the enzymes catalase, esterase, peroxidase, superoxide dismutase, malate dehydrogenase and alcohol dehydrogenase, a 0.2 M Tris-HCl pH 8.0 buffer + 0.1% β-mercaptoethanol was used at the ratio of 250 µL per 100 mg of seeds.
For the extraction of the α-amylase enzyme, the seeds were germinated on paper rolls for 70 hours. After this period, the plumule and root were discarded and the remainder was macerated in a mortar on ice, in the presence of liquid nitrogen. For extraction, 200 mg of the powder of germinated seeds was resuspended in 600 µL of extraction buffer (0.2 M Tris-HCl, pH 8.0 + 0.4% PVP). The material was homogenized in a stirrer and kept in a refrigerator overnight, followed by centrifugation at 16,000 × g for 60 minutes at 4 °C.
The electrophoretic technique was performed in a polyacrylamide gel system at 7.5% (separating gel) and 4.5% (concentrating gel). For the α-amylase system, 0.5% soluble starch was added to the polyacrylamide gel. The gel/electrode buffer system used was Tris-glycine pH 8.9. 15 µL of the sample supernatant was applied, and electrophoresis was run at 150 V for 4 hours.
A completely randomized design was used in a 3 × 2 factorial arrangement, the factors being seed moisture content at harvest (45%, 40% and 35%) and seed treatment (treated and untreated), with four replicates per treatment. The hybrids were analyzed separately. For isoenzyme expression, visual analysis of the expression bands was performed.
To compare means, the Tukey test at 5% probability was used, with the software Sisvar (Ferreira, 2011). For the water content and enzyme determinations, statistical analyses of the data were not performed.
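For readers who wish to reproduce this kind of analysis outside Sisvar, the following sketch (hypothetical data; Python with statsmodels, not the software used by the authors) runs the 3 × 2 factorial ANOVA and a Tukey test at 5% probability:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# illustrative germination percentages: 3 moistures x 2 treatments x 4 replicates
df = pd.DataFrame({
    "moisture":  ["45", "45", "40", "40", "35", "35"] * 4,
    "treated":   ["yes", "no"] * 12,
    "germination": [95, 94, 96, 95, 90, 89, 94, 95, 97, 96, 91, 90,
                    96, 93, 95, 96, 89, 88, 95, 94, 96, 95, 90, 91],
})

model = ols("germination ~ C(moisture) * C(treated)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                       # two-way ANOVA table
print(pairwise_tukeyhsd(df["germination"], df["moisture"], alpha=0.05))
```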
Results and Discussion
In the physiological tests of first count of germination, germination, seedling emergence, emergence speed index and electrical conductivity, only the harvest moisture factor was significant; that is, seed treatment did not interfere in the results for either hybrid. There was an interaction between harvest moisture and seed treatment only for the accelerated aging test, in both hybrids BM 810 and BM 3061.
For hybrid BM 810, seeds harvested at 45% water content showed a higher percentage of mechanical damage, particularly damage classified as more serious (grades 3 and 2), whereas seeds harvested at 40% and 35% water content were classified as undamaged (grade 0) (Figure 1).
The germination of seeds harvested at 35% water content was lower than that of seeds harvested at 40% and 45% water content. The vigor of seeds harvested at 45% water content was higher than that of seeds harvested at 35% and 40% water content in the first count of germination, seedling emergence and electrical conductivity tests. The emergence speed index was the same regardless of the water content at which the seeds were harvested (Table 2). The vigor of treated and untreated seeds assessed by the accelerated aging test was higher in seeds harvested at 35% water content. For seeds harvested at 40% and 45% moisture content, treatment resulted in better performance in the accelerated
aging test (Table 4). The seeds of hybrid BM 3061 were more resistant to severe mechanical damage, although slight damage (grade 1) was observed (Figure 2). The higher the water content of the seeds at harvest, the higher the incidence of damage considered serious (grade 3). The incidence of serious damage in seeds harvested at 35% and 40% water content did not differ. This high incidence of mechanical damage was caused above all by the husking and threshing steps which, being mechanized, injure the seeds. The quality of seeds harvested at 40% water content was higher than that of seeds harvested at 35% and 45% water content in the first count of germination, germination and emergence speed index. The electrical conductivity of seeds at 45% water content was higher than that of the others (Table 3). By the accelerated aging test, the vigor of treated seeds exceeded that of untreated seeds for seeds harvested at 35% and 40% water content. Among treated seeds, there was no difference in vigor by the accelerated aging test with respect to the water content at which the seeds were harvested (Table 4). The poor quality of seeds harvested at 45% water content is related to their susceptibility to mechanical damage during the mechanical husking and threshing processes. The damage caused to the seeds is a gateway for organisms harmful to quality, and it is for this reason that the vigor of untreated seeds was lower in the accelerated aging test when compared with treated seeds. [Table 4 footnote: means followed by the same lowercase letter in the column and the same uppercase letter in the row do not differ by Tukey test at 5% probability.]
With the increase of injuries to the seeds, the induction of synthesis and the activity of enzymes and hormones may be affected, reducing the activity of enzymes important in the respiratory process and in the removal of free radicals, and thus reducing seed physiological quality (Galvão et al., 2014). Marcos-Filho (2005) states that an increase in catalase activity indicates the progression of deterioration, owing to the need for more intense action of the enzymes of the antioxidant complex.
These results corroborate those of Veiga et al. (2010), who stated that among the causes of deterioration events are changes in enzyme activity, which enable monitoring and characterizing seed quality.
Figure 4 shows an increased expression of the peroxidase enzyme in seeds harvested at 35% water content. Enzyme activity increases as the seed water content is reduced, creating protection mechanisms. Divergent results were found by Galvão et al. (2014), who observed that delayed harvest reduced peroxidase activity, indicating further deterioration of the seeds with delayed harvest.
It was observed that the changes in ADH activity seem to be due more to the mechanical damage that impairs seed metabolism: higher enzyme activity was observed in both hybrids in seeds harvested at 45% water content, whether treated or untreated.
The membrane systems of seeds harvested at 45% water content suffered a greater effect of exposure to oxygen, owing to the greater mechanical damage sustained during husking, resulting in lower expression of the peroxidase enzyme; this is also borne out by the results of the physiological tests, in which the poor performance of these seeds is noted in both hybrids studied (Martins et al., 2011).
The highest peroxidase activity in hybrid BM 3061 was observed when seeds were harvested at 40% water content, which may be related to their better quality and higher vigor, as confirmed by the germination and emergence speed index tests (Table 3).
In addition to protective enzymes, there are deteriorative enzymes. Among them is esterase, which promotes the hydrolysis of esters, reactions directly related to lipid metabolism, for example that of membrane phospholipids. Esterase promotes destabilization of the lipid bilayer, accentuating the deterioration process (Vieira et al., 2006). Greater expression of this enzyme was observed in seeds harvested at 45% moisture content for both hybrids studied, which makes it clear that seed immaturity and the mechanical damage suffered during processing and drying contributed to the high activity of this enzyme (Figure 5).
For the enzymes alcohol dehydrogenase (ADH) and malate dehydrogenase (MDH), there was greater band intensity in seeds harvested at 45% water content (Figure 6). The absence of oxygen triggers fermentation metabolism by induction of ADH, whereby acetaldehyde is reduced to ethanol by nicotinamide adenine dinucleotide (NAD). According to Veiga et al. (2010), this enzyme is important since it converts acetaldehyde into ethanol, a less toxic compound, and reduces the speed of the deterioration process. Thus, the seeds are less susceptible to the deleterious effects of acetaldehyde when ADH activity is highest (Carvalho et al., 2014). Carvalho et al. (2014), when studying the expression of malate dehydrogenase (MDH) in soybean seeds, noted that the main differences were found with the advancement of the storage period, at six and eight months, and that in these storage periods the seeds stored in a cold chamber had higher MDH activity than those stored in a conventional warehouse, owing to the greater stress suffered under uncontrolled conditions, particularly at eight months of storage. Vieira et al. (2013) found decreased MDH activity from six months of storage at 10 °C and 25 °C, but with greater effect at nine and twelve months of storage at 25 °C.
The highest expression of the MDH enzyme in seeds harvested at 45% water content can be related to their physiological quality, since this is directly linked to the incidence of mechanical damage, which was higher in these seeds. This reflects damage to mitochondrial membranes, the organelle most susceptible to peroxidation. The increased activity may have occurred because of increased respiration in seeds undergoing deterioration, since the enzymes involved in respiration can be activated in lower-quality seeds (Tunes et al., 2014).
With respect to α-amylase, lower expression was observed in seeds harvested at 45% moisture content in both hybrids studied (Figure 7). The development of α-amylase activity is an important event detectable during early seed germination, its main role being to provide substrates for seedling use until the seedling becomes photosynthetically self-sufficient (Caixeta et al., 2014). Seeds harvested at 35% moisture content showed higher enzyme activity: as seeds lose water during maturation, they gain greater tolerance to desiccation and thus become tolerant of high drying temperatures, which, according to Rosa et al. (2005), makes such seeds show higher synthesis of the α-amylase enzyme than intolerant seeds.
Conclusions
Isoenzyme expression varies according to the hybrid and to seed quality.
There is an increase in the activity of the α-amylase and peroxidase enzymes, and a decrease in the activity of the enzymes superoxide dismutase (SOD), catalase (CAT), esterase (EST), alcohol dehydrogenase (ADH) and malate dehydrogenase (MDH), as the seed water content at harvest is reduced.
Seed treatment does not interfere with isoenzyme expression.
Figure 1. Incidence of mechanical damage in maize seeds of hybrid BM 810 (semi-hard), harvested with different moisture contents (45%, 40% and 35%) and classified into four levels (0, 1, 2 and 3) according to the intensity of the damage. Means followed by the same letter for each grade do not differ by Tukey test at 5% probability.
Figure 2. Incidence of mechanical damage in maize seeds of hybrid BM 3061 (toothed), harvested with different moisture contents and classified into different grades. Means followed by the same letter for each grade do not differ by Tukey test at 5% probability.
Table 2. Average values of first count of germination (FC), germination (G), emergence (E), emergence speed index (ESI) and electrical conductivity (EC) of maize seeds of hybrid BM 810, harvested with different moisture contents (M).
Table 3. Average values of first count of germination (FC), germination (G), emergence (E), emergence speed index (ESI) and electrical conductivity (EC) of maize seeds of hybrid BM 3061, harvested with different moisture contents (M). Means followed by the same letter in the column do not differ by the Tukey test at 5% probability. | 2018-12-21T14:17:16.247Z | 2015-07-07T00:00:00.000 | {
"year": 2015,
"sha1": "6705bcd0cb37d008c7a325310aa82ccfc37263c0",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/jss/v37n2/2317-1537-jss-37-02-00139.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6705bcd0cb37d008c7a325310aa82ccfc37263c0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
1227493 | pes2o/s2orc | v3-fos-license | Cross-reactivity of self-HLA-restricted Epstein-Barr virus-specific cytotoxic T lymphocytes for allo-HLA determinants.
Epstein-Barr (EB) virus-specific cytotoxic T cells, prepared from virus-immune donors by reactivation in vitro and maintained thereafter as IL-2-dependent T cell lines, have been tested against large panels of EB virus-transformed lymphoblastoid cell lines of known HLA type. Whilst the pattern of lysis of the majority of targets was always consistent with HLA-A and HLA-B antigen restriction of effector function, in several cases it was noticed that certain HLA-mismatched targets were also reproducibly lysed. When this "anomalous" lysis was investigated in detail, it was found to be directed against allodeterminants on class I HLA antigens; thus, mitogen-stimulated as well as EB virus-transformed lymphoblasts from the relevant target cell donors were sensitive to the killing, and in each case the lysis could be specifically blocked by monoclonal antibodies to class I HLA antigens. In one example the target for this alloreactive lysis could be identified as a single serologically defined antigen, HLA-Bw57, while in another example lysis was directed against a "public" epitope common to HLA-Bw35, -Bw62, and a subset of -B12 antigens. Both cold target inhibition experiments and limiting dilution analysis strongly suggested that this alloreactive lysis was being mediated by the same effector T cells that recognize EB viral antigens in the context of self-HLA. This is the first demonstration in man that alloreactive responses can be derived from within the antigen-specific, self-MHC-restricted T cell repertoire.
Testing of such effector preparations against panels of HLA-typed EB virus-transformed B cell lines has now begun to reveal instances in which certain EB virus-specific cytotoxic T cell preparations, which display classically HLA-restricted recognition of the great majority of targets tested, also show an anomalous lysis of particular HLA-mismatched target cell lines. The present work shows that this anomalous lysis is directed against epitopes on class I HLA alloantigens expressed not only on EB virus-transformed but also on mitogen-stimulated target cells, and is mediated by the same effector cells as mediate virus-specific self-HLA-restricted cytolysis. This is, to our knowledge, the first demonstration that such cross-reactivity exists within the T cell repertoire of an outbred species.
Materials and Methods
Blood Donors and HLA Typing. Blood samples were obtained from healthy adult donors whose immune status with respect to EB virus was assessed by measuring antibodies to the EB virus capsid antigen (24), and who were typed for HLA-A, -B, and -C, and HLA-DR antigens using peripheral blood mononuclear (PBM) cells and EB virus-transformed lymphoblastoid cell lines as described previously (25).
Cell Lines and Culture Medium. Cell lines were prepared and passaged as described previously (25). RPMI 1640 culture medium supplemented with 2 mM glutamine, 100 U/ml penicillin, 100 µg/ml streptomycin, and 10% fetal calf serum (FCS) was used for maintenance of all cell lines and experimental cultures, unless otherwise stated.

Preparation of EB Virus-specific Effector Cells. This has been described in detail elsewhere (22-25). Briefly, 2 × 10⁶ PBM cells were cultured with 5 × 10⁴ X-irradiated autologous EB virus-transformed lymphoblasts for 10 d; at this time the stimulated T cells were harvested by E-rosetting with sheep erythrocytes treated with 2-aminoethylisothiouronium bromide hydrobromide and cultured for 4-6 d with autologous X-irradiated stimulator cells at a responder/stimulator ratio of 5:2. The resulting effector population was expanded using IL-2-containing culture supernatants and repeated addition of X-irradiated stimulator cells.

Cytotoxicity Assays. The conduct of cytotoxicity assays, including cold target competition experiments and assays testing the effect of monoclonal antibodies upon cytotoxicity, was exactly as described previously (24, 25). Mitogen-stimulated lymphoblasts were prepared from cultures of PBM cells either 3 d after exposure to phytohaemagglutinin (PHA) or 5 d after exposure to pokeweed mitogen (PWM). On some occasions mitogen-stimulated lymphoblasts were cultured in medium containing 15% human AB serum in place of 10% FCS.

Monoclonal Antibodies.

Analysis of Results of Cytotoxicity Assays. In order to allow comparison of results of repeated testing of the same effector/target cell combination, the specific lysis was expressed on each occasion as a percentage of the autologous target cell lysis observed in the same experiment at the same effector/target ratio. The mean and standard deviation of the relative percentage lysis were then calculated for each effector/target combination tested across a range of different effector/target ratios (2.5:1 to 20:1).

Limiting Dilution Culture and Assay Procedure. Appropriate numbers of T cells from an IL-2-dependent cytotoxic T cell line from donor StG were cultured in U-shaped 0.2-ml wells in the presence of 2 × 10³ X-irradiated stimulator cells, either from the autologous cell line StG or from the HLA-Bw62-bearing cell line JU, and 25% IL-2-containing culture supernatant. IL-2 and culture medium were replaced twice weekly. After 10 d, wells in which a proliferating colony was visible were subcultured to 2-6 further U-wells, with 2 × 10³ stimulator cells per well. Subculturing and feeding with IL-2 and stimulator cells were continued until each colony was of sufficient size to allow assay for cytotoxicity.
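A small sketch (our illustration, with hypothetical lysis values) of the relative-percentage-lysis normalization described above:

```python
import statistics

def relative_lysis(target_by_ratio, autologous_by_ratio):
    """Mean and SD of target lysis as % of autologous lysis, across E:T ratios."""
    rel = [100.0 * target_by_ratio[r] / autologous_by_ratio[r]
           for r in target_by_ratio]
    return statistics.mean(rel), statistics.stdev(rel)

autologous = {2.5: 30.0, 5: 42.0, 10: 55.0, 20: 63.0}   # % specific lysis
target     = {2.5: 18.0, 5: 26.0, 10: 34.0, 20: 40.0}
mean_rel, sd_rel = relative_lysis(target, autologous)
print(f"relative lysis = {mean_rel:.0f}% +/- {sd_rel:.0f}%")
```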
In order to obtain further information on the cytotoxicity of a large number of colonies, an in situ cytotoxicity assay was used (29), instead of waiting for sufficient numbers of cells to be available for conventional cytotoxicity testing. This was necessary as colonies were often slow-growing. Growing colonies were harvested and split into equal aliquots, some of which were retained for further culture, while the others were transferred to fresh U-shaped well microtest plates for cytotoxicity assay. After feeding with fresh culture medium (without IL-2) the plates were centrifuged briefly, and the supernatant removed and replaced with further fresh medium. Cells were then cultured overnight. Before assay, replicate colony wells were checked visually to ensure that approximately equal numbers of effector cells were present in each assay well. 150 µl of medium was removed from each well by careful suction; 50 µl of fresh medium and 10⁴ ⁵¹Cr-labeled target cells in 100 µl culture medium were then added. By assaying colonies at least 3 d after feeding with IL-2 and replacing the medium three times, negligible amounts of PHA were present in the final assay well and there was never any evidence of lectin-mediated cytotoxicity. The assay was completed in the normal way.
Assessment of Results from the In Situ Cytotoxicity Assay. Although low levels of percentage specific lysis were recorded by this technique, lysed and nonlysed targets could be clearly distinguished; thus wells were scored positive for lysis if >4.5% specific lysis was observed (representing an isotope release at least 3 standard deviations higher than the spontaneous release obtained from 4-6 control wells containing target cells alone). <2% specific isotope release was observed in wells scored as negative for lysis. Most colonies were assayed on 2-4 separate occasions with excellent agreement between the results on each occasion.

Experimental Procedure. EB virus-specific cytotoxic T cell lines were established from virus-immune donors and tested on a large panel of HLA-typed EB virus-transformed lymphoblastoid target cells. From this analysis, the identity of the dominant self-HLA determinants restricting each effector cell population could be determined, and instances of "anomalous" lysis of HLA-mismatched target cells noted. The nature of the "anomalous" cytotoxicity was further investigated (a) by extending the target cell panel to include EB virus-genome-negative mitogen-stimulated lymphoblasts from a range of HLA-typed donors, (b) by monoclonal antibody blocking studies using antibodies specific for monomorphic (framework) or polymorphic determinants on class I HLA antigens, and (c) by cold target inhibition experiments to compare the "anomalous" and EB virus-specific components of cytotoxicity.
Finally, one particular effector T cell line displaying both "anomalous" and EB virus-specific cytotoxicities was seeded at limiting dilution, and the resulting colonies were assayed for both cytotoxic functions.
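Before the results, a sketch (ours, with hypothetical counts) of the scoring rule used for the in situ assay described above:

```python
import statistics

def specific_lysis(exp_cpm, spont_cpm, max_cpm):
    """Standard chromium-release formula, as a percentage."""
    return 100.0 * (exp_cpm - spont_cpm) / (max_cpm - spont_cpm)

def score_well(exp_cpm, control_cpms, max_cpm, threshold=4.5):
    """Positive if lysis > 4.5% and release is >= 3 SD above the target-alone controls."""
    spont = statistics.mean(control_cpms)
    sd = statistics.stdev(control_cpms)
    lysis = specific_lysis(exp_cpm, spont, max_cpm)
    return lysis > threshold and exp_cpm >= spont + 3 * sd

controls = [410, 425, 395, 405, 418, 402]     # target cells alone (cpm)
print(score_well(exp_cpm=640, control_cpms=controls, max_cpm=3200))   # True
```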
Results
Demonstration of Alloreactivity in an EB Virus-specific Self-HLA-restricted Cytotoxic T Cell Population. Fig. 1 presents an analysis of the cytotoxicity displayed by an effector T cell line obtained from the seropositive (i.e., virus-immune) donor JuG (HLA-A1, A2, B8, B14) by appropriate in vitro stimulation with autologous EB virus-transformed B cells. It is clear from a that lysis was preferentially directed towards the autologous stimulating cells and towards those allogeneic EB virus-transformed target cells sharing HLA-B8 in common with donor JuG. A lower level of killing was observed with HLA-B14-matched target cells, whereas there was no significant lysis of target cells sharing only HLA-A1 or -A2 with the effector cells. The HLA-restricted nature of the cytotoxicity is further supported by the results obtained with the 10 HLA-mismatched EB virus-transformed target cell lines shown on the left of b, none of which were lysed. Moreover, the EB virus specificity of these effector cells is apparent from their lack of reactivity when tested against autologous or HLA-B8/B14-bearing mitogen-stimulated lymphoblasts (c) or against the NK-sensitive target cell lines HSB2 and K562 (<10% relative lysis, data not shown).
However, as shown by the remaining results in b, five target cell lines completely mismatched for class I HLA antigens (or sharing only HLA-A2, an antigen that does not mediate virus-specific cytolysis by JuG effector T cells) were unexpectedly lysed at significant levels. This result did not reflect any unique sensitivity of these particular targets to cytolysis per se, since in numerous other assays these same lines were not killed by EB virus-specific effector T cell preparations from other HLA-mismatched donors (data not shown). The alloreactive nature of this "anomalous" lysis was first suggested by the parallel results shown in c. Thus, where testing was possible, PHA-stimulated lymphoblasts prepared from these same individuals were also found to be sensitive to "anomalous" cytolysis by the JuG effector T cell line. This result was obtained on several occasions, irrespective of whether the mitogenic stimulation was carried out using fetal calf serum or human AB serum (Fig. 1 legend).
Subsequent studies showed that high-titer preparations of two different monoclonal antibodies (W6/32 and BB7.7), directed against framework determinants common to all class I HLA molecules, blocked this "anomalous" lysis of HLA-mismatched target lines very efficiently (74% and 82% inhibition for W6/32 and BB7.7, respectively, cf. 81% and 52% inhibition of HLA-restricted virus-specific lysis in the same experiments). However, identification of the polymorphic class I HLA determinant against which "anomalous" lysis was directed in this particular case was difficult, since no serologically defined HLA antigen was common to all five sensitive targets (see Discussion).
Demonstration of Alloreactivity Against an Identifiable Serologically Defined HLA Alloantigen. By contrast, Fig. 2 provides data obtained using another EB virus-specific cytotoxic T cell line with an alloreactive component, where it was indeed possible to identify the particular antigen against which the alloreactivity was directed. In this example, the effector T cell line from seropositive donor JU (HLA-A2, A2, Bw62, Bw62) displayed an EB virus-specific cytotoxic function predominantly restricted through HLA-Bw62, with little A2-restricted lysis. HLA-mismatched target cell lines were not lysed, with the striking exception of all four target cell lines tested that were positive for HLA-Bw57; these showed unusually high levels of lysis, and again this "anomalous" reactivity of JU effector T cells extended to include HLA-Bw57-bearing mitogen-stimulated lymphoblasts (Fig. 2) in tests where all other EB virus-genome-negative target cells were not killed (data not shown).
The availability of the monoclonal antibody MA2.1, which is specific for an epitope shared only by HLA-A2 and -Bw57/Bw58 antigens, allowed testing of the hypothesis that HLA-Bw57 was the target antigen for this "anomalous" lysis. As shown in Fig. 3, high titers of MA2.1 blocked "anomalous" lysis of the two Bw57-bearing target lines RT and TH just as efficiently as did the monoclonal antibody BB7.7, which binds to a common determinant on all class I HLA molecules; this result suggested that Bw57-directed alloreactivity was responsible for all of the observed lysis of the RT and TH cell lines. In this same experiment, the EB virus-specific HLA-Bw62-restricted cytotoxicity of JU effector T cells directed against the autologous cell line was selectively blocked by the binding of BB7.7, but not of MA2.1, to these target cells.
Demonstration of Alloreactivity Against a "Public" Determinant Shared by Certain HLA Antigen Types/Subtypes. Table I summarizes the reactivity of a third effector T cell line, from donor StG, against the target cell panel. The alloreactive nature of the "anomalous" cytotoxicities of the StG effector T cell line was made clear by the results in Fig. 4. Thus, not only EB virus-transformed cells but also mitogen-stimulated lymphoblasts from relevant donors bearing either HLA-Bw62 or -Bw35 or -Bw44 were sensitive to "anomalous" lysis, whereas lysis of autologous or HLA-matched targets was specific for EB virus-transformed cells only. Again the "anomalous" lysis of HLA-mismatched targets was strongly inhibited (>80%) by the class I HLA antigen-specific monoclonal antibody W6/32; in a parallel experiment the class II HLA antigen-specific monoclonal antibody TDR31.1 inhibited lysis by <5% (data not shown).
An analogous series of cold target inhibition experiments was performed to The relative percentage lysis shown for each target represents the specific lysis of that target cell expressed as a percentage of the specific lysis of the EB virus-transformed autologous (JUG) target cell obtained in the same assay at the same effector/target ratio; the mean relative percentage lysis from between two and eight assays is shown for each target cell. (*Target cells AD, AMc, RT, and M7 were serologically typed as bearing HLA-A2 antigens but are as shown as HLA-mismatched targets since they express "variant" HLA-A2 antigens as defined by T cell-restricting determinants [reference 2.5].) In addition to the results shown in c, when PHA-stimulated lymphoblasts from donors GB and FB were cultured and assayed in medium containing either FCS or human AB serum, the results expressed in terms of relative percentage lysis (at effector/target ratio 5:1) were as follows: In FCS containing-medium, GB blasts 99%, FB blasts 89%; in AB serum containingmedium, GB blasts 85%, FB blasts t 10%. I I I I I I I I I I LRT AR, Bw57 | l EB-negotive targets percentage lysis as described in Fig. 1 legend. Lysis of the autologous target cell is shown by the hatched column. (*) As described in Fig. 1 legend, certain target cells (in this case, M7, AMc, AD, RT, and TR) express variant HLA-A2 antigens and are therefore shown as HLA-mismatehed targets.
determine whether the alloreactivity of the polyclonal StG effector cell line reflected genuine cross-reactive lysis by virus-specific self-HLA-restricted effector T cells or was being mediated by a separate population of effectors. The results showed that unlabeled targets bearing any one of the relevant alloantigens, for instance either EB virus-transformed cells or mitogen-stimulated lymphoblasts from the HLA-Bw62-positive donor JU (Fig. 5 b), were capable of significantly inhibiting lysis of autologous EB virus-transformed targets by the StG effector cell line. The degree of inhibition, while less than that shown by unlabeled cells of the autologous cell line itself, was nevertheless equal to that shown by the HLA-matched cell line JuG (from the mother of donor StG) and clearly much greater than the background effects caused by irrelevant cold targets (Fig. 5 b). Conversely, alloreactive lysis by the StG effector T cell line could in each case be inhibited by unlabeled cells of the autologous virus-transformed cell line StG (Fig. 5 a) and of the HLA-matched cell line JuG (data not shown).

[Figure legend fragment: target cells bearing the "cross-reactive" antigens HLA-Bw62, -Bw35, or -Bw44 are also indicated; for each target cell, the results are expressed as relative percentage lysis as described in the Fig. 1 legend.]
Analysis of Virus-specific and Alloreactive Cytotoxicities by Limiting Dilution of the Effector T Cell Line.
Limiting dilution culture of the polyclonal StG effector T cell line provided an independent approach with which to assess the relationship between virus-specific and alloreactive cytotoxicities. In these experiments, the parent cell line was found to have a low plating efficiency, and growing colonies were never obtained at seedings below 5 cells/well. The cell populations obtained could therefore not be designated as clones. Of the colonies derived from seedings of 5-40 cells/well, 24 of 72 colonies tested proved to be cytotoxic. Fig. 6 presents the individual results from four such cytotoxic colonies, chosen to illustrate the patterns of lysis most commonly obtained. In the upper panel, the two colonies (SB11, 20 cells/well; JD3, 5 cells/well) displayed autologous target lysis as well as lysis of all three cross-reacting targets (DW, JU, and M7, bearing HLA-Bw44, Bw62, and Bw35, respectively), while another HLA-mismatched target line, RC, and the NK-sensitive target line HSB2 were not killed. In all, 18 colonies showed this pattern. In contrast, the lower panel shows two colonies (SB9, 20 cells/well; SA11, 40 cells/well) whose cytotoxicity was preferentially directed towards the autologous cell line, with no evidence of any cross-reactivity against the relevant targets DW, JU, and M7; one further colony (40 cells/well) showed this pattern. A further three colonies (seeded at 20-40 cells/well) arising in the same experiment appeared to lyse the autologous target cells and only one of the cross-reacting targets, but in each of these three cases the pattern of reactivity could not be unequivocally identified because the levels of killing observed were borderline. Throughout this work, no colonies were found in which cross-reactive lysis occurred in the absence of cytotoxicity against the autologous virus-transformed line.
Discussion

Investigations in this (21, 24, 30) and in several other (31-34) laboratories have shown that EB virus-specific cytotoxic T cells can be prepared by in vitro reactivation from memory cells in the blood of virus-immune donors, and that such effector cells are HLA-A and -B antigen-restricted in their function. However, a rigorous analysis of the restriction operative in this system has only recently been made possible through the development of IL-2-dependent effector T cell lines that retain the EB virus-specific HLA-restricted function of the original preparations from which they were derived (23). The present investigation was initiated once it became clear that certain effector T cell lines, which
in all other respects exhibited classical HLA-A and HLA-B antigen-restricted function, nevertheless showed "anomalous" lysis of particular HLA-mismatched targets. Detailed results from three effector T cell lines with such "anomalous" reactivity are described in the present paper. These are not isolated examples, however, for this same phenomenon has been observed in the functional analysis of several other EB virus-specific effector T cell lines derived from other donors and tested in this laboratory.
The results demonstrate that "anomalous" lysis of HLA-mismatched targets occurred irrespective of the EB virus genome status of the target cell, and appeared to be directed against class I allo-HLA determinants expressed on the target cell surface. Thus, in each case, both EB virus-transformed and mitogen-stimulated lymphoblasts from the particular allogeneic donors in question were sensitive to "anomalous" lysis (Figs. 1, 2, and 4), in contrast to the reactivity of these same effector cells against autologous or HLA-matched targets, where lysis was confined to the EB virus-transformed cell line only. Moreover, "anomalous" lysis could be specifically inhibited by monoclonal antibodies binding to framework determinants on all class I HLA antigens (Fig. 3), the degree of inhibition being at least as strong, if not stronger, than that observed when these same effector cells were simultaneously tested for EB virus-specific lysis of the autologous virus-transformed cell line.
The target structure for "anomalous" cytotoxicity therefore appears to be either a determinant expressed by a class I allo-HLA molecule per se or an "interaction antigen" (10, 11) formed by an allo-HLA molecule and some other (non-EB virus-associated) cell surface component. A precedent for the latter view comes from studies in the mouse system (35), in which cloned cytotoxic T cells specific for a minor transplantation antigen restricted by a self H-2K allele were found also to recognize a different minor transplantation antigen in the context of an allogeneic H-2D allele. The present experiments cannot determine whether a second antigen is being recognized in the context of allo-HLA, although they can eliminate fetal calf serum proteins in culture medium as possible contributors to such an "interaction antigen", since mitogen-stimulated lymphoblasts prepared in medium containing human AB serum were equally sensitive to "anomalous" lysis (Fig. 1 legend and unpublished results). It was necessary to check this particular point since there are circumstances in which fetal calf serum-directed cytotoxicity can contaminate otherwise EB virus-specific cytotoxic T cell preparations (36, 37).
Although the alloreactive nature of "anomalous" lysis has been clearly demonstrated, the identity of the target alloantigen has not always been determined. Thus, in the first example, shown by the JuG effector T cell line, no one serologically defined HLA-A, -B, or -C antigen was shared by all five targets that were sensitive to "anomalous" lysis (Fig. 1). Yet in this case cold target inhibition studies clearly showed mutual cross-inhibition within this group of targets (data not shown), suggesting that each was being recognized through its expression of a common allodeterminant. Perhaps the most likely candidate for such a determinant would be an epitope shared by the serologically related HLA-A3 and -A11 antigens, since all five targets expressed one or other of these antigens.
In contrast, the target antigen responsible for allo-recognition by the EB virus-specific JU effector T cell line was positively identified as HLA-B17 (Bw57). Not only was "anomalous" lysis confined to Bw57-bearing target cells (Fig. 2), but also the killing was strongly inhibited by the monoclonal antibody MA2.1. In contrast, binding of MA2.1 to the HLA-A2-bearing autologous target cell line JU did not reduce its EB virus-specific lysis by the same effector population, since most of this activity was restricted through another determinant (HLA-Bw62) on the target cell surface. These particular results suggest that in this case effector cells specific for the EB virus-induced lymphocyte-detected membrane antigen (LYDMA) presented in the context of HLA-Bw62 (self-HLA) were recognizing a cross-reacting epitope presented by the HLA-Bw57 molecule. It is interesting to note here that Bw62 and Bw57 do share some cross-reactive epitopes as revealed by HLA-typing sera (38), so that the interaction of LYDMA with Bw62 might present one or more such epitopes in an immunogenic form to the T cell system. Analysis of the "anomalous" reactivity of the StG effector T cell line revealed a complex relationship between the target alloantigen recognized by T cells and the serologically defined polymorphism of HLA antigens. Thus, lysis involved all of the Bw35-bearing, all of the Bw62-bearing, and a subset of the B12-bearing targets tested (Table I). This subdivision of B12, while clearly distinct from the serologically defined split into Bw44 and Bw45 subtypes, is strikingly similar to that recently reported by other workers using anti-B12 alloreactive T cells generated in a standard mixed lymphocyte culture (39, 40). The fact that all three alloreactivities, against Bw35, Bw62, and B12, respectively, showed considerable mutual cross-inhibition in cold target inhibition experiments (Fig. 5 and unpublished results) strongly suggested that they share a "public" specificity against which the "anomalous" cytotoxicity of StG effector T cells is directed. In this context, it is interesting to note that cross-reactions between Bw35 and Bw62 and between B12 and Bw35 have been identified by serological and cell-mediated lympholysis techniques (38, 41, 42). To our knowledge this is the first identification of a determinant common to all three antigens.
A central question posed by these findings is whether the alloreactivity observed is a genuine cross-reactivity displayed by the EB virus-specific effector T cells themselves or whether it represents a separate reactivity emerging pari passu with the expansion of virus-specific effector cells in IL-2. As a preliminary approach to this question, cryopreserved samples of the original effector cell preparations from which the IL-2-dependent StG and JuG effector T cell lines were derived were thawed and the cells tested on a representative series of targets; "anomalous" lysis of selected HLA-mismatched targets was again apparent even at this early stage (unpublished experiments). Subsequent cold target inhibition experiments supported the view that both virus-specific and alloreactive cytotoxicities were being mediated by the same effector cells. For example, using StG effector T cells, alloreactive lysis could be strongly inhibited by unlabeled target cells of the autologous EB virus-transformed cell line (Fig. 5 a) but not by autologous mitogen-stimulated lymphoblasts (data not shown), while EB virus-specific cytolysis was itself significantly inhibited either by EB virus-transformed cells or by mitogen-stimulated lymphoblasts bearing the relevant alloantigen (Fig. 5 b). Several other examples of alloreactivity, including those displayed by the JuG and JU effector T cell lines, have been analyzed in a similar way and these have always shown significant cross-inhibition between the virus-specific and alloreactive components of the cytotoxicity, although the strength of mutual cross-inhibition did vary between individual examples.
The relationship between virus-specific and alloreactive cytotoxicities of the StG effector cell line was further pursued by limiting dilution culture. Although the relatively poor colony-forming ability of this line meant that formal criteria for cloning could not be satisfied, colonies were isolated that maintained cytotoxicity against the autologous line but that, unlike the bulk culture, had no alloreactivity; on the other hand, all colonies that displayed alloreactivity also maintained virus-specific cytotoxicity (Fig. 6). This latter result could not be accounted for on the basis of a less stringent limiting dilution procedure used in these particular cases; in fact, those colonies showing only virus-specific lysis were all derived from wells at the higher cell seedings (20-40 cells/well), while those colonies derived from wells at the lowest cell seeding (5 cells/well) displayed both virus-specific and alloreactive cytotoxicities. The inference from these experiments, namely that the StG effector T cell line is composed of clones showing virus-specific lysis only and of clones showing virus-specific and alloreactive lysis, is entirely consistent with the earlier results from cold target inhibition experiments (Fig. 5).
The overall evidence therefore strongly supports the view that EB virus-specific, self HLA-restricted cytotoxic T cells can recognize cross-reactive epitopes presented by class I HLA alloantigens. To our knowledge this is the first demonstration in an outbred species (i.e., man) of a phenomenon that is becoming well recognized in studies with inbred mouse strains using either virus-specific (13, 16, 43) or minor transplantation antigen-specific (12, 14, 35, 44) cytotoxic T cells. Indeed it should be stressed that such cross-reactivities appear to be quite common in the EB virus system, having been detected in most if not all of the polyclonal effector T cell populations that have been closely analyzed in the course of this work. Such a result is not surprising in that the various target cells used in such an analysis are derived from many different individuals of an outbred species and thus provide a much greater variety of allodeterminants than are conventionally tested in analogous murine systems; the chances of recognizing fortuitous cross-reactions are therefore correspondingly increased.
Finally, the present studies give support to the thesis that alloreactive responses are derived from within the antigen-specific self-MHC-restricted T cell repertoire, a view that carries with it the rider that certain antigen-specific T cell clones restricted through one self-MHC determinant could show cross-reactivity against a different self-MHC molecule. In fact, such clones would be expected to be either suppressed or deleted in the self-tolerant animal (45,46) and indeed this may explain why in certain mouse strains, for instance, particular combinations of class I MHC molecule and viral antigen do not appear to induce an effective T cell response (47,48). Preferential restriction of effector T cells through some but not all of the available class I MHC antigens is also a well recognized feature of the cytotoxic T cell response to EB virus in man (24) and the present demonstration of the alloreactive potential of these virus-specific cytotoxic T cells is particularly interesting in the light of the model outlined above.
Summary
Epstein-Barr (EB) virus-specific cytotoxic T cells, prepared from virus-immune donors by reactivation in vitro and maintained thereafter as IL-2-dependent T cell lines, have been tested against large panels of EB virus-transformed lymphoblastoid cell lines of known HLA type. Whilst the pattern of lysis of the majority of targets was always consistent with HLA-A and HLA-B antigen restriction of effector function, in several cases it was noticed that certain HLA-mismatched targets were also reproducibly lysed. When this "anomalous" lysis was investigated in detail, it was found to be directed against allodeterminants on class I HLA antigens; thus, mitogen-stimulated as well as EB virus-transformed lymphoblasts from the relevant target cell donors were sensitive to the killing, and in each case the lysis could be specifically blocked by monoclonal antibodies to class I HLA antigens. In one example the target for this alloreactive lysis could be identified as a single serologically defined antigen, HLA-Bw57, while in another example lysis was directed against a "public" epitope common to HLA-Bw35, -Bw62, and a subset of -B12 antigens. Both cold target inhibition experiments and limiting dilution analysis strongly suggested that this alloreactive lysis was being mediated by the same effector T cells that recognize EB viral antigens in the context of self-HLA. This is the first demonstration in man that alloreactive responses can be derived from within the antigen-specific, self MHC-restricted T cell repertoire. | 2014-10-01T00:00:00.000Z | 1983-12-01T00:00:00.000 | {
"year": 1983,
"sha1": "b3f9e9f1f51700ee1ce4e960de0db6ca22299e48",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/158/6/1804.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3f9e9f1f51700ee1ce4e960de0db6ca22299e48",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
255185930 | pes2o/s2orc | v3-fos-license | Interactions between the jet and disk wind in a nearby radio intermediate quasar III Zw 2
Disk winds and jets are ubiquitous in active galactic nuclei (AGN), and how these two components interact remains an open question. We study the radio properties of the radio-intermediate quasar III Zw 2. We detect two jet knots J1 and J2 on parsec scales, which move at a mild apparent superluminal speed of $1.35\,c$. Two $\gamma$-ray flares were detected in III Zw 2 in 2009--2010, corresponding to the primary radio flare in late 2009 and the secondary radio flare in early 2010. The primary 2009 flare was found to be associated with the ejection of J2. The secondary 2010 flare occurred at a distance of $\sim$0.3 parsec from the central engine, probably resulting from the collision of the jet with the accretion disk wind. The variability characteristics of III Zw 2 (periodic radio flares, unstable periodicity, multiple quasi-periodic signals and possible harmonic relations between them) can be explained by the global instabilities of the accretion disk. These instabilities originating from the outer part of the warped disk propagate inwards and can lead to modulation of the accretion rate and consequent jet ejection. At the same time, the wobbling of the outer disk may also lead to oscillations of the boundary between the disk wind and the jet tunnel, resulting in changes in the jet-wind collision site. III Zw 2 is one of the few cases observed with jet-wind interactions, and the study in this paper is of general interest for gaining insight into the dynamic processes in the nuclear regions of AGN.
INTRODUCTION
Disk winds and jets are ubiquitous in active galactic nuclei (AGN) (Yuan & Narayan 2014; Blandford et al. 2019) and play an important role in the AGN feedback to their host galaxies (King & Pounds 2015; Harrison et al. 2018). The emission from radio-quiet (RQ) AGN, which occupy the majority of the AGN population, is dominated by thermal emission related to the accretion disk (Padovani 2016; Panessa et al. 2019). Observational evidence and magnetohydrodynamic models suggest that low-power jets and winds may coexist in RQ AGN (e.g. Tombesi et al. 2012; Fukumura et al. 2014; Giroletti et al. 2017). However, whether and how the jet and disk wind interact with each other remains an open question (Panessa et al. 2019). Some studies suggest that there exists a class of objects with moderate radio loudness, called radio-intermediate (RI) AGN (Miller et al. 1993). Observing RI AGN is much less difficult than RQ AGN, and RI AGN have mixed properties of radio-loud (RL) and RQ AGN, thus providing an opportunity to study jet- and wind-driven AGN feedback and jet-wind interactions.
III Zw 2 (Zwicky 1967, also named PG 0007+106, Mrk 1501) is an unusual AGN with many enigmatic observational characteristics. It is hosted by a spiral galaxy (Hutchings et al. 1982; Hutchings & Campbell 1983) at redshift z = 0.0893 (Sargent 1970), with a prominent tidal arm (Surace et al. 2001) to the north of the nucleus, but it shows the spectroscopic characteristics of a typical type I Seyfert galaxy (Arp 1968; Osterbrock 1977). It is further included in the bright quasar sample (Schmidt & Green 1983) with a bolometric luminosity up to ≈10⁴⁵ erg s⁻¹ (Piccinotti et al. 1982).
In radio bands, III Zw 2 is identified as a RI AGN (Falcke et al. 1996b) with a radio loudness of ∼200 (Kellermann et al. 1989, 1994). The source shows a large extended structure on kilo-parsec (kpc) scales (Unger et al. 1987; Brunthaler et al. 2005), but its radio emission on parsec (pc) scales shows blazar-like behavior (Falcke et al. 1999; Liao et al. 2016). The Very Large Array (VLA) images of III Zw 2 show a triple radio structure extending along the northeast-southwest (NE-SW) direction with a total extent over 36″ (Brunthaler et al. 2005). The SW lobe is brighter but shorter than the NE lobe. The recently published images of III Zw 2 observed by the upgraded Giant Metrewave Radio Telescope (uGMRT) at 685 MHz (Silpa et al. 2020, 2021) show additional faint structures beyond the SW and NE lobes previously detected by the VLA, and these outer lobes extend along the direction perpendicular to the NE-SW jet. The most striking feature of III Zw 2 is its extreme variability: over 20-fold in the radio (Wright et al. 1977; Schnopper et al. 1978; Aller et al. 1985; Terasranta et al. 1992; Falcke et al. 1999; Brunthaler et al. 2005) and 10-fold in the X-ray band (Kaastra & de Korte 1988; Salvi et al. 2002a,b); the source is also highly variable in the optical (Lebofsky & Rieke 1980; Sembay et al. 1987), infrared (Lloyd 1984; Clements et al. 1995) and gamma-ray (Liao et al. 2016) bands. Moreover, the large flares of III Zw 2 in multiple bands are found to be correlated (Salvi et al. 2002a). These variability characteristics are very rare among RI and RQ AGN and instead resemble blazars (Aller et al. 2003; Teräsranta et al. 2004; Richards et al. 2011), making III Zw 2 stand out from the RI and RQ AGN populations. In addition, the radio flares seem to exhibit quasi-periodic variations with a period of about 4-5 years (Brunthaler et al. 2003; Li et al. 2010), which warrants an in-depth study of the physical mechanisms underlying the periodic variability.
Studying the structural changes and variability of the jet during prominent flare phases can help reveal the mechanism of jet production and evolution. Imaging newly born jet features requires sub-parsec resolution, which is currently only possible with Very Long Baseline Interferometry (VLBI) observations. In the past four decades, several large radio flares have been found in III Zw 2 (Aller et al. 1985; Terasranta et al. 1992; Brunthaler et al. 2005; Li et al. 2010), with flux density approaching or exceeding 3 Jy at 15 GHz. VLBI observations during the 1998 flare revealed a compact core-jet structure within 0.1-0.4 parsec (Falcke et al. 1996a; Kellermann et al. 1998; Falcke et al. 1999; Brunthaler et al. 2000). A dramatic structural change was found from the 43 GHz VLBI data between 1998 December 12 and 1999 July 15, and an apparent superluminal speed of 1.25 c (Brunthaler et al. 2000) was inferred, making III Zw 2 the first RI AGN detected with apparent superluminal jet motion in a spiral galactic nucleus (Brunthaler et al. 2000, 2005). After the major flare in 2009, the variability amplitude became smaller (see Figure 4 in Brunthaler et al. 2005). In the latest VLBI observations of III Zw 2 in 2017, only one compact core was detected, and the core was significantly fainter than ∼20 years before (Chamani et al. 2021).
In this paper, we investigate the jet kinematics and radio variability and propose a model to explain the quasi-periodic variability, the correlation between the prominent flares and jet production, and the secondary flare created by the jet-wind collision.
DATA
The discovery of the southern extended feature in the uGMRT images (Silpa et al. 2020, 2021) motivated us to use low-frequency interferometric data, including data from the Australian Square Kilometre Array Pathfinder (ASKAP, Hotan et al. 2021) at 888 MHz, the GMRT (Swarup et al. 1991) at 150 MHz (the TIFR GMRT Sky Survey, TGSS, Intema et al. 2017), and the Murchison Widefield Array (MWA, Tingay et al. 2013) observations at 72-231 MHz (Appendix A), to study the complete radio structure, to reveal the kpc-scale morphology, and to constrain the radio spectrum of the extended structure. The VLBI data used for the jet kinematics analysis are obtained from the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE, Lister et al. 2018) program and the archived data from the Astrogeo database 2 . Details of the VLBI data reduction are given in Appendix B. The radio light curve data are obtained from the single-dish monitoring programs of the Owens Valley Radio Observatory at 15 GHz and the Metsähovi radio telescope at 37 and 22 GHz (Appendix C).
RESULTS AND DISCUSSION
3.1. The jet structure

The VLA images of III Zw 2 in the literature show a triple structure along the NE-SW direction (Brunthaler et al. 2005). The central core C dominates the total flux density at almost all wavelengths. The SW lobe is located at 15″.4 (∼25.5 kpc) (Unger et al. 1987; Kukula et al. 1998; Falcke et al. 1999) and is connected to the central core by a curved jet (Silpa et al. 2020): the jet initially points to the west and then gradually bends to the southwest. A weaker lobe is detected at 21″.9 (∼36 kpc) northeast of the core (Brunthaler et al. 2005).
The VLBI image in Figure 1 shows that the pc-scale jet extends to the west and is roughly aligned with the SW lobe (Figure A1); therefore, it is possible that the SW jet is on the advancing side. If the SW and NE lobes were formed from the AGN activity in the same episode, then the advancing jet/lobe should be longer than the receding jet/lobe. However, the actual situation is the opposite. Although the SW lobe is brighter than the NE lobe, both the VLA and ASKAP images (Figure A1) show that the SW lobe is shorter than the NE lobe. The difference in length between the two lobes seems to indicate that the ambient interstellar medium (ISM) on the two opposite sides of the nucleus has different properties (Brunthaler et al. 2005; Silpa et al. 2021), i.e., the ISM on the southwest side has a higher density and the growth of the southwest jet is more obstructed by the ISM. A companion galaxy (Surace et al. 2001) exists ∼30″ south of III Zw 2 (Figure A2), and gravitational interactions may have accumulated a large amount of gas in the intermediate region between the two galaxies. The SW jet bends southward at the SW lobe, where enhanced polarization is observed (Silpa et al. 2021), providing observational evidence for strong interactions between the SW jet and the ISM.
The VLBI image in Figure 1 shows a compact jet with a maximum extent of 1.2 mas (corresponding to a projected distance of ∼2 pc). [Figure 1 caption: contour levels at σ × (1, 2, 4, 8, 16, 32, 64, 128, 256, 512), where the rms noise is σ = 0.15 mJy beam⁻¹; the color scale shows the intensity on a logarithmic scale.] The integrated flux densities obtained from the VLBI images match those obtained from single-dish observations in close epochs, indicating that the contribution of the optically-thin extended jet to the total flux density at GHz frequencies is very small. Another intrinsic reason for the compactness is that the radio activity of III Zw 2 is intermittent and the VLBI jet is short-lived in nature (see discussion in Sections 3.2 and 3.4), which also leads to a short VLBI jet. A similar situation can be seen in another RI AGN, Mrk 231 (Reynolds et al. 2017; Wang et al. 2021).
Combining images of III Zw 2 observed with different resolutions at different frequencies, we find that the jet exhibits an overall S shape. Many physical mechanisms can create an S- or Z-shaped jet morphology, including: backflow from hotspots (Leahy & Williams 1984), buoyancy forces bending the outer radio structure into the direction of decreasing external gas pressure (Worrall et al. 1995), precession of the jet beam (Ekers et al. 1978), or jet reorientation due to hydrodynamic processes associated with galaxy mergers (Gopal-Krishna et al. 2003). III Zw 2 lives in a cluster environment and exhibits an ongoing galaxy merger or galaxy-galaxy interactions (Surace et al. 2001), with observational evidence of an optical tidal arm north of the III Zw 2 nucleus and a companion galaxy ∼30″ south of the nucleus (see Figure A2). The spin-flip of the central engine following a galaxy merger can lead to a re-orientation of the jet, which can produce an S- or X-shaped jet (Gopal-Krishna et al. 2003; Bogdanović et al. 2007). Using a typical advance speed of the terminal hotspot (e.g., M87, v = 0.11 c, Biretta et al. 1995) as a reference, we obtain a kinematic age of ∼10⁶ yr for the extended III Zw 2 lobe. The III Zw 2 jet is not as powerful as the M87 jet and may have a lower hotspot advance speed, which would result in an estimated kinematic age of more than 1 Myr, but it should not be larger than 10⁷ yr. This kinematic age is within the timescale of black hole coalescence (Ebisuzaki et al. 1991) and the typical lifetime of an AGN, allowing jet re-orientation as a possible mechanism for the S-shaped morphology.
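For illustration, a minimal sketch of this kinematic-age estimate (the lobe extent and hotspot advance speed are the values quoted above; the light-travel conversion constant is standard):

# Back-of-the-envelope kinematic age of the kpc-scale lobe, assuming a
# constant hotspot advance speed (inputs taken from the text).
LY_PER_PC = 3.2616            # light-years per parsec

lobe_extent_kpc = 36.0        # projected extent of the NE lobe (~21.9", ~36 kpc)
advance_speed_c = 0.11        # M87-like hotspot advance speed, in units of c

# Light crosses 1 pc in 3.2616 yr, so t = d / v with d in pc gives yr.
age_yr = lobe_extent_kpc * 1e3 * LY_PER_PC / advance_speed_c
print(f"kinematic age ~ {age_yr:.2e} yr")   # ~1.1e6 yr, i.e. ~10^6 yr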
3.2. Jet properties at the pc-scale
The 15-GHz MOJAVE archive data of III Zw 2 covers a time span of up to ∼18 years from July 1995 to June 2013, containing 25 epochs. The sub-mas resolution, high sensitivity and homogeneous image quality of the MOJAVE VLBI images make them ideally suited for studying jet kinematics. Although the jet is very compact, it can be well distinguished from the core in the VLBA images. Figure 2-a shows the variation of the jet position angle with time and Figure 2-b shows the variation of the core-jet separation with time. It is clear that these jet knots belong to two separate components (labeled J1 and J2), rather than a single component. J1 and J2 follow their respective ballistic trajectories at different position angles.
We only used the jet components with the highest confidence, and the data in the other epochs were discarded because the data quality was too poor to obtain a reliable model fit (Appendix B). We obtain proper motion velocities of v(J1) = 1.35 ± 0.13 c and v(J2) = 1.21 ± 0.07 c, consistent with those obtained by the MOJAVE team using all fitted components (Lister et al. 2019), and also in good agreement with previous studies based on the 43-GHz VLBI observations (Brunthaler et al. 2000).
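In practice, the proper motion μ is the slope of a linear regression of the core-jet separation against epoch (Figure 2-b), and the apparent speed follows from β_app = μ D_A (1+z)/c, with D_A the angular diameter distance. A minimal sketch of this conversion, assuming a flat ΛCDM cosmology (H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3; the paper's adopted cosmology may differ) and an illustrative μ value:

import numpy as np
from astropy.cosmology import FlatLambdaCDM

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)
C_PC_PER_YR = 1.0 / 3.2616                      # speed of light in pc/yr

z = 0.0893
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)         # assumed cosmology
D_A_pc = cosmo.angular_diameter_distance(z).to_value("pc")

def beta_app(mu_mas_per_yr):
    """Apparent transverse speed in units of c for proper motion mu."""
    v_app = mu_mas_per_yr * MAS_TO_RAD * D_A_pc * (1.0 + z)   # pc/yr
    return v_app / C_PC_PER_YR

# Illustrative value: mu ~ 0.23 mas/yr reproduces beta_app ~ 1.35.
print(f"beta_app = {beta_app(0.23):.2f}")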
Assuming that the velocity of J1 remained constant from the time it was created until it disappeared, the ejection time of J1 can be traced back to epoch 2004.74 and is temporally correlated with the 2004 flare. Similarly, the ejection of J2 is associated with the 2009 flare. From 1996 onwards, three major radio flares exceeding 3 Jy were observed at 37 GHz, with their peaks in early 1999 (Brunthaler et al. 2005), late 2004 (Chamani et al. 2020) and 2009 (Figure 3 in the present paper). Each flare is associated with the creation of a discrete jet knot (the first discovered superluminal jet knot in Brunthaler et al. 2000, and J1 and J2 reported in this paper), possibly connected with intermittently enhanced accretion (see more discussion in Section 3.5).
Some lower-level flares that occurred during the intervals between these major flares, such as those in 2003 and 2006, did not produce identifiable long-lived jet knots. Although the lack of long-term follow-up monitoring of the 1998 jet prevents us from obtaining information on how it evolved over time, our observations clearly indicate that the proper motion speeds of J1 and J2 are almost the same as that of the 1998 jet, suggesting that the fundamental condition for the jet production has not changed significantly during these jet creation events.
The core brightness temperatures (column 7 of Table B2) calculated from the VLBI-based measurements (core flux density and size) all exceed the equipartition brightness temperature limit (Readhead 1994). The Doppler boosting factor can be estimated as $\delta = T_{\rm b,obs}/T_{\rm b,eq}$, and the Lorentz factor can be estimated as $\Gamma = (\delta^2 + \beta_{\rm app}^2 + 1)/(2\delta)$. We find that large Γ values are related to the major flares. In the quiescent state, we get a mean value of δ = 2.6. Taking into account the jet proper motion speed $\beta_{\rm app} = 1.35$, this yields a viewing angle of approximately 20°, which is a typical value for a non-blazar radio quasar (Ghisellini et al. 1993). A larger viewing angle of 35°.4 was derived by Hovatta et al. (2009), and an upper limit of 41° by Brunthaler et al. (2000), due to the different Doppler factors and jet speeds used.
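A short numerical check of these relations (the viewing-angle expression $\tan\theta = 2\beta_{\rm app}/(\beta_{\rm app}^2 + \delta^2 - 1)$ is the standard one; the input values are those quoted above):

import numpy as np

def jet_parameters(beta_app, delta):
    """Lorentz factor and viewing angle from apparent speed and Doppler factor."""
    gamma = (beta_app**2 + delta**2 + 1.0) / (2.0 * delta)
    theta = np.degrees(np.arctan2(2.0 * beta_app,
                                  beta_app**2 + delta**2 - 1.0))
    return gamma, theta

gamma, theta = jet_parameters(beta_app=1.35, delta=2.6)
print(f"Gamma = {gamma:.2f}, viewing angle = {theta:.1f} deg")   # ~1.8, ~20 deg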
We calculated the magnetic field strength at 1 pc to be ∼ 53 mG (Appendix E), similar to that estimated based on the apparent core shift and synchrotron self-absorption (Chamani et al. 2021). This consistency supports the low jet magnetic flux in III Zw 2 and the inference that the central engine did not reach the magnetically arrested disk state (Chamani et al. 2021). Interestingly, our estimate is based on data from the flaring phases while the earlier estimates by Chamani et al. are based on observations in a quiescent phase, suggesting that the jet structure remains relatively unperturbed by any fresh injection events at the jet base (at a distance of ∼ 800 gravitational radii).
3.3. Radio spectrum
We plot the radio spectrum of III Zw 2 from 72 MHz to 37 GHz in Fig. 4. We have chosen data points as close as possible to the time of the first MWA observation epoch (i.e., 2018 June 15). Due to the lack of low-frequency data in previous studies, the spectrum below 685 MHz has not been well constrained. The spectrum components below and above 685 MHz have distinctly different origins, so we fit the whole spectrum with two radiation components. At ν > 685 MHz, the core dominates the emission, showing an inverted spectrum (or peaked spectrum); at ν < 685 MHz, the flux density of the core is substantially reduced due to increasing synchrotron self-absorption (SSA) toward low frequencies, and the extended jets and lobes become gradually dominant.
We first fit the NE and SW spectra with power-law functions (i.e., S ∝ ν^α) based on the VLA data from the literature (Brunthaler et al. 2005) and obtained their spectral indices as α_NE = −0.79 and α_SW = −0.72, respectively. These are typical values for optically thin synchrotron emission. We then extrapolated the flux densities of NE and SW to the MWA band (central frequencies of 88, 118, 185, and 216 MHz). Next, the flux densities of the core were fitted with a self-absorbed synchrotron spectrum model (Pacholczyk 1970; Türler et al. 1999), fixing the optically-thick spectral index $\alpha_{\rm thick}$ to 2.5 (Appendix D). The fit yields a low-frequency power-law spectrum with amplitude $A = 27^{+12}_{-9}$ mJy and low-frequency spectral index $\alpha_{\rm low} = -0.97^{+0.21}$, and a higher-frequency optically thick spectrum with amplitude $F_{\rm m} = 146^{+11}_{-9}$ mJy, turnover frequency $\nu_{\rm m} = 11.24^{+2.01}_{-1.25}$ GHz, and optically-thin spectral index $\alpha = -0.20^{+0.10}_{-0.12}$. [Figure 2 caption, continued: the blue dashed lines represent the linear regression fit whose slope gives the jet proper motion; the red and green dotted lines mark the times of the γ-ray flare peak and of the 15 GHz radio flare peak, respectively; c) changes of the VLBI components' sizes with time; d) changes of the integrated flux densities of the VLBI components with time; to make its trend more visible, the integrated flux density of the jets has been multiplied by a factor of 2.] However, our spectrum shown in Fig. 4, constructed with data points measured when the source was in a low-activity state (indicated by the red-colored arrow in Figure 3), is different from that presented in Falcke et al. (1999), which was obtained in the flaring state. During the 1998 flare, the core had an inverted spectrum between 1.4 and 666 GHz peaking around 43 GHz (Falcke et al. 1999), a factor of ∼3 higher than our fit. In the quiescent state in 2018 depicted by Figure 4, the emission at GHz frequencies is still dominated by the core C, but the turnover frequency has shifted to $11.24^{+2.01}_{-1.25}$ GHz. We then subtracted the extrapolated flux densities of C, NE, and SW from the observed MWA flux densities; the remaining flux density is mainly from the extended components N+S. Finally, we fitted the N+S flux densities with a power-law function and obtained a spectral index of α = −1.09 ± 0.12, which is much steeper than that of the inner lobes NE and SW, but consistent with the spectral indices of radio relics (Shulevski et al. 2015; Quici et al. 2021).
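A minimal sketch of such a two-component fit (a steep power law for the extended emission plus a self-absorbed synchrotron core in one common parameterization of the SSA spectrum, following Türler et al. 1999, with α_thick fixed to 2.5; the data array below is a synthetic placeholder, not the measured photometry):

import numpy as np
from scipy.optimize import curve_fit

ALPHA_THICK = 2.5

def ssa(nu, F_m, nu_m, alpha_thin):
    """Self-absorbed synchrotron spectrum peaking at (nu_m, F_m)."""
    tau_m = 1.5 * (np.sqrt(1.0 - 8.0 * alpha_thin / (3.0 * ALPHA_THICK)) - 1.0)
    x = nu / nu_m
    return (F_m * x**ALPHA_THICK
            * (1.0 - np.exp(-tau_m * x**(alpha_thin - ALPHA_THICK)))
            / (1.0 - np.exp(-tau_m)))

def model(nu, A, alpha_low, F_m, nu_m, alpha_thin):
    """Steep power law (extended emission) + SSA core; nu in GHz, S in mJy.
    The 1-GHz normalization of the power law is an assumption here."""
    return A * nu**alpha_low + ssa(nu, F_m, nu_m, alpha_thin)

# Placeholder frequencies (GHz) and synthetic demo fluxes (mJy); replace
# with the MWA/GMRT/ASKAP/VLA/single-dish measurements.
nu_obs = np.array([0.088, 0.118, 0.185, 0.216, 0.888, 1.4, 5.0, 15.0, 37.0])
S_obs = model(nu_obs, 27.0, -0.97, 146.0, 11.24, -0.20)

popt, pcov = curve_fit(model, nu_obs, S_obs,
                       p0=[30.0, -1.0, 150.0, 10.0, -0.3])
print(popt)   # recovers ~[27, -0.97, 146, 11.24, -0.20] for the demo input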
3.4. Temporal evolution of the VLBI components
Figure 2 panels c and d show the correlation between the core size and the flux density: the core size gradually increased while the flux density was in the rising phase before the flare peak; after the flare peak, the core size gradually decreased. The Spearman rank correlation coefficient (Lehman 2005; Myers et al. 2013; Dodge 2008) between the core size and flux density is 0.43, with a p-value of 0.14. This positive correlation needs to be verified with more data. If the correlation is further confirmed, this phenomenon may be naturally explained by the superimposition of a flaring, fast-moving jet component on a quiescent, stationary core. The resolution of the VLBI images in this paper is not yet sufficient to distinguish between these two components in the initial flaring phase. Only after the newly produced jet knot has moved out to a sufficient distance can the jet be clearly separated from the core.
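The rank correlation itself is a one-line computation; a sketch with placeholder arrays standing in for the per-epoch measurements in Table B2:

from scipy.stats import spearmanr

core_size = [0.05, 0.08, 0.12, 0.10, 0.07]   # mas, placeholder values
core_flux = [0.4, 0.9, 1.6, 1.2, 0.6]        # Jy,  placeholder values

rho, pval = spearmanr(core_size, core_flux)
print(f"Spearman rho = {rho:.2f}, p = {pval:.2f}")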
As the flaring component (a propagating shock) passed through the core, it led to a gradual increase in the flux density of the observed core. As the duration of the radio flare (lasting several years) was much longer than the cooling time of the synchrotron radiation of relativistic electrons (∼130 days at 15 GHz), a continuous injection of fresh relativistic electrons into the core was required to ensure that the core flux density continued to increase for about two years. The flaring component continued to move outwards and gradually separated from the stationary core. For a period after the flare (about 1 year), the VLBI image resolution was not sufficient to distinguish between the core and the flaring jet component, but the core size was observed to increase as the jet moved outward (Figure 2-c). With the cessation of the intermittent energy injection, the core returns to the quiescent state until the next flaring component passes by. At the same time, the size and brightness of the core gradually became smaller as the flaring jet component continued to move outward and separated from the stationary core. On the other hand, the flaring component became optically thin and its surface brightness declined over time. An alternative model of an inflating balloon has been proposed by Falcke et al. (1999) to explain the spectral and structural evolution of III Zw 2. In their model, the initial phase of the flare is interpreted as the relativistic jet interacting with the torus, and the jet-ISM collision frustrates the jet's advancing motion. Their model is compatible with the one we propose here to explain the evolution of major flares. Moreover, their jet-ISM explanation is also consistent with the jet-wind collision model we propose in Section 3.7 to interpret the secondary 2010 flare. High-cadence, high-resolution VLBI monitoring of the flaring jet would help to refine this physical picture.
The flux density variability of the jet in Figure 2-d is not very pronounced due to the sparse distribution of the data points. On the other hand, the temporal evolution of the jet component size (see Table B2) displays an inverse correlation with the core size, which is most evident after 2009 (corresponding to J2). We suggest that, after separation from the core region, the moving discrete jet component underwent adiabatic expansion, leading to an increase in size as well as a decrease in surface brightness. The jet component eventually disappeared once the brightness of the shock fell below the detection threshold.
To summarize the properties of the jet of III Zw 2 described above, we find that the radio properties of III Zw 2 in the quiescent state make it look more like a Seyfert 1 galaxy. However, its strong variability, apparent superluminal jet motion, and large extended structure are very distinct from normal RQ quasars and instead resemble RL quasars. In particular, its radio properties during the major flares, such as superluminal jet motion and high brightness temperature, are consistent with those of a blazar (Falcke et al. 1999). This hybrid feature, being a radio-quiet quasar in the quiescent state but behaving like a blazar in the flaring state, has been noted in previous studies (e.g. Falcke et al. 1999) and might be a common feature of radio-intermediate AGN (Section 3.8).

3.5. Periodic flares and jet ejection

Earlier light-curve data were presented in Li et al. (2010). The 2009 flare is the highest in magnitude in ≈40 years of radio monitoring, with a maximum flux density of 3.12 Jy, approximately 20 times the baseline in the quiescent state. Such a large radio variability is rare even in blazars' light curves (Richards et al. 2011). After the 2009 flare, the source became less active and the flare peaks gradually decreased, although there were a few lower-amplitude flares in 2013, 2015, and 2017. Since 2018.4, the source has been in a quiescent state. Similar to the 37-GHz light curve, the 15-GHz light curve also displays a number of flares, with the largest one in late 2009 to early 2010. The maximum flux density increased by a factor of 10.5 compared to the baseline level. After 2016, the source entered a low-level state. Brunthaler et al. (2003) found that III Zw 2 has major radio flares about every five years. Li et al. (2010) found the same flare period based on the historical light curves of III Zw 2 at 22 and 37 GHz obtained from the Metsähovi Radio Observatory database (Teräsranta et al. 2005) covering 18 years from 1986 to 2004. We continue to analyze the periodicity of the 37-GHz light curve from 2011.40 and the 15-GHz light curve after 2008 using the Lomb-Scargle periodogram, yielding a period of P ∼ 2.1 yr (Appendix C). This periodic signal is not sharply distributed in the Lomb-Scargle periodograms, therefore it can only be called a quasi-periodic signal. In the wavelet periodograms, the most prominent feature is around 1.97-2.02 yr (Figure 5), and a secondary weaker feature is around 3.97-4.26 yr. The period of P ∼ 4 yr has been steadily maintained for ∼35 years (Brunthaler et al. 2005; Li et al. 2010; and the present paper) with 6-7 complete cycles, suggesting that the periodicity should not be a fake signal caused by random red noise, but is related to some intrinsic dynamical process. The 2-yr periodicity has also appeared in previous periodicity analyses (Li et al. 2010), but at a relatively low magnitude.
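A sketch of the Lomb-Scargle search described above, using astropy's implementation (the light curve here is a synthetic placeholder with a 2.1-yr sinusoid; the red-noise significance assessment via Monte-Carlo simulations used in the paper is omitted):

import numpy as np
from astropy.timeseries import LombScargle

# Placeholder light curve: unevenly sampled epochs with a 2.1-yr signal.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(2008.0, 2021.0, 300))            # epochs (yr)
flux = 0.3 + 0.2 * np.sin(2 * np.pi * t / 2.1) \
       + 0.05 * rng.standard_normal(t.size)              # Jy

freq, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 10.0,
                                             maximum_frequency=1 / 0.5)
best_period = 1.0 / freq[np.argmax(power)]
print(f"strongest period ~ {best_period:.2f} yr")        # ~2.1 yr here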
Quasi-periodic variations in AGN light curves are a common observational phenomenon and are often interpreted as being related to regular perturbations at the jet origin, such as the precession of the jet nozzle (Valtonen et al. 2008) and rotation of a hotspot along a helical path (Camenzind & Krockenberger 1992; Mohan et al. 2016a), or magnetohydrodynamic or hydrodynamic instabilities arising from the starting section of the jet (Hardee 2003; Jorstad et al. 2022) or the disk (Wang et al. 2014) and propagating as helical-mode waves.

[Figure 5 caption: Periodogram of III Zw 2. The top panels show the periodograms of the 37 GHz (left) and 15 GHz (right) light curves; the horizontal dashed line is the white-noise (constant) level, the solid curve the mean periodogram (closest to the underlying power spectral density) from Monte-Carlo simulations using the best-fit model parameters, and the dashed curve the 95% confidence level (any outliers are statistically significant quasi-periodic oscillations); a broad peak corresponding to a period of ∼2 yr stands out in both plots. The bottom panels show the wavelet analysis of the light curves from 2011.6 to 2021 (left: 37 GHz; right: 15 GHz); a persistent component at a characteristic timescale of 1.97-2.02 yr is clearly seen throughout the time span, and a secondary feature appears at a timescale of 3.97-4.26 yr.]
In many cases, warping of the accretion disk can occur. In the case of jet precession, if the spin axis of the primary black hole is not aligned with the orbital plane of the binary, the differential precession with changing radius can cause warping of the outer disk (Bardeen & Petterson 1975). Alternatively, the self-irradiation of a luminous accretion disk can also warp the disk out of the orbital plane (Pringle 1996). The dynamical processes associated with the warped disk generally occur in the outer disk region, while the jet is launched from the inner region of the disk (Blandford & Payne 1982) or the vicinity of the black hole (Blandford & Znajek 1977). There is a big gap between the two physical processes of jet launching and disk warping. The coupling of the jet and disk would require that instabilities originating from the edge or the outer region of the disk be transmitted to the inner region, whereas this inward propagation is difficult to achieve through viscous processes because the required timescale is too long. Instead, the yr-timescale periodic variability of III Zw 2 could result from global acoustic or p-mode oscillations in a thick disk (Rezzolla et al. 2003a,b). The inferred variability period is of the order of several years for a black hole mass of 10⁸ M☉. The trigger of the p-mode oscillations can be a locally periodic agent at the edge of the disk due to either gravitational or radiation perturbations. The inward propagation of global oscillations to the inner region induces quasi-periodic fluctuations in the accretion flow, which in turn trigger quasi-periodic injection of plasma into the jet, leading to the observed harmonic periodic radio variability and the corresponding periodic jet ejection.
In addition to the primary 4-yr period, there is a secondary peak at P ≈ 2 yr in the periodograms, consistent with four lower-amplitude flares in 2013, 2015, 2017 and 2019 (Figure 3). The 2-yr periodic component could be a harmonic of the 4-yr periodicity, with each being dominant in different activity states. Only the fundamental frequency of the oscillations (i.e., $f = 0.25\,{\rm yr}^{-1}$) can produce the largest flares as well as the long-lasting jet knots that can be observed in VLBI images. Lower-amplitude flares associated with harmonic components may also generate jet knots, but they are too weak to be detected. The presence of multiple harmonic periodicities on yearly timescales can be explained by hydrodynamic instability, such as the global p-mode oscillations of the accretion disk. Moreover, the quasi-periodic signals induced by the accretion disk instability have no fixed frequency and often vary within a range, which is consistent with the observations.

3.6. Correlation between gamma-ray and radio flares

The 2009-2011 light curves could be better described with two flare components than with just one (Appendix F). The observed light curve matches well with a simulated light curve consisting of two "exponential rise + exponential decay" flare components as an approximate description of the composite behavior of the flares (Figure F3). The 2009 flare peaked earlier at 37 GHz than at 15 GHz, whereas the 2010 flare peaked almost simultaneously at the two frequencies, with no significant time delay. The fact that the 37-GHz radio flare leads the 15-GHz flare can be naturally explained by the frequency-dependent opacity (Hovatta et al. 2008). The time span between the 2009 and 2010 flares of 0.5-0.6 yr is much longer than the lifetime of synchrotron electrons (i.e., the cooling time of 85 days and 135 days at 37 and 15 GHz), but much shorter than the time separation between two major flares (∼4 yr). Therefore, the 2009 and 2010 flares must be associated with the same episodic nuclear activity but are unlikely to be the same emission component.
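A minimal sketch of this two-flare decomposition (each sub-flare modeled as an exponential rise followed by an exponential decay, a common parameterization for AGN radio flares; all parameter values below are illustrative placeholders, not the paper's fitted values):

import numpy as np
from scipy.optimize import curve_fit

def flare(t, A, t_peak, tau_r, tau_d):
    """Exponential rise to amplitude A at t_peak, then exponential decay."""
    return np.where(t <= t_peak,
                    A * np.exp((t - t_peak) / tau_r),
                    A * np.exp(-(t - t_peak) / tau_d))

def two_flares(t, A1, tp1, tr1, td1, A2, tp2, tr2, td2, base):
    return flare(t, A1, tp1, tr1, td1) + flare(t, A2, tp2, tr2, td2) + base

# t (yr) and S (Jy) are placeholders for the 2009-2011 light curve.
t = np.linspace(2009.0, 2011.5, 200)
S = two_flares(t, 2.8, 2009.95, 0.15, 0.25, 1.2, 2010.5, 0.1, 0.3, 0.3)

p0 = [2.5, 2009.9, 0.1, 0.2, 1.0, 2010.5, 0.1, 0.3, 0.3]
popt, _ = curve_fit(two_flares, t, S, p0=p0)
print(popt[1], popt[5])   # recovered peak times of the two sub-flares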
The 2009 flare is reminiscent of the typical core flares occurring in blazars. In γ-ray AGN, both the γ-ray flare and the associated delayed millimeter-wavelength flare are created in the standing shock in the innermost jet (the shock-in-jet model; Marscher et al. 2008), which is optically thick and shows time delays in the light curves at different radio frequencies. The 2009 radio flare lagged the γ-ray peak (2009.84) by only ∼40 days (Liao et al. 2016), suggesting that the γ-ray emitting zone is spatially connected to the radio-emitting zone and that the γ-ray emitting site is closer to the central engine than the radio flare site. In contrast, the concurrent 2010 flare at 37 and 15 GHz suggests that it arises from an optically-thin jet component, probably associated with a compressed shock propagating downstream in the jet. In addition, the time-integrated flux density is higher at 37 GHz than at 15 GHz, implying that the energy output of the radio source is concentrated at higher frequencies during the most active phase of the flare. By contrast, the peak time and maximum amplitude of the 2010 flare are similar at 37 GHz and 15 GHz, suggesting that the energy dissipation of the 2010 flare is weakly dependent on frequency, reinforcing the intrinsic difference between the 2009 and 2010 flares.
3.7. Jet-wind collision and the associated 2010 flare
The pieces of observational evidence presented in the previous sections, including the temporal evolution of the flux density and jet component size, the optically-thin radio spectrum, and the correlated radio/γ-ray flares, lead us to speculate that the 2010 flare could be a compressed shock caused by a downstream jet-ISM collision. Additional evidence for a jet-ISM collision comes from the VLBA MOJAVE polarimetric observations: the fractional linear polarization of III Zw 2 increased from 0.2% on 3 June 2009 (prior to the 2009 flare) to 0.7% on 12 July 2010 (corresponding to the peak time of the 2010 flare), and the polarization angle changed from 6° to 24°. The enhanced linear polarization and the change of polarization angle in the jet offer direct evidence of a compressed shock created in the jet-ISM collision, as has been found in other radio-loud quasars.
In the warped disk model discussed in Section 3.5, the wind or outflow driven by the radiation pressure of the disk is released from the outer region of the disk and forms a cylindrical or conical structure in the broad line region (BLR). The jet flushes a tunnel along the axial direction of the wind, in which the jet moves outwards. The axis of the disk wind is bound to the axis of the outer disk, and oscillations of the outer disk would lead to the wind wall oscillating within a certain angle as well. Thus, the jet axis is not always parallel to the wind axis; at a certain distance, the jet may hit the inner boundary of the wind wall, producing an oblique shock. At the jet-wind interaction interface, the oblique shock is deflected, after which the shock (jet knot) follows a new ballistic trajectory in the direction of the deflection. The loss of jet kinetic energy during this collision leads to dissipation through radiation, producing the observed prominent γ-ray and radio flares (e.g., the 2010 flare). On the other hand, the oscillations of the wind boundary would cause the working surface of the jet-wind collision to vary with time, leading to a change in the position angle of the (deflected and redirected) VLBI jet, as seen in Figure 2.
Recent studies (Boccardi et al. 2016, 2021) have shown that the powerful nuclei of high-excitation galaxies produce disk-launched winds/outflows which could form slower jet sheaths, in addition to highly relativistic jet spines. The slower sheath may contribute to the collimation of the jet. This is compatible with the model we propose: in our model, it is the accretion disk wind rather than an outer-layer jet that surrounds the jet spine. A collision may occur between the fast-moving jet spine and the inner boundary of the disk wind when the axis of the wind is not aligned with the jet axis. Where did the flares occur? Extrapolating the trajectory of the jet component J2 back to the peak times of the 2009 and 2010 radio flares, we obtain distances of J2 of 0.096 mas (a projected distance of 0.16 pc) and 0.205 mas (0.34 pc) from the black hole, respectively. Neither of these scales is resolvable by the current 15 GHz VLBI observations, and even the previous 43 GHz VLBI observations (resolution of 0.1 mas, Brunthaler et al. 2005) could only barely resolve them. Therefore, all the relevant jet-wind collision and flaring activities are hidden within the 15-GHz VLBI core. Using the same method, we extrapolate the distance of the 2009 γ-ray flare site from the central black hole to be 0.068 pc, a distance that can be considered an upper limit for the jet collimation zone. That is to say, the jet collimation must have been completed at this distance ($7.6 \times 10^3\,R_{\rm g}$ in projection). This distance is comparable with that of the jet collimation zone derived for other AGN, such as M87 (Asada & Nakamura 2012; Hada et al. 2013) and other nearby radio galaxies (Boccardi et al. 2021). The 2010 γ-ray flare site, 0.27 pc from the central engine, favors the jet-wind collision occurring farther downstream in the jet.
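The back-extrapolation is a simple ballistic calculation, r(t) = μ(t − t_ej), converted to projected parsecs with the ∼1.67 pc mas⁻¹ scale at z = 0.0893; a sketch with illustrative μ and t_ej values (placeholders, not the paper's fitted values):

PC_PER_MAS = 1.67          # projected linear scale at z = 0.0893

def separation_mas(t, mu_mas_per_yr, t_ej):
    """Core-jet separation assuming constant (ballistic) proper motion."""
    return mu_mas_per_yr * (t - t_ej)

mu, t_ej = 0.20, 2009.4    # placeholder proper motion and ejection epoch of J2
for epoch, label in [(2009.9, "2009 radio flare"), (2010.4, "2010 radio flare")]:
    r = separation_mas(epoch, mu, t_ej)
    print(f"{label}: r = {r:.3f} mas = {r * PC_PER_MAS:.2f} pc (projected)")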
3.8. Comparison with other similar sources
In this section, we compare the radiative and physical properties of III Zw 2 with those of Mrk 231 and explore the generalized properties of RI AGN. Mrk 231 is one of the closest known radio-quiet quasars, with an extremely strong infrared luminosity and rich multi-phase, multi-scale outflows (Wang et al. 2021 and references therein). III Zw 2 shares similar observational features with Mrk 231: ongoing galaxy mergers, high luminosity, misalignment between the pc-scale and kpc-scale jets, prominent flares, and the associated intermittent jet ejection. In addition, both sources behave like radio-quiet AGN in the quiescent state but like blazars in the flaring state (Wang et al. 2023). Their flare properties and jet kinematics can be explained by the classical synchrotron self-absorption radiation model. The jet knots follow ballistic trajectories on scales of a few parsecs, but the position angle varies from one knot to another. These variability properties and jet structure changes reflect interactions between the jet and the interstellar medium in the broad line region, or the jet being reflected by a rotating torus (Wang et al. 2021). III Zw 2 differs from Mrk 231 in that the III Zw 2 jet extends to a larger distance, while the Mrk 231 jet is confined within the host galaxy. The ability to develop large-scale jets depends not only on the initial kinetic properties of the jet, but also on the external environment (whether or not it chokes the jet), and the ability of the central engine to remain active for a long period has a profound effect on jet growth (An & Baan 2012).
SUMMARY
In this paper, we analyze in detail the radio structure and variability properties of III Zw 2, a radio-intermediate AGN.
The main results are summarized below:

• The overall jet structure from pc to kpc scales shows an S-shaped morphology, probably related to jet re-orientation due to galaxy interaction. The low-frequency ASKAP and MWA images confirm the presence of extended emission 27″ to the north and 26″ to the south of the core. The ultra-steep spectra of these extended features suggest that they are relics of past AGN activity.
• Two jet components J1 and J2 are detected in the VLBI images, with an apparent superluminal velocity of 1.35 c and an average jet viewing angle of ∼20°.
• The radio light curves show quasi-periodic flares: before 2008, a ∼4-yr cycle dominates; after 2008, when the source is in a low-activity state, a high-frequency harmonic component of ∼2-yr period becomes dominant. The variability characteristics (quasi-periodicity, two periodic signals, and the presence of harmonic relation between them) can be explained by the global acoustic oscillations of the accretion disk. The perturbations occurring in the outer region of the disk propagate inward, leading to modulated changes in the accretion rate and, consequently, to the generation of periodic radio flares and jet ejection.
• Only major flares associated with the fundamental frequency oscillation can produce observable jet components. The two strongest flares occurring in late 2004 and late 2009 coincide with the creation of the jet knots J1 and J2 observed in the VLBI images, respectively.
• The radio flare from late 2009 to mid-2010 can be decomposed into two sub-flares, corresponding to two γ-ray flares, respectively. The 2009 γ-ray flare led the radio flare, and the high-frequency radio flare led the low-frequency flare, suggesting that it originated from an optically thick component, probably in the jet collimation region. The 2010 radio flare, in contrast, occurred simultaneously at 37 and 15 GHz, suggesting that it occurred in an optically thin jet zone.
• The wind or outflow arising from the outer accretion disk forms a cylinder or cone in the nuclear region, whose axis, set by the warped disk, is misaligned with the jet. At a certain distance, the jet flow hits the wind wall, creating an oblique shock that deflects the jet; at the same time, the jet-wind collision leads to the production of γ-ray and radio flares (e.g., those observed in 2010).
III Zw 2 has a hybrid nature of RQ AGN and blazar: in the quiescent state, it is a typical RQ AGN, while in the flaring state it behaves as a blazar. During the intermittent flares, the produced jet knots interact with the accretion disk wind in the broad line region, producing γ-ray and radio flares. The characteristics observed in III Zw 2 may be common to the RI AGN population, in which jets and winds coexist and play important roles on different spatial scales and timescales. Detailed studies of typical individual RI AGN will improve our understanding of the RQ/RL AGN dichotomy and of the structure and dynamics of the AGN nuclear region.
DATA AVAILABILITY
The datasets underlying this article were derived from the public domain in the NRAO archive (project codes: BU013, BA080; https://science.nrao.edu/observing/data-archive) and the Astrogeo archive (project codes: BB023, RDV13, BG219D, UF001B, UG002U; http://astrogeo.org/); the MOJAVE data can be found on the MOJAVE website (https://www.cv.nrao.edu/MOJAVE/sourcepages/0007+106.shtml), and further data are in the MWA archive (https://asvo.mwatelescope.org) and the CSIRO ASKAP Science Data Archive (CASDA, https://research.csiro.au/casda/). The MWA GLEAM-X data is not publicly released, and the calibrated visibility data can be shared on reasonable request to the corresponding authors. Data from the Owens Valley Radio Observatory and the Metsähovi Radio Observatory can be requested from the respective data maintainers. This work was supported by resources provided by the China SKA Regional Centre prototype funded by the Ministry of Science and Technology of China (MOST; 2018YFA0404603). This research has been supported by the National SKA Program of China (2022SKA0120102). S.G. is supported by the CAS Youth Innovation Promotion Association (2021258). The authors acknowledge the use of the Astrogeo Center database maintained by L. Petrov. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This publication makes use of data obtained at the Metsähovi Radio Observatory, operated by Aalto University in Finland. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team (Lister et al. 2018). This work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS) under a contract to Curtin University, administered by Astronomy Australia Limited. We acknowledge Paul Hancock, Gemma Anderson, John Morgan, and Stefan Duchesne for their contributions to the GLEAM-X pipeline, which brought great convenience to our processing. This research has made use of data from the OVRO 40-m monitoring program, which was supported in part by NASA grants NNX08AW31G, NNX11A043G and NNX14AQ89G, NSF grants AST-0808050 and AST-1109911, and private funding from Caltech and the MPIfR. This research has made use of the NASA/IPAC Extragalactic Database (NED) (2019), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.

A. LOW-FREQUENCY RADIO IMAGES

GLEAM-X achieves a typical sensitivity of 1σ ≈ 1.3 mJy beam⁻¹ in the frequency range of 170-231 MHz, which is eight times more sensitive than GLEAM. We downloaded 40 observation snapshots containing III Zw 2 from the MWA archive 3 and processed the data at the China SKA Regional Centre using the pipeline developed by the GLEAM-X team 4 . For each snapshot, we first flagged the bad tiles or receivers and obtained reliable calibration solutions. Next, we flagged potential radio frequency interference. The calibration solutions obtained from the calibrators were then applied to the data. A Briggs robust parameter of +0.5 was set in imaging to maximize sensitivity. We made a deep CLEANed image, followed by source finding, ionospheric de-warping, and flux density scaling to GLEAM in the CLEAN images. The point spread function (PSF) of each snapshot image was then corrected.
In the last step, we combined the astrometrically- and primary-beam-corrected snapshot images into a high signal-to-noise image with reduced noise, allowing the detection of fainter sources and diffuse structures that are not visible in individual snapshot images. Figure A1-a shows the 216-MHz MWA image of III Zw 2, displaying a compact component. The resolution of the image is 63″.1 × 49″.1, and the root-mean-square (rms) noise is 2.1 mJy beam⁻¹ (a factor of ∼4.8 lower than that of the GLEAM image). The rms noise of the III Zw 2 image is 1.7 times higher than the mean value of the GLEAM-X images, due to the lower elevation angle of the MWA when observing III Zw 2 at δ = +11°. III Zw 2 is marginally resolved and shows an elongation along the north-south direction. The source is unresolved at the other, lower frequencies. The GLEAM-X data points themselves can be fitted with a power law with a steep spectral index α = −0.96 ± 0.09. Figure A1-b shows the GMRT image at 150 MHz, revealing a resolved structure along the NE-SW direction with a total extent of ∼48″. The GMRT image of III Zw 2 is from the TIFR GMRT Sky Survey Alternative Data Release (TGSS-ADR1 5 ) (Intema et al. 2017). TGSS-ADR1 covers 90 per cent of the total sky from δ = −53° to δ = +90° with a noise level of ∼5 mJy beam⁻¹ and a resolution of 25″ × 25″/cos(Dec − 19°) at 150 MHz. The total flux density is ∼212 mJy, dominated by the central core.
Besides the core C, there are extensions toward the northeast (NE) and toward the south (S).
We obtained the ASKAP image of III Zw 2 from the Rapid ASKAP Continuum Survey (RACS), which is a shallow all-sky (covering the entire sky south of declination δ = +41°) pilot survey for future multi-year surveys with the full-scale ASKAP (McConnell et al. 2020). The observations covering the III Zw 2 position began on 2019 April 21 and lasted three weeks. The RACS has an instantaneous bandwidth of 288 MHz and is centered at 888 MHz. The angular resolution of the resulting image using natural weighting is 14″.20 × 12″.93. The distance between the NE and SW lobes is ∼33″, and the image resolution enables us to resolve the jet structure. The rms noise in the image is 0.33 mJy beam⁻¹, which is consistent with the mean rms of the RACS images. The ASKAP image is shown in Fig. A1-c, which reveals much richer details than the MWA image: the main jet body is elongated along the northeast-southwest (NE-SW) direction and consists of the core C and the NE and SW lobes. The ASKAP image shows a very similar structure to the recently published uGMRT image in Silpa et al. (2020), although the two images are observed at different frequencies. As the observed frequency of the ASKAP image is higher than that of the uGMRT image, the emission from the outermost extended components beyond the NE and SW lobes (labeled as N and S in Fig. A1-c) is fainter due to their steep-spectrum nature. The separation between the N and S lobes is ∼55″ (∼98 kpc) 6 , larger than the size of the previously detected structure between the NE and SW lobes in VLA images at GHz frequencies (Brunthaler et al. 2005). [Figure A1-a caption: contour levels are (1, 1, 1.4, 2, 2.8, 4, 5.6, 8, 11.2, 16, 23, 32) mJy beam⁻¹; the rms noise is 2.14 mJy beam⁻¹; the beam size is 63″.1 × 49″.1 with the major axis along a position angle of 146°.] In the 685-MHz uGMRT map, the southern extension (labeled ML in Silpa et al. 2021) is much longer than in our Figure A1-c. The total flux density estimated from the ASKAP images is 92.5 mJy, of which the main body (C+NE+SW) accounts for ∼86.3 mJy, while the remaining 6.7% (6.2 mJy) of the flux density comes from the sum of the N and S lobes.
The power-law portion of the spectrum at low frequencies originates from large-scale extended emission structures in the outermost N and S lobes, which are evident in the MWA image of III Zw 2; parts of these structures are also seen in the uGMRT (Silpa et al. 2021) and ASKAP images. These steep-spectrum features correspond to aged structures, which become very faint above 1 GHz (e.g. Callingham et al. 2017).
The flux density from optically thin synchrotron emission is given by eqn. A1 (Rybicki & Lightman 1986; Ghisellini 2013), where R is the size of the extended emission structure, D_L is the luminosity distance, K₀ is a normalization factor, B is the magnetic field strength, ν is the observing frequency, p is the energy index, and c₁ and c₂ are radiative constants (Pacholczyk 1970). Assuming a power-law distribution of synchrotron-emitting electron energies, N(E) dE = K₀ E⁻ᵖ dE, and energy equipartition between the magnetic field and particle kinetic energy densities, the normalization K₀ can be evaluated similarly to eqn. E11. The synchrotron frequency corresponding to the minimum injected Lorentz factor of the emitting electrons is given by eqn. A2, with γ_m = ((p − 2)/(p − 1))(m_p/m_e) ε_e, in which ε_e is the fraction of the total energy density in the emitting region carried by the particle kinetic energy. Using eqn. A2 and the normalization in eqn. A1, the magnetic field strength in this region can be estimated (eqn. A3), assuming ε_e = 0.1 and an index p = 1 − 2α_low = 2.94 based on the inferred spectral index α_low ≈ −0.97; this is indicative of emission from a weakened decelerating shock produced by Fermi acceleration of electrons (e.g. Blandford & Eichler 1987; Jones & Ellison 1991; Frank et al. 2002). The estimated µG-level magnetic field strength is consistent with similar estimates for this source from other studies (Silpa et al. 2021) and for AGN in cluster environments (Müller et al. 2021). Assuming a spherical volume, the total energy content in the synchrotron-emitting extended region is given by eqn. A4, where U_e = ε U_B, U_B = B²/(8π), and ε = ε_e/ε_B, using B from eqn. A3. From eqn. A4 relating to the total energy E_ext, with the mean flux density measured in the MWA bands (88-215 MHz), S_ν(ext) = 0.246 mJy, a spectral index α_low = −1.15 for the relics, and a central frequency ν = 151.6 MHz, the radio luminosity of the extended emission is L_R,ext ≈ 4πD_L² ν S_ν(ext) (1 + z)^(−1−α_low) = 6.18 × 10³⁶ erg s⁻¹. Using the empirical relations in eqns. E17 and E18, the associated luminosity of the extended jet is L_ext = 5.19 × 10³⁹ erg s⁻¹.
B. HIGH FREQUENCY VLBI IMAGING OF THE PC-SCALE JET
The source structure of III Zw 2 on parsec scales is revealed by VLBI imaging data, which include archival data from the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE) program (Lister et al. 2018) and archival data obtained from the Astrogeo database. Details of the VLBI data are presented in Table B1. The MOJAVE data of III Zw 2 were observed at 15 GHz over 25 sessions from 1995 July 28 to 2013 June 2. The Astrogeo data were observed simultaneously at the dual frequencies of 2.3 and 8.4 GHz, enabling the determination of 2.3-8.4 GHz spectral indices for III Zw 2. Since the source is unresolved in the 2.3 and 8.4 GHz images, the analysis of the jet kinematics is based only on the 15 GHz VLBI data.
All of these archival VLBI data had already been calibrated, so we only made a few iterations of self-calibration in the DIFMAP software package (Shepherd et al. 1995) to eliminate some residual phase errors and to increase the dynamic range of the images. Model fitting was performed using the MODELFIT program in DIFMAP. Lister et al. (2019) studied the parsec-scale jet kinematics of 409 radio-bright AGN, including III Zw 2, and presented model fitting results for III Zw 2 from epoch 1995 July 28 to epoch 2013 June 2. In their model fitting, (1) the source was detected with only a single core before 2011 January 11, and (2) the distance of the fitted jet component from epoch 2000 July 22 to 2006 June 15 is less than 0.32 mas, i.e., less than half of the minor axis of the synthesized beam, so these components are practically unresolvable. These epochs were therefore excluded from the jet kinematics analysis. The model fitting parameters are given in Table B2. The uncertainties of the parameters are first estimated from the equations given in Fomalont (1999), which only take into account the fitting errors. In practical observations, the flux density error also includes a certain level of uncertainty in the flux scale calibration through error propagation; for the 15-GHz VLBA, this systematic error is typically about 5%. Both total intensity (Stokes I) and linear polarization images were created from the MOJAVE data. After self-calibration in DIFMAP, we separately created images using the Stokes I, Q and U data. We then combined the Stokes Q and U images to produce the linear polarization intensity p, which is the root of the sum of squares of the Q and U components, i.e., p = √(Q² + U²). The fractional polarization was calculated as the ratio of the polarized flux density to the Stokes I flux density, p/I. The derived fractional polarizations are consistent with the values reported on the MOJAVE webpage. Snapshot-mode Astrogeo data have short integration times and sparse (u,v) coverage, so deconvolution and model fitting are affected by strong sidelobes caused by the incomplete (u,v) coverage. For this reason, the uncertainty in component size is taken as the square root of the quadratic sum of the corresponding statistical error and the fitting error. The position error was then estimated as half of the component size error.
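The polarization bookkeeping described above is simple to reproduce. A minimal sketch, assuming Stokes I, Q, and U images supplied as same-shape arrays (the array values below are illustrative, not data from the paper):

```python
import numpy as np

def linear_polarization(stokes_i, stokes_q, stokes_u, i_clip=0.0):
    """Return (p, m): polarized intensity p = sqrt(Q^2 + U^2) and
    fractional polarization m = p / I, masking pixels with I <= i_clip."""
    p = np.hypot(stokes_q, stokes_u)
    m = np.full_like(p, np.nan)
    np.divide(p, stokes_i, out=m, where=stokes_i > i_clip)
    return p, m

# Toy 2x2 "images":
I = np.array([[1.0, 0.5], [0.8, 0.0]])
Q = np.array([[0.06, 0.01], [0.03, 0.0]])
U = np.array([[0.08, 0.02], [0.04, 0.0]])
p, m = linear_polarization(I, Q, U)
print(p)   # polarized intensity per pixel
print(m)   # fractional polarization; NaN where I is blanked
```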
The present paper is focused on the 2009-2010 major flare and its associated jet properties, so we only use the MOJAVE data after 2004, comprising the 13 epochs listed in Table B1. The (u,v) coverage, image sensitivity, and resolution of these VLBI data are all in good agreement; therefore, the systematic errors between the fitted parameters are small. In six of the 13 epochs, the jet is not distinguishable, and these data were not used in the jet kinematics analysis.
On pc scales, VLBI images of III Zw 2 reveal either a single core or a "core + compact jet" structure. The compact core dominates the total flux density in all images. The jet extends to the west and is relatively weak but well distinguished from the core in 7 epochs, allowing us to determine its kinematic properties. We double-checked the model fitting reported by the MOJAVE team and only adopted the high-confidence jet models: if the signal-to-noise ratio of a jet component is lower than 3, or the jet is indistinguishable from the core, it is not used in the kinematic analysis. Figure 2 shows the variation of the core-jet separation with time and the variation of the jet position angle with time. It is clear that these jet knots do not belong to a single component but are two separate jet knots, labeled J1 and J2 in the plot (corresponding to components #1 and #4 in Lister et al. 2019), each of which follows a ballistic trajectory along a different position angle. We performed linear regression fits to the position-time relation of J1 and J2 and obtained jet proper motions of 0.24 ± 0.02 mas yr⁻¹ and 0.22 ± 0.01 mas yr⁻¹, respectively. These convert to jet transverse speeds of 1.35 ± 0.13 c (J1) and 1.21 ± 0.07 c (J2). The derived jet proper motions are in good agreement with previous studies (Brunthaler et al. 2000) based on 43-GHz VLBI observations in 1998. To distinguish it from our jet components, we name the jet detected by Brunthaler et al. the "1998 jet". Extrapolating the 1998 jet to epoch 2006 (the first epoch of the MOJAVE data used in this paper), we find that the component should be at a distance of about 0.525 mas. However, it is not detected in the 2006 VLBA image, suggesting that this component has dimmed considerably due to adiabatic losses.
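For reference, the proper-motion-to-apparent-speed conversion used for J1 and J2 is sketched below. The luminosity distance D_L = 403.1 Mpc at z = 0.089 quoted later in the paper is assumed here; since the paper's exact cosmology is not restated in this excerpt, the output can differ from the quoted 1.35c and 1.21c at the few-percent level.

```python
import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)   # milliarcseconds -> radians
PC_CM = 3.0857e18                                # parsec in cm
LY_CM = 2.9979e10 * 3.1557e7                     # light-year in cm

def beta_app(mu_mas_per_yr, d_l_mpc, z):
    """Apparent transverse speed in units of c:
    beta_app = mu * D_L / (c * (1 + z))."""
    d_l_cm = d_l_mpc * 1e6 * PC_CM
    v_cm_per_yr = mu_mas_per_yr * MAS_TO_RAD * d_l_cm
    return v_cm_per_yr / (LY_CM * (1.0 + z))

for label, mu in [("J1", 0.24), ("J2", 0.22)]:
    print(f"{label}: beta_app = {beta_app(mu, 403.1, 0.089):.2f} c")
```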
C. FLUX DENSITY VARIABILITY AND TIME SERIES ANALYSIS
The single-dish light curve data used in this paper are from the archives of the 14-meter Metsähovi radio telescope in Finland and the 40-meter telescope of the Owens Valley Radio Observatory (OVRO) in the US.
Metsähovi observations of III Zw 2 were made at 22 and 37 GHz over the period 1986-2020. The 22- and 37-GHz light curves from 1984 to 1998 were published in Falcke et al. (1999), and the Metsähovi data from 1986 to 2019 were published in Chamani et al. (2020). In addition to these data, we also include the new data from 2019 onward. A detailed description of the data reduction and analysis is given in Teräsranta et al. (1998). Under good observing conditions, the detection limit of the Metsähovi telescope at 37 GHz is around 0.2 Jy. Uncertainties in the flux densities include the contribution from the root mean square (rms) of the measurements and the errors in the absolute flux density calibration.
The radio monitoring program with the 40-m telescope at the OVRO includes over 1500 sources in the northern sky (δ > −20°) (Richards et al. 2011). Each source is observed twice per week. The minimum flux density detected is about 4 mJy, and a typical measurement uncertainty is ∼3%. The high cadence and high sensitivity of the monitoring greatly facilitate the study of the variability properties of radio sources on timescales of months and years. The OVRO monitoring data of III Zw 2 are shown in Figure 3. [Table B1 caption: Column 5 gives the restoring beam in natural weighting; Columns 6 and 7 present the peak flux density and rms noise in the images, respectively.] The integrated flux densities from the 15 GHz VLBA data are superimposed on the OVRO light curve, showing a good match between the two sets of flux densities at adjacent epochs. This suggests that the steep-spectrum extended structures, including the kpc-scale jet and lobes seen in the low-frequency images, are barely detectable at 15 GHz and above. The time series analysis is carried out using two methods, the Fourier periodogram (Scargle 1989; Vaughan 2005; An et al. 2013; Mohan & Mangalam 2014) and the wavelet analysis (Torrence & Compo 1998; An et al. 2013; Mohan et al. 2016b). For both methods, the light curves are first pre-processed: the unevenly sampled data with small gaps are converted to a fully even sampling by linear interpolation and re-sampling. The normalized Fourier periodogram P(f_j) is evaluated as (Vaughan 2005; Mohan & Mangalam 2014; Mohan et al. 2015) $P(f_j) = \frac{2\Delta t}{N\bar{x}^2}\,|F(f_j)|^2$, where Δt is the sampling time step size, x̄ is the mean of the light curve x(nΔt) of length N, and F(f_j) is the Fourier transform of x evaluated at the frequencies f_j = j/(NΔt) with j = 1, ..., N/2 − 1 (up to the Nyquist frequency). Since astrophysical processes typically produce red noise in the light curve, the periodogram P(f_j) will be characterized by a power-law behavior, especially at low temporal frequencies. This can be captured by parametric model fits to P(f_j) to estimate the underlying power spectral density (PSD). Here, we use two models: a power law, $P(f_j) = A f_j^{\alpha} + C$, where A is the normalized amplitude, α is the power-law index, and C is the ambient noise level; and a bending power law, $P(f_j) = A f_j^{\alpha_l}\left[1 + (f_j/f_b)^{\alpha_l - \alpha}\right]^{-1} + C$, where f_b is a break frequency marking a transition in slopes, the power-law indices are α for f_j > f_b and α_l for f_j < f_b, and C is the ambient noise level. The fitting of the Fourier periodogram with the above parametric models is carried out using the maximum likelihood estimator. The methodology and the estimation of parameters and their errors are discussed in Mohan & Mangalam (2014) and Mohan et al. (2015). After accounting for model uncertainties, statistical significance testing is carried out to identify outlying peaks in the fit residuals (based on an expected χ² statistical distribution), which are potential candidates for quasi-periodic oscillations in the light curve.
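The periodogram pipeline described above can be sketched as follows: interpolation onto an even grid, the mean-normalized periodogram, and a maximum-likelihood (Whittle) fit of the power-law-plus-constant model P(f) = A f^α + C. The synthetic input and starting values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def periodogram(t, x, dt):
    """Evenly re-sample by linear interpolation, then return the
    mean-normalized periodogram P(f_j) = 2*dt/(N*xbar^2) |F(f_j)|^2."""
    grid = np.arange(t.min(), t.max(), dt)
    xg = np.interp(grid, t, x)
    n = len(xg)
    fj = np.fft.rfftfreq(n, d=dt)[1:]            # f_j = j / (N dt)
    F = np.fft.rfft(xg - xg.mean())[1:]
    return fj, 2.0 * dt / (n * xg.mean() ** 2) * np.abs(F) ** 2

def fit_power_law(fj, P):
    """Whittle maximum-likelihood fit of P(f) = A f^alpha + C."""
    def neg_log_like(theta):
        log_a, alpha, log_c = theta
        model = 10.0 ** log_a * fj ** alpha + 10.0 ** log_c
        return np.sum(np.log(model) + P / model)
    return minimize(neg_log_like, x0=[-1.0, -1.0, -3.0],
                    method="Nelder-Mead").x

# Toy usage with a synthetic, irregularly sampled light curve:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 500))
x = 1.0 + 0.1 * rng.standard_normal(500)
fj, P = periodogram(t, x, dt=2.0)
print(fit_power_law(fj, P))    # [log10 A, alpha, log10 C]
```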
The statistical significance of all peaks is assessed based on a procedure involving Monte Carlo simulations (Mohan & Mangalam 2014; Mohan et al. 2015), carried out using the algorithm of Timmer & Koenig (1995). The best-fit model and associated parameters are used as trial values to simulate 5000 realizations of the periodogram, oversampled in duration and temporal frequency and then re-sampled at the original frequencies. A mean periodogram Ĩ(f_j) is constructed from the simulations and re-scaled to match the variability properties of the original periodogram; this is the closest available estimate of the underlying PSD. The statistical significance of the periodogram ordinates in any frequency bin is evaluated under the assumption that the light curve consists of randomly distributed data points (i.e., no periodic behavior). The residuals from the fitting, P(f_j)/Ĩ(f_j), are then χ²₂/2-distributed (Chatfield 2016), with a conditional probability $p[P(f_j)\,|\,\tilde{I}(f_j)] = \tilde{I}(f_j)^{-1}\, e^{-P(f_j)/\tilde{I}(f_j)}$. The cumulative distribution function of the PSD ordinates is then the integral of the χ²₂ distribution, i.e., of the gamma density function Γ(1, 1/2) = exp(−x/2)/2. Specifying a level of statistical significance (1 − ϵ) in this integral helps to identify outliers (quasi-periodic signals) that may be present in the tail of the distribution. We set a threshold of ϵ = 0.05 (95% level of significance) to identify quasi-periodic signals in the light curves.
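A minimal sketch of the Timmer & Koenig (1995) simulator at the core of this Monte Carlo procedure; the example PSD parameters are placeholders:

```python
import numpy as np

def tk95_light_curve(psd_func, n, dt, rng):
    """Timmer & Koenig (1995): draw Fourier components as complex
    Gaussians with variance set by the model PSD, then invert."""
    f = np.fft.rfftfreq(n, d=dt)[1:]
    s = psd_func(f)
    re = rng.standard_normal(f.size) * np.sqrt(s / 2.0)
    im = rng.standard_normal(f.size) * np.sqrt(s / 2.0)
    if n % 2 == 0:
        im[-1] = 0.0                  # the Nyquist component must be real
    spec = np.concatenate(([0.0], re + 1j * im))
    return np.fft.irfft(spec, n=n)    # zero-mean synthetic light curve

rng = np.random.default_rng(1)
# Illustrative best-fit model P(f) = A f^alpha + C:
lc = tk95_light_curve(lambda f: 1e-3 * f ** -1.5 + 1e-4, 1024, 7.0, rng)
print(lc.shape, lc.std())
```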
The wavelet analysis is employed here in a complementary manner to probe quasi-periodic signals in the light curves and their locations (to infer the total duration and number of cycles). The two-dimensional wavelet power spectrum is a function of the wavelet scale (which can be expressed in units of the sampling wavelength or period) and the time window being sampled, and is evaluated as |W(n, s)|² (Mohan et al. 2016b), where W(n, s) is the wavelet transform of the evenly sampled light curve x(nΔt) at times nΔt and scales s. The wavelet transform is the inverse Fourier transform of the product of F(2πf_j), the Fourier transform of x evaluated at the circular frequencies 2πf_j = 2πj/(NΔt) with j = 1, ..., N, and Ψ*(2πsf_j), the complex conjugate of the Fourier transform of the wavelet sampling kernel function. For a continuous wavelet transform, a commonly used sampling kernel is the Morlet wavelet function (Grossmann et al. 1989), $\psi(t) = \pi^{-1/4} e^{i\omega_0 t} e^{-t^2/2}$, where ω₀ = 6 is a frequency parameter characterizing the wavelet shape and t is the time parameter. In the frequency domain, $\Psi(2\pi s f_j) = \pi^{-1/4} e^{-(2\pi s f_j - \omega_0)^2/2}$. The wavelet scales s ≈ 1.03/f_j for the Morlet function (Torrence & Compo 1998) are hence in near correspondence with the sampling frequencies.
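The wavelet power spectrum can be sketched by evaluating W(n, s) in the Fourier domain with the Morlet kernel (ω₀ = 6), following Torrence & Compo (1998). Overall normalization constants and the cone of influence are omitted for brevity, and the toy signal is an assumption:

```python
import numpy as np

def morlet_wavelet_power(x, dt, scales, omega0=6.0):
    """|W(n, s)|^2 via the Fourier domain (Torrence & Compo 1998);
    normalization constants and the cone of influence are omitted."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # circular frequencies
    xhat = np.fft.fft(x - x.mean())
    power = np.empty((len(scales), n))
    for k, s in enumerate(scales):
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - omega0) ** 2)
        psi_hat = psi_hat * (omega > 0)             # analytic (one-sided)
        power[k] = np.abs(np.fft.ifft(xhat * psi_hat)) ** 2
    return power                                    # shape (scales, times)

x = np.sin(2.0 * np.pi * np.arange(512) / 64.0)       # toy periodic signal
scales = 1.03 / np.fft.rfftfreq(512, d=1.0)[1:][::8]  # s ~ 1.03 / f_j
P = morlet_wavelet_power(x, 1.0, scales)
print(scales[np.argmax(P.mean(axis=1))])  # strongest near the 64-sample period
```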
In the current work, the following additional features have been implemented: the use of a cone of influence to account for the cyclic behavior of the sampling kernel, especially at low temporal frequencies (Torrence & Compo 1998); the use of a Hann window function to smooth the noisy features in the power spectrum; and the identification of contiguous features in the power spectrum that may be statistically significant, in anticipation of measuring their total duration, number of cycles, and time evolution. These improvements narrow the search window (and the consequent computational cost) in the identification of statistically significant signals, and considerably improve the contrast between signal and noise. The statistical significance of contiguous signals detected in the wavelet analysis employs a two-stage strategy. In the first stage, the algorithm of Timmer & Koenig (1995) and the expected χ²₂ statistics of periodogram ordinates (assuming that the light curve ideally consists of random Gaussian noise) are employed to simulate a large number of realizations of the periodogram. The best-fit power-law model of the form I(f_m) = A f_m^α + C, with its associated parameters, is used for the trial values. The index α is varied in the range from the best-fit value to 0.0 in steps of −0.2; in each case, 1000 realizations of the periodogram are simulated, oversampled in duration and temporal frequency and then re-sampled at the original frequencies. In each realization, the periodogram is inverse Fourier transformed to obtain a synthetic light curve with statistical and variability properties similar to those of the original light curve; the total number of simulated light curves is typically 20000. In the second stage, their individual wavelet power spectra are evaluated. For each simulated wavelet power spectrum, the mean of all powers at a given scale is used to estimate the global wavelet power spectrum GWPS(s), which corresponds to a window-function-smoothed version of the Fourier periodogram. The candidate signals in the GWPS of the original light curve are compared with their simulated counterparts. The number of times p that a candidate signal in the original GWPS exceeds the values in the simulations (totaling Q) at a given wavelet scale is measured in terms of the statistical significance (1 − p/Q).
D. INTEGRATED RADIO SPECTRUM AND FITTING
We plot the radio spectrum of III Zw 2 from 72 MHz to 37 GHz in Figure 4. The red squares below 230 MHz are taken from the MWA GLEAM-X survey (Hurley-Walker et al. 2022), and the green diamond is from the GMRT observation (Intema et al. 2017); these reveal the extended radio emission characterized by a steep spectrum. The slate-blue data point is from the ASKAP observation at 888 MHz. The magenta solid circle is from the recent uGMRT observation (Silpa et al. 2020) on 2018 November 23. In addition to these low-frequency data, we also include data points at GHz frequencies observed at epochs close to the MWA and uGMRT observations: 37 GHz from Metsähovi (blue left-pointing triangle), 15 GHz from OVRO (navy right-pointing triangle), and 8.4 and 2.3 GHz from Astrogeo VLBI (purple diamond). From Figure 3 we find that the core dominates the total flux density at GHz frequencies.
The spectrum shows different characteristics above and below 685 MHz: at 685 MHz and above, the core dominates the emission, showing an inverted spectrum; at frequencies below 685 MHz, the flux density from the core is substantially reduced toward the low-frequency end due to increasing synchrotron self-absorption, and the extended jets and lobes become increasingly dominant. The two-component radio spectrum spanning 0.072-37 GHz is subjected to a weighted least-squares fit (the measurement errors of the flux densities are used as weights) using the function of eqn. D9, in which the first two terms (Aν^{α_low} + B) form a power-law spectrum and the last term describes a spectrum of self-absorbed synchrotron radiation (Pacholczyk 1970; Türler et al. 1999): $S(\nu) = A\nu^{\alpha_{\rm low}} + B + F_m \left(\frac{\nu}{\nu_m}\right)^{\alpha_{\rm thick}} \frac{1 - e^{-\tau_m (\nu/\nu_m)^{\alpha - \alpha_{\rm thick}}}}{1 - e^{-\tau_m}}$ (D9),
where the parameters of the low-frequency power-law spectrum are the amplitude A (mJy), the spectral index α_low, and a baseline flux density B (mJy), and the parameters of the high-frequency section are the amplitude F_m (mJy), the frequency ν_m (GHz) of the transition from optically thick to thin emission, the optically thick spectral index α_thick, the optically thin spectral index α, and the optical depth τ_m, expressed in terms of α_thick and α. In the fitting, performed with the Markov chain Monte Carlo (MCMC) method, α_thick is fixed at 2.5, corresponding to the canonical synchrotron self-absorption case, and the other parameters are constrained within reasonable ranges: 0.0 < A ≤ 0.8 Jy, −1.5 ≤ α_low ≤ −0.4, 0.0 < B ≤ 0.6 Jy, 0.0 < F_m ≤ 0.5 Jy, 7.0 ≤ ν_m ≤ 15.0 GHz, and −0.4 ≤ α ≤ 0.0. This yields A = 27^{+12}_{−9} mJy, α_low = −0.97 ± 0.21, B = 41^{+8}_{−12} mJy, F_m = 146^{+11}_{−9} mJy, ν_m = 11.24^{+2.01}_{−1.25} GHz, and α = −0.20^{+0.10}_{−0.12}. In Figure 4 we also plotted the flux densities of the NE and SW lobes obtained from the VLA images (Brunthaler et al. 2005) and fitted the NE and SW lobes (represented by blue open circles and red open squares, respectively) with power-law functions. We then calculated the flux densities of the NE and SW lobes at the MWA observing frequencies (i.e., 88, 118, 154, and 200 MHz) by extrapolating the power-law fits. Next, we subtracted the extrapolated flux densities of the NE and SW lobes from the measured values to estimate the flux densities of the large-scale extended emission contributed mainly by the N and S lobes. Finally, we calculated the spectral index of the extended outer lobes N+S as α = −1.09 ± 0.12, which is much steeper than that of the inner lobes and comparable to the spectral indices of recently observed radio relics at low frequencies, indicating that the outer lobes are more likely to be dying relics, dominated by radiation from aged relativistic electrons.
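A sketch of the model function of eqn. D9 with the parameter bounds quoted above is given below. The explicit relation used for τ_m follows the common Türler et al. (1999) parameterization and should be treated as an assumption of this sketch, as should the use of a bounded least-squares fit in place of the paper's MCMC.

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA_THICK = 2.5   # fixed: canonical synchrotron self-absorption

def spectrum(nu_ghz, A, alpha_low, B, F_m, nu_m, alpha):
    """Eqn. D9: low-frequency power law + baseline + SSA component.
    The tau_m relation below is the common Tuerler et al. (1999) form
    and is an assumption of this sketch. Units: mJy, GHz."""
    tau_m = 1.5 * (np.sqrt(1.0 - 8.0 * alpha / (3.0 * ALPHA_THICK)) - 1.0)
    x = nu_ghz / nu_m
    thin = 1.0 - np.exp(-tau_m * x ** (alpha - ALPHA_THICK))
    ssa = F_m * x ** ALPHA_THICK * thin / (1.0 - np.exp(-tau_m))
    return A * nu_ghz ** alpha_low + B + ssa

# Stand-in for the paper's MCMC: a bounded weighted fit with the quoted
# parameter ranges (converted to mJy), given nu (GHz), s, s_err (mJy):
# popt, _ = curve_fit(spectrum, nu, s, sigma=s_err,
#                     p0=[30.0, -1.0, 40.0, 150.0, 11.0, -0.2],
#                     bounds=([0.0, -1.5, 0.0, 0.0, 7.0, -0.4],
#                             [800.0, -0.4, 600.0, 500.0, 15.0, 0.0]))

# Evaluated at the best-fit values quoted in the text:
print(spectrum(np.array([0.15, 0.888, 11.24, 37.0]),
               27.0, -0.97, 41.0, 146.0, 11.24, -0.20))
```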
E. EMISSION PROPERTIES OF THE PC -KPC SCALE JET
The synchrotron emission spectrum is assumed to be shaped by a power-law energy distribution of the emitting relativistic electrons, N(E) dE = K E⁻ᵖ dE, for energy E, normalization K, and energy index p > 2. The normalization and magnetic field strength B are expressed in terms of the radial distance r from the supermassive black hole as K = K₀(r/r₀)⁻² and B = B₀(r/r₀)⁻¹, with scaling constants K₀ and B₀ at a fiducial distance r₀ (Zdziarski et al. 2015; Mohan et al. 2015; Agarwal et al. 2017). With this, the particle kinetic energy density is $U_e = \int_{E_{\min}}^{\infty} E\,N(E)\,dE = \frac{K E_{\min}^{2-p}}{p-2}$. Here ε_e denotes the fraction of the total energy density in the particle kinetic energy, and E_min is the minimum energy required to accelerate electrons (assuming that they are the dominant constituents of the jet) to relativistic energies, here taken to be the rest-mass energy E_min = 0.51 MeV. Assuming equipartition of the total energy density between that in the magnetic fields, U_B = B²/(8π), and in the particle kinetic energy U_e, i.e. U_e/ε_e = U_B/ε_B, the normalization constant is $K_0 = \frac{\epsilon\,(p-2)\,E_{\min}^{p-2}\,B_0^2}{8\pi}$ (eqn. E11), where ε_B is the fraction of the total energy density in the magnetic fields and ε = ε_e/ε_B. The total energy density is then U_tot = U_e + U_B = (1 + ε)B²/(8π). The jet luminosity attributable to synchrotron emission in a region of size ℓ = r sin θ_j (where θ_j is the jet half-opening angle) downstream of the jet base is (Ghisellini & Tavecchio 2010) $L_{\rm jet} = \pi \ell^2 \beta_j \Gamma_j^2 c\,U_{\rm tot} = \frac{\beta_j \Gamma_j^2 c}{8} \sin^2\theta_j\,(B_0 r_0)^2 (1 + \epsilon)$, where β_j and Γ_j are the bulk velocity and Lorentz factor of the jet. The optically thick absorption coefficient is (Rybicki & Lightman 1986; Ghisellini 2013) $\alpha_\nu = \frac{\pi^{1/2} e^2 K}{8 m_e c} \left(\frac{eB}{2\pi m_e c}\right)^{(p+2)/2} (\nu')^{-(p+4)/2} f(p)$ (eqn. E13), where e is the electron charge, ν′ is the emission frequency in the source rest frame, and f(p) is expressed in terms of p and the Gamma function as $f(p) = 3^{(p+1)/2}\,\frac{\Gamma\!\left(\frac{3p+22}{12}\right)\Gamma\!\left(\frac{3p+2}{12}\right)\Gamma\!\left(\frac{p+6}{4}\right)}{\Gamma\!\left(\frac{p+8}{4}\right)}$.
The optical depth for the emitting region is τ_ν′ = α_ν ℓ. The condition τ_ν′ = 1 signifies the transition of the emitting region from optically thick to thin, and the associated radial distance corresponds to the location of the emitting core. The associated synchrotron self-absorption frequency is ν′ = ν(1 + z)/δ, where ν is the frequency in the observer frame, the factor (1 + z) accounts for the cosmological redshift of a source at redshift z, and the Doppler factor δ accounts for relativistic beaming. Using eqns. E11, E13 and E14 in the above condition gives the radial distance of the self-absorbed core as a function of frequency (eqn. E16), with the familiar r ∝ ν⁻¹ scaling expected for a self-absorbed radio core (Lobanov 1998).
The mean flux density of the radio core at 15 GHz from the VLBI data is S_ν(C) = 907.15 ± 90.87 mJy (see Table B2). The associated radio luminosity is L_R = 4πνD_L² S_ν(C) (1 + z)^(−1+α) = 2.37 × 10⁴² erg s⁻¹ for ν = 15 GHz, D_L = 403.1 Mpc, and z = 0.089. We use empirical relations to estimate the total jet luminosity (radiative L_jet,rad and kinetic L_jet,kin components, associated with the emission and with the acceleration of the baryonic constituents of the jet, respectively) from the radio luminosity (Foschini 2014; An et al. 2020): log L_jet,rad = 12.00 + 0.75 log L_R (E17) and log L_jet,kin = 6.00 + 0.90 log L_R (E18).
The total jet luminosity is L_jet = L_jet,rad + L_jet,kin = 1.97 × 10⁴⁴ erg s⁻¹. Jet properties are estimated from the VLBI 15 GHz data points during the flaring phases (δ > 4.3, the median Doppler factor). These include an average δ̄ = 16.9 and an associated Γ = 8.5 based on Γ = (β_app² + δ̄² + 1)/(2δ̄). The jet half-opening angle θ_j = 1/Γ = 6.°7 is similar to the lower limit estimated in Chamani et al. (2021). With the estimated L_jet, p = 2.3, the assumption of equipartition in the energy density between that in the magnetic fields and the particle kinetic energy with ε = 1, and the above physical properties, the radial distance of the self-absorbed core from eqn. E16 is r_SSA ≈ 435 r_G (where r_G = GM_•/c² = 1.47 × 10¹³ cm is the gravitational radius for a SMBH of 1.84 × 10⁸ M_⊙; Grier et al. 2012). Using this in eqn. E13 (r₀ = r_SSA), the associated magnetic field strength is B₀ = 13.8 G. For the B ∝ r⁻¹ scaling relation, this corresponds to B(r = 1 pc) = 52.8 mG. This estimate is consistent with the core-shift based measurement of 60 mG and is of a similar order of magnitude to the SSA-based measurement of 20 mG (Chamani et al. 2021).
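The bookkeeping in this paragraph can be checked directly from the quoted inputs; the short script below reproduces L_R, the empirical jet luminosities of eqns. E17 and E18, and the Lorentz factor and half-opening angle.

```python
import numpy as np

PC_CM = 3.0857e18
d_l_cm = 403.1e6 * PC_CM                    # D_L = 403.1 Mpc
nu, s_nu, z, alpha = 15e9, 907.15e-26, 0.089, -0.20   # cgs flux density

L_R = 4.0 * np.pi * nu * d_l_cm ** 2 * s_nu * (1 + z) ** (-1 + alpha)
L_rad = 10.0 ** (12.00 + 0.75 * np.log10(L_R))   # eqn. E17
L_kin = 10.0 ** (6.00 + 0.90 * np.log10(L_R))    # eqn. E18
print(f"L_R   = {L_R:.2e} erg/s")                # ~2.4e42
print(f"L_jet = {L_rad + L_kin:.2e} erg/s")      # ~2.0e44

delta_bar, beta_app = 16.9, 1.35
Gamma = (beta_app ** 2 + delta_bar ** 2 + 1.0) / (2.0 * delta_bar)
print(f"Gamma = {Gamma:.1f}, theta_j = {np.degrees(1.0 / Gamma):.1f} deg")
# -> Gamma ~ 8.5 and theta_j ~ 6.7 deg, matching the quoted values.
```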
F. DECOMPOSITION OF THE MAJOR FLARING PHASE DURING 2009-2010 AND ORIGIN
AGN flares in generalized shock models are described by a fast-rising and slowly declining pattern (Valtaoja et al. 1999), which can be modeled with exponential functions. However, the rising phase of the flux of III Zw 2 does not exhibit a significantly shorter timescale than the declining phase; for example, the rise of the 1998-1999 flare of III Zw 2 is even slower than its decay. Several lower-amplitude flares were observed prior to the 2009-2010 flare peak, similar to the 1999 and 2004 flares. On the one hand, these smaller flares did not form jets observable in VLBI images; on the other hand, the opacity of the core and the limited imaging resolution hindered the detection of the corresponding fine structural changes. The superposition of these sub-flares leads to a slow increase in flux density before the peak of the major flare. It is therefore difficult to obtain a 1:1.3 ratio between the rising and declining timescales in III Zw 2 (as found in many other blazars by Valtaoja et al. 1999). For this reason, we did not fix the decay-to-rise ratio in the exponential functions used to model the III Zw 2 flares.
In addition, two γ-ray flares were found during the 2009-2010 flaring period: one associated with the peak of the 2009 flare, and the other in mid-2010. This motivates us to decompose the light curves with two flare components, as an approximate description of the primary 2009 flare and the subsequent 2010 flare, respectively (Figure F3). We note that the model curves shown in Figure F3 are not obtained by least-squares fitting to the observed data, but by selecting the model curves with the minimum residuals from among those obtained with different parameter combinations.
In the main text, we inferred a correlation between the γ-ray and prominent radio bursts, and that the flares are directly connected to the production of jet knots. These can be explained by the shock-in-jet model developed for blazars (Marscher et al. 2008). Where, then, did these events occur?
From Figure 2 we find that the jet knots J1 and J2 move along ballistic trajectories. Assuming that the jet speed remains constant, we can trace back to obtain the position of jet knot J2 at the moments of the γ-ray and radio flares. These are, in sequence: 0.041 mas (2009 γ-ray flare), 0.096 mas (2009 radio flare), 0.163 mas (2010 γ-ray flare), and 0.205 mas (2010 radio flare). The mass of the SMBH in III Zw 2 is (1.84 ± 0.27) × 10⁸ M_⊙ (Grier et al. 2012); therefore, its gravitational radius is R_g = 9 × 10⁻⁶ pc. The projected distance corresponding to the first γ-ray flare in 2009 is 7.6 × 10³ R_g in units of the gravitational radius. This size scale corresponds to the starting section of the jet, and it can be assumed that the collimation of the jet occurs before that distance. A smaller black hole mass of 5 × 10⁷ M_⊙ was derived by Kaastra & de Korte (1988); this would imply that jet collimation occurs at a larger distance when measured in units of the black hole's gravitational radius.
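The conversions used in this and the following paragraph, from mas to projected pc via the angular-diameter distance and then to gravitational radii, are reproduced below, assuming D_L = 403.1 Mpc as quoted earlier.

```python
import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)
d_a_pc = 403.1e6 / (1.0 + 0.089) ** 2      # angular-diameter distance, pc
R_G_PC = 9e-6                              # R_g for M_BH = 1.84e8 M_sun

for label, pos_mas in [("2009 gamma-ray flare", 0.041),
                       ("2009 radio flare", 0.096),
                       ("2010 gamma-ray flare", 0.163),
                       ("2010 radio flare", 0.205)]:
    r_pc = pos_mas * MAS_TO_RAD * d_a_pc   # projected distance
    print(f"{label}: {r_pc:.3f} pc = {r_pc / R_G_PC:.1e} R_g")
# -> ~0.07 pc ~ 7.5e3 R_g for the first flare, and ~0.27-0.34 pc for the
#    2010 flares, as quoted in the text.
```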
The second γ-ray flare and the subsequent radio flare may have occurred at a distance of 0.27-0.34 pc, which is consistent with the size of the broad-line region of III Zw 2 (Kaastra & de Korte 1988). This suggests that the jet probably collides with the inner boundary of the disk wind within the broad-line region, or with clouds in the torus on a larger scale, resulting in the observed flares (see the discussion in Section 3.7).
Relaxing Common Belief for Social Networks
We propose a relaxation of common belief called factional belief that is suitable for the analysis of strategic coordination on social networks. We show how this definition can be used to analyze revolt games on general graphs, in particular by giving an efficient algorithm that characterizes a structural result about the possible equilibria of such games. This extends prior work on common knowledge and common belief, which has been too restrictive for use in understanding strategic coordination and cooperation in social network settings.
INTRODUCTION
Common knowledge and its analogues (e.g. common belief) are fundamental in the analysis of coordination and cooperation in strategic settings. Informally, common knowledge is the phenomenon that, within a population, everyone knows that a proposition is true, everyone knows that everyone knows that it is true, everyone knows that everyone knows that everyone knows that it is true, and so on, ad infinitum. The existence of common knowledge often shows up as an assumption that underlies some kind of strategic coordination. For example, in the standard game theoretic setting, it is typically assumed that the agents playing a game have common knowledge both of the payoffs of the game and of the rationality of each agent. This assumption undergirds the agents' ability to coordinate on equilibria.
A rigorous investigation into the consequences of this type of assumption can be traced back to Robert Aumann [2], who showed that if two rational agents have the same prior, and their posteriors for an event are common knowledge, then their posteriors must be equal (i.e., it is impossible for such agents to "agree to disagree"). Later work considered whether assumptions about common knowledge, and the mathematical consequences of those assumptions, were realistic for trying to explain economic phenomena. In particular, it seemed unrealistic to expect agents to reason about infinite hierarchies of knowledge. Initial attempts to address this critique involved truncating the infinite hierarchy that is required in our informal definition of common knowledge after some large, but finite, number of levels. However, this line of inquiry resulted in the discovery of "common knowledge paradoxes" arising from examples like Ariel Rubinstein's Electronic Mail Game [21]. Such examples demonstrated that in situations where common knowledge was required for coordination, if the infinite hierarchy were truncated to any finite hierarchy, strategic agents behaved unexpectedly and unrealistically. That is, they behaved very differently when a proposition was close to common knowledge, in the sense that many (but only finitely many) levels of the hierarchy were satisfied, than they did when the proposition was actually common knowledge (and therefore infinitely many levels were satisfied).
An important breakthrough came from Dov Monderer and Dov Samet in 1989 [14]. Inspired by an earlier version of Rubinstein's work [21] and by the work of Aumann [2], Monderer and Samet proposed an alternative to truncating the infinite hierarchy of knowledge: relaxing the requirement of "knowledge" at each level in the hierarchy. They introduced the notion of common belief, an analogous concept to common knowledge that is defined by replacing "everyone knows" with "everyone believes with probability (at least) p" at each place where it occurs in our informal definition of common knowledge. Under this definition, they showed that common belief approximates common knowledge in precisely the way that truncating the infinite hierarchy did not: when common knowledge of some proposition is relaxed to common belief, agents behave in approximately the same way they did when they had common knowledge of the proposition. For example, common belief can be applied to generalize Aumann's result mentioned above: when rational agents have the same prior and their posteriors about an event are commonly believed, these posteriors must be approximately equal.
In the same paper, they also detailed a key insight that allowed them to address the critique that common knowledge might be too unrealistic to have explanatory power in economics. They showed that common knowledge has an alternative definition (used implicitly in [2]) that is formally equivalent to the more natural definition but simpler to reason about. This alternative, inspired by the example of public announcements, defines common knowledge in terms of events that are evident knowledge: events whose occurrence implies knowledge of their occurrence within the entire population. Common belief has a similar, formally equivalent alternative definition in terms of events that are evident belief: events whose occurrence implies belief with probability at least p of their occurrence within the entire population.
Here, it is worth noting that our work most heavily borrows from that of Monderer and Samet and, as such, their paper [14] is highly recommended both as a primer for what follows here and as an introduction to common knowledge and common belief in general.
In a somewhat orthogonal line of research, Michael Suk-Young Chwe studied common knowledge (not common belief) on networks in the context of a revolt game [6,7]. In his setting, each agent has a threshold that represents the size of the revolt in which she would agree to participate. For example, an agent with a threshold of 3 would need to know that at least 2 other agents would participate in order to agree to participate in the revolt herself. To decide whether this threshold is met, agents learn the thresholds of their neighbors. They use this information (and knowledge of their 2-hop neighborhood in the graph) to reason about which agents have their thresholds satisfied and consequently whether their own threshold is satisfied. Because agents require absolute certainty in this setting, they only consider their neighbors when conducting this reasoning.
The strict requirement that absolute certainty is necessary for an agent to revolt implies that for a group of agents to revolt, it must be common knowledge among them that all of their thresholds are satisfied. Consequently, the problem of finding groups of revolting agents reduces to the problem of finding cliques of a certain size in the network. However, an important innovation of Chwe's work, in contrast to the settings described above, is that common knowledge in his setting is a local phenomenon occurring within those cliques, not a global phenomenon occurring within the entire population.
Both approaches, that of Monderer and Samet and that of Chwe, are limited in their usefulness in social network settings, because their respective notions are unlikely to apply in many natural graphs that are used to model social networks. For example, an Erdős-Rényi random graph on n vertices with p = 10/n will almost certainly be sparse, so population-level phenomena like common belief will not arise. On the other hand, an Erdős-Rényi random graph on n vertices with p = 1/2 would be unlikely to contain large cliques, so the phenomenon of local common knowledge would be severely limited, despite the fact that each agent, in seeing about half the graph, should have a lot of information about the entire population and might be expected to be able to coordinate with some large fraction of it.
We propose that Monderer's and Samet's concept of common belief, itself a relaxation of common knowledge, can be further relaxed to a notion of common belief among a faction (i.e., a minimal-size subset) of the population, while retaining both its mathematical simplicity (in being defined in terms of events that are evident belief) and its economic explanatory power. We refer to this notion as factional belief.
Factional belief is a natural application of the ideas of Monderer and Samet to network settings similar to those of Chwe. It retains from Chwe the idea that common knowledge or belief can occur in only a subset of the population and still motivate their behavior. However, it is not so prohibitively strict that it would be unlikely to occur endogenously in many natural graphs. Factional belief is not necessarily local: agents can, and may need to, reason about agents outside of their neighborhood. As such, it does not require cliques.
Our Contributions
• We formally define a notion of factional belief (Section 2), which can be used to analyze revolt games on general graphs (Section 3). Prior notions of common knowledge and common belief were insufficient for this type of analysis.
• We provide an algorithm that characterizes a structural result about the types of equilibria that are possible in instances of the network revolt games described in Section 3 (Section 4) and a few natural extensions of those games (Section 5).
• We show that, surprisingly, it is sufficient for our algorithm to only have access to the degree sequence of the network; additional details of the network beyond the degree sequence are not relevant.
• We demonstrate the practical utility of our algorithms from Sections 4 and 5 by applying them to simulated network data to explore how various parameters of networks and of the model relate to the size of revolts that are supported in equilibria of the network revolt game (Section 6).
Additional Related Work
The work of Stephen Morris is particularly notable when surveying the literature related to common knowledge and common belief. Morris, often following the work of Monderer and Samet, has done much theoretical work relating to common knowledge and common belief [15, 17-19]. More recently, he has also collaborated with Benjamin Golub to study higher-order reasoning, reminiscent of the infinite hierarchy of reasoning in the initial definitions of common knowledge and common belief, in network settings [10, 11].
Underlying Morris' work, above, and our work is our assertion that common knowledge, common belief, and factional belief are useful, not just as mathematical concepts, but for understanding real social and economic phenomena. Here, we briefly outline some relevant work in applying common knowledge and common belief to explaining such phenomena. One clear direction for future research is to try to similarly apply factional belief as an explanatory tool in these and other settings.
Morris has applied common knowledge/belief to settings such as contagion [16] and global games [20].
Chwe, in 2013, revisited his earlier work and expanded his purview to consider how common knowledge is generated in society [8]. He proposed that the importance of rituals in society can be understood from the perspective that rituals create conditions under which common knowledge can be generated. This theory fits nicely with understanding common knowledge through the lens of evident knowledge events. Another potential direction for future research would be to try to mathematically model this ritualistic generation of common knowledge.
Finally, common knowledge, common belief, and factional belief can be used as tools to help understand the formation and transformation of social norms. In particular, Cristina Bicchieri proposed and meticulously advocated for a definition of social norms of which our definition of factional belief is very reminiscent [4].
DEFINING FACTIONAL BELIEF
For the following definitions, let (Ω, Σ, Pr) be a probability space, where Ω denotes a set of states, Σ denotes a σ -algebra of events, and Pr denotes a probability measure on Σ. Let I denote a set of agents.
For each i ∈ I, Π_i is a partition of Ω into measurable sets with positive probability; it is, therefore, a countable partition. For ω ∈ Ω, the element of Π_i that contains ω is written as Π_i(ω). Π_i can be interpreted as the information available to agent i: that is, Π_i(ω) is the set of states that are indistinguishable to i when i observes ω. Let B_i^p(E) denote the event that agent i believes in event E with probability at least p. Formally, we write B_i^p(E) = {ω : Pr[E | Π_i(ω)] ≥ p}. Lastly, for events E and F, we use the notation E ⊆ F to denote that, whenever E occurs, F occurs.
The following examples are helpful to illustrate this notation. When rolling a fair die, with equally probable outcomes in the set {1, 2, 3, 4, 5, 6}, we have {2, 4} ⊆ {Outcome is even}. Similarly, when the die has been tossed but the outcome has not been revealed, the event B_i^{1/2}({Outcome is even}) occurs for any agent i. This notation is borrowed from Monderer and Samet [14], and the rest of the terms and claims in this section are defined and stated, respectively, to be analogous to those from their paper. Definition 2.1 (Evident (p, µ)-belief). An event E is an evident (p, µ)-belief if, whenever E occurs, (at least) a µ fraction of the agents assign a probability of at least p to its occurrence. That is: E ⊆ {ω : |{i ∈ I : ω ∈ B_i^p(E)}| ≥ µ|I|}. Following Monderer and Samet, we first define our notion of factional belief in terms of events that are evident (p, µ)-belief. To maintain consistency with their work, we refer to this notion of factional belief as common (p, µ)-belief.
Definition 2.2 (Common (p, µ)-belief). An event F is common (p, µ)-belief at ω if there exists an evident (p, µ)-belief event E with ω ∈ E such that E ⊆ {ω′ : |{i ∈ I : ω′ ∈ B_i^p(F)}| ≥ µ|I|}. That is, F is common (p, µ)-belief whenever there is an event E that is an evident (p, µ)-belief whose occurrence implies the existence of (at least) a µ fraction of agents that believe with probability at least p in F. Note that any event E that is an evident (p, µ)-belief is trivially also common (p, µ)-belief (with F = E). Now, an important property shared by common knowledge and common belief is the formal equivalence of their definitions in terms of evident events and their intuitive definitions as infinite hierarchies. Common (p, µ)-belief retains this property. In order to state this result formally in Proposition 2.4, we need to formally define the infinite hierarchy, which is done below in Definition 2.3.
Informally, each level (n ≥ 1) in this hierarchy refers to the event that there exists (at least) a µ fraction of agents who believe with probability (at least) p in the previous level of the hierarchy. The initial level (n = 0) is simply the relevant event F. So, written out entirely, the full informal definition would be that an event F is common (p, µ)-belief if there exists a µ fraction of agents who believe F with probability p, there exists a µ fraction of agents who believe with probability p that there exists a µ fraction of agents who believe F with probability p, and so on, ad infinitum.
Definition 2.3. An event F is common (p, µ)-belief (in the hierarchical sense) at ω if ω ∈ ∩_{n≥1} F_µ^n, where F_µ^0 = F and F_µ^n is the event "∃ J ⊆ I with |J| ≥ µ|I| such that ∀j ∈ J, B_j^p(F_µ^{n−1}) occurs".
Proposition 2.4. For every event F, every 0 ≤ p ≤ 1, and every 0 ≤ µ ≤ 1, F is common (p, µ)-belief at ω in the sense of Definition 2.2 if and only if it is common (p, µ)-belief at ω in the sense of Definition 2.3. The proof of this proposition is essentially the same as the proof of the analogous proposition (Proposition 2) in Monderer's and Samet's paper [14]. However, the details of the proof are not particularly relevant or instructive with regard to our contributions in this work, so we omit them here and consign them to Appendix A. The important takeaway from this proposition is, as noted above, the formal equivalence between the hierarchical definition of common (p, µ)-belief (Definition 2.3) and the definition in terms of evident belief (Definition 2.2). As with the analogous definitions for common knowledge and common belief, the former definition is more intuitive and perhaps more natural, but the latter definition is more mathematically convenient and is what we will reference in what follows. The latter definition is also a notion that better corresponds to how agents might be expected to reason about this type of belief in reality, since it is unrealistic to suppose that they consider infinite hierarchies of belief.
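To make the belief operator and Definition 2.1 concrete, the following toy computation evaluates B_i^p(E) and checks the evident (p, µ)-belief condition on a small finite probability space; the die, agent partitions, and thresholds are illustrative assumptions.

```python
from fractions import Fraction

OMEGA = frozenset(range(1, 7))                    # a fair die
PROB = {w: Fraction(1, 6) for w in OMEGA}

def belief_event(event, p, partition):
    """B_i^p(E) = {w : Pr[E | Pi_i(w)] >= p}."""
    out = set()
    for cell in partition:
        cond = sum(PROB[w] for w in cell & event) / sum(PROB[w] for w in cell)
        if cond >= p:
            out |= cell
    return out

def is_evident_belief(event, p, mu, partitions):
    """Check E subseteq {w : at least a mu fraction of agents p-believe E}."""
    beliefs = [belief_event(event, p, pi) for pi in partitions]
    for w in event:
        frac = Fraction(sum(w in b for b in beliefs), len(partitions))
        if frac < mu:
            return False
    return True

even = frozenset({2, 4, 6})
# Agent 0 learns the parity; agent 1 learns nothing:
partitions = [[{1, 3, 5}, {2, 4, 6}], [set(OMEGA)]]
print(belief_event(even, Fraction(1, 2), partitions[1]))   # all of Omega
print(is_evident_belief(even, Fraction(1, 2), Fraction(1, 2), partitions))
```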
MODEL
Let G = (V , E) be a graph and I be a set of n agents, such that each vertex v i ∈ V corresponds to an agent i ∈ I . We will think of G as representing a social network of strategic agents who are participating in a revolt game. The graph G is common knowledge among the agents.
Nature draws the state of the world s ∈ S according to a distribution D_S and selects a type t_i ∈ T = {α, ν} ∪ X for each agent i, where X = ∪_j {χ_j} is non-empty (and finite). The types are selected independently at random according to a distribution D_T^s associated with state s.
Each agent i will observe the type of each agent k ∈ I such that (v i , v k ) ∈ E (this set of agents constitutes the set of neighbors of i). The information resulting from this observation-an agent's type and the types of all of her neighbors-defines that agent's context. When agent i has the context c, we write it as c(i) = c. We use C to denote the set of all contexts that are possible in G.
Ex ante, or before the selection of the state, the assignment of types, and the observation of contexts, each agent i chooses a pure strategy σ_i : C → {R, Y} mapping contexts to actions. A strategy profile (σ_1, σ_2, ..., σ_n) is a collection of strategies, one for each agent. Let σ_{−i} = (σ_1, σ_2, ..., σ_{i−1}, σ_{i+1}, ..., σ_n) denote the strategies of all the agents except for i. Similarly, ex post, or after the selection of types and the observation of contexts, an action profile (a_1, a_2, ..., a_n), where a_i = σ_i(c(i)), is a collection of actions, one for each agent.
Let R(a_1, a_2, ..., a_n) = {i ∈ [n] : a_i = R} denote the set of agents who play the action R (revolt) given their context and strategy. An agent i with type t_i = t receives a payoff according to a type-dependent function f_i^t : {R, Y}^n → [0, 1], parameterized by thresholds p_j, µ_j ∈ [0, 1] for each conditional type χ_j.
We call σ i a best-response to σ −i when σ i maximizes agent i's ex ante expected payoff given σ −i , which, by linearity of expectation, is equivalent to maximizing her expected payoff for each c ∈ C (using the beliefs she would have about the contexts of the other agents after observing c).
We say that a strategy profile (σ 1 , σ 2 , . . . , σ n ) is an equilibrium when each strategy σ i is a best-response to σ −i .
Intuitively, the game is defined so that there is a natural strategy for each agent based on type:
• Agents of type α should always revolt.
• Agents of type ν should never revolt.
• Agents of type χ_j should conditionally revolt:
 - Each type χ_j is characterized by a pair of thresholds (p_j, µ_j) that indicate an agent of type χ_j should revolt when she believes, with probability at least p_j, that at least a µ_j fraction of the agents will revolt, i.e., when Pr[|R(a_1, a_2, ..., a_n)| ≥ µ_j n] ≥ p_j.
An important thing to note about this game is that when there are agents of type ν, there is no equilibrium in which each agent revolts regardless of her beliefs. Similarly, when there are agents of type α, there is no equilibrium in which each agent chooses not to revolt regardless of her beliefs.
Even without these, though, there are still potentially multiple equilibria corresponding to revolts of different sizes, and our work does not address equilibrium selection. Consequently, in what follows, we will write that a revolt (of a particular size) is supported in equilibrium instead of writing that a revolt will occur. Similarly, we write that agents are secure enough to revolt when they sufficiently believe their thresholds are met instead of writing that agents will revolt.
Lastly, in this work we primarily consider the largest revolts that are supported in some equilibrium. Such revolts are supported by symmetric equilibria, in which each agent adopts the same strategy-namely, the strategy detailed in the bullet points above.
Motivating Example
To see this model in practice and get a feeling for how agents need to reason about the concepts that we have introduced, we will work through a modest example.
Suppose that G is a grid of n vertices embedded on a torus, so that the vertex associated with each agent is adjacent to the vertices representing exactly four other agents (and there is no boundary). Thus, a context in this graph consists of the central agent and her type plus the types of each of her neighbors in G. In this example, we will have two types of agents: (1) agents of type χ, who want to revolt conditionally and feel secure enough to revolt when the threshold pair (p = 2/5, µ = 1/2) is satisfied; and (2) agents of type ν, who never want to revolt. We will also have two equally likely states: an anti-government state A, where agents are of type χ with probability 4/5, and a pro-government state B, where agents are of type ν with probability 4/5. We also define the notion of a candidate agent. In this setting, for reasons that will become clear in the proof of the following proposition, we refer to agents of type χ with two or more neighbors of type χ as candidate agents.
Proposition 3.1. In this model, when G is sufficiently large, the event that at least 1/2 of the agents are candidate agents is an evident (2/5, 1/2)-belief. When it occurs, it is a common (2/5, 1/2)-belief that supports a revolt of size 1/2.
Proof. First, we will compute Pr[State = A|c] for each context c and use the computed values to demonstrate that our definition of candidate agents is the correct one for this setting; candidates are likely to feel secure enough to revolt, based on their p-thresholds.
Each agent has 5 independent samples from the probability distribution defined by the state with which to calculate the probability of each state given her information. The first step is to compute the likelihood of her context given the state, which can be calculated by evaluating the probability mass function of a binomial distribution with the appropriate parameters. Then, the probability of each state given a certain neighborhood can be calculated using Bayes' rule. The results of these calculations are shown in Tables 1 and 2. Now, agents of type χ that have at most one neighbor of type χ believe that the state is A with probability at most 1/5, so they will not feel secure enough to revolt: a revolt of size µ = 1/2 is very unlikely to be supported when the state is B, and they do not sufficiently believe that the state is A.
As a result, it is exactly the agents of type χ with two or more neighbors of type χ, which we defined earlier as candidate agents, in which we are interested (the cases with k ≥ 3 in Table 2). In particular, we are interested in the probability that a majority of agents are candidates.
Claim. The probability that at least half the agents are candidates, given that the state is A, is at least 1739/3125.
Proof of Claim. For this, we can try to count all the non-candidate agents and see how likely they are to outnumber the candidates.
Let X count the number of non-candidate agents. Specifically, let X = Σ_{i∈[n]} X_i, where X_i is an indicator variable that is 1 if and only if agent i is not a candidate. The expectation of each X_i is therefore equal to the probability that i is not a candidate, which can happen in three ways: (1) i is of type ν; (2) i is of type χ and has no neighbors of type χ; or (3) i is of type χ and has one neighbor of type χ. The probability of (1) is trivially 1/5. Conditioned on i being of type χ, the respective probabilities of the remaining possibilities are (2) 1/625 and (3) 16/625. To obtain unconditional probabilities for (2) and (3), we multiply each conditional probability by 4/5. Consequently, since the events (1), (2), and (3) are disjoint, we have Pr[X_i = 1] = 1/5 + (4/5)(1/625) + (4/5)(16/625) = 693/3125. Therefore, by linearity of expectation, E[X] = 693n/3125. Now, we need to upper bound the probability that the non-candidate agents are in the majority. Ideally, we would like to use something tight here, like a typical Chernoff-Hoeffding bound. Unfortunately, the X_i's are not independent; the contexts of agents that share neighbors are correlated, which complicates the calculation. For our more rigorous analysis later, we will insist on tighter probability bounds, but for the sake of simplicity, in this example, Markov's inequality is sufficient: Pr[X ≥ n/2] ≤ E[X]/(n/2) = 1386/3125.
For our more rigorous analysis later, we will insist on tighter probability bounds. But for the sake of simplicity, in this example, Markov's inequality is sufficient: Consequently, the probability that at least half the agents are candidates, given that the state is A, is at least (1 − 1386 3125 ) = 1739 3125 . □ Candidate agents can perform this exact same calculation and, further, they believe that the state is A with probability at least 4 5 . Thus, when the graph is large enough that the context of any individual agent is inconsequential in their reasoning about the total fraction of candidate agents, the probability that they assign to at least half of the agents being candidate agents is at least 4 5 * ( 1739 3125 ) ≥ 2 5 . That is, in sufficiently large graphs, candidate agents believe with probability at least 2 5 that at least 1 2 of the agents are candidate agents. As a result, the event that at least 1 2 of the agents are candidate agents is an evident ( 2 5 , 1 2 )-belief. Consequently, when it occurs, by definition, it is common ( 2 5 , 1 2 )-belief. In this event, a revolt of size 1 2 is supported, since at least half of the agents have their thresholds satisfied, and consequently feel secure enough to revolt.
APPLYING FACTIONAL BELIEF: A COMPUTATIONAL PERSPECTIVE
The network revolt game model described in Section 3 gives us a useful setting in which to apply our definitions from Section 2 and demonstrate how they can provide insight into strategic coordination. In this section, we explore the interaction between factional belief and strategic coordination from a computational perspective, by trying to answer an elementary question: When can we efficiently determine if strategic coordination (i.e. in our model, a revolt) is supported under given conditions?
In pursuing an answer to this question, we seek to apply the intuition that we have constructed in our motivating example to a more general setting. Toward that end, a relatively modest generalization of the model used in the example (our Fundamental Case, below) is rich enough to provide an interesting, non-trivial answer to our question and to provide robust intuition that guides us through the various extensions of the model that we discuss in Section 5.
Fundamental Case: Low-Degree Graphs, Two States, and Three Types
Recall that our model requires a description of possible agent types and possible states (which specify probability distributions over those types). For our initial case, there are three agent types and two possible states.
Our focus will be on agents who want to revolt conditionally-agents of type χ -who have the threshold pair (p, µ) for an arbitrary 0 ≤ p < 1 and 0 ≤ µ < 1. In some sense, these are the truly "strategic" agents; they need to coordinate with other agents in order to feel secure enough to revolt. There are also agents who always behave in a prescribed manner, regardless of their contexts or the state: pro-government agents of type ν , who will never revolt, and anti-government agents of type α, who will always revolt.
The possible states are A, an anti-government state, and B, a pro-government state. The likelihood of each state, the values of p and µ for agents of type χ, the corresponding distributions D_T^A and D_T^B over the types in each state, and the distribution D_S over the states are commonly known to the agents under the prior P. We require, as can be inferred from the labels assigned to the states as being anti- and pro-government, that anti-government agents be more likely in the anti-government state; that is, D_T^A(α) > D_T^B(α). In this more concrete setting, we are able to pose a straightforward question: Given values µ* and q*, is a revolt of size (at least) µ* supported with probability at least q*? We refer to this problem as Revolt. Note that there is a nuance regarding the timing of the network revolt game model described in Section 3: Revolt considers the likelihood of a revolt of a certain size being supported in a given network ex ante, i.e., before the selection of the state and the assignment of types to agents.
Although the question posed by Revolt is straightforward to state, it is not straightforward to solve efficiently. In fact, we prove a hardness result for Revolt in Appendix B. Consequently, we will instead consider a relaxed version of the problem that we will be able to solve efficiently. In order to define this new problem, we will need two error terms, which will be constants given as input: ϵ, δ > 0. We also introduce two additional priors, P⁻ and P⁺, derived from the common prior P = (p, µ, D_T^A, D_T^B, D_S) that is given as input to both problems. Now, we are ready to define our new problem, Promise Revolt.
Promise Revolt. Input: $(G, P, \mu^*, \epsilon, \delta)$, where G is a graph, $P = (p, \mu, D_T^A, D_T^B, D_S)$ is the common prior, and ϵ, δ > 0 are constants. Output: When exactly one of the following cases is true, output the corresponding symbol:

Ω: A revolt of size $\mu^* - \epsilon$ is supported with probability at least $1 - \delta$ under the prior $P^-$.

A: A revolt of size $\mu^* - \epsilon$ is supported with probability at least $\Pr[\text{State} = A] - \delta$ under the prior P.

∅: A revolt of size $\mu^* + \epsilon$ is supported with probability at most δ under the prior $P^+$.
Promise Revolt is named so as to emphasize that it is a promise problem, in the typical sense. The "promise" is that the given instance is such that exactly one of the cases from the definition of the problem is true. Given that promise, the three cases are mutually exclusive and any solution is required to always output the correct answer. It is exactly this promise to exclude difficult inputs that makes Promise Revolt easier to solve than Revolt.
Still, in providing an algorithm to solve Promise Revolt, we will see that the problem has a structure that allows us to closely approximate Revolt by solving Promise Revolt (for large graphs). This structure is discussed in more detail at the end of this subsection (4.1), particularly with respect to Figure 1, which helps to illustrate this intuition.
Lastly, note that we assume all inputs to Revolt and Promise Revolt are given as rational numbers. This leads us to state our first theorem:

Theorem 4.4. There exists $\delta(n) \in \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$ such that for any graph G with n vertices where the largest degree of any vertex is $O_{\epsilon,P}(\sqrt[3]{n})$, Algorithm 3 can be used to solve Promise Revolt$(G, P, \mu^*, \epsilon, \delta = \delta(n))$ in polynomial time.
Before we get to the proof of Theorem 4.4, we briefly discuss our assumption that each agent has a degree that is $O(\sqrt[3]{n})$. Primarily, this assumption serves to simplify our analysis. It is convenient to make some distinction between "low-degree" agents and "high-degree" agents to highlight the fact that a prototypical high-degree agent would have a lot more information about the state than a prototypical low-degree agent. However, any particular choice of cutoff to separate high- and low-degree agents is somewhat arbitrary. We use the cutoff $\sqrt[3]{n}$, which is mathematically convenient for defining an upper bound on our error function δ(n) in Theorem 4.4.
Given this cutoff, we focus first on the case where there are only low-degree agents, which is sufficient to provide the guiding intuition that we will follow in the next section, when we discuss broadening the setting in various ways. One such extension will involve allowing vertices of arbitrary degree.
We now proceed by making several arguments which form the building blocks of the proof of Theorem 4.4 that follows. Let $e_s(\tau)$ denote the expected fraction of type-τ agents in state s. Define the set of candidate states as those states s for which $e_s(\chi \cup \alpha) \geq \mu$. Let C(χ) be the set of contexts centered around agents of type χ. Define the set of candidate contexts $C_C \subseteq C(\chi)$ as those contexts whose agents p-believe that the state lies in the set of candidate states. (Below, we slightly abuse notation, treating $C_C$ as if it were a type when writing expressions such as $e_s(C_C \cup \alpha)$.)

ALGORITHM 2: Comparing the size of the largest revolt supported in each state. Input: $X_A$, $X_B$ (output from Algorithm 1) and $\mu^*$.

ALGORITHM 3: Solving Promise Revolt. Input: $(G, P, \mu^*, \epsilon, \delta)$, where G is a graph of n vertices, $P = (p, \mu, D_T^A, D_T^B, D_S)$ is the common prior, and ϵ, δ > 0 are constants. Output: Ω, A, ∅, or Null.

Lemma 4.5. Algorithms 1 and 2 can be computed in time polynomial in n.

Proof. The key for this property of Algorithm 1 is that contexts are identity-agnostic, so the number of contexts is polynomial in n when the number of types is a constant. Therefore, we are able to enumerate all of the possible contexts in polynomial time. For each of these contexts c, we can compute the relevant probabilities, $\Pr[\text{State} = s \mid c]$ and $\Pr[c \mid \text{State} = s]$ for both states s, using Bayes' rule. The rest of the steps in the algorithm are linear in the number of contexts, and therefore also computable in polynomial time. Algorithm 2 is trivially computable in constant time. □

Having shown that they are polynomial-time computable, we now demonstrate the correctness of Algorithms 1 and 2 under idealized conditions.

Claim. When agents believe with probability 1 that the actual size of the largest supported revolt in each state will exactly equal its expected size, then Algorithm 1 correctly computes the expected size of the largest revolt in each state. When, further, it is true that the actual size of the largest supported revolt in each state will exactly equal its expected size, then Algorithm 2 identifies the set of states in which a revolt of size $\mu^*$ is supported with no error.
Proof of Claim. Algorithm 1 first defines the set of candidate states: these are the states in which it is possible, but not necessarily the case, that type-χ agents will feel secure enough to revolt, because the expected fraction of α- and χ-type agents is at least µ. If there are no such states, then only agents of type α will revolt. If both states are candidates, then all α- and χ-type agents will feel secure enough to revolt: Agents of type α always revolt and agents of type χ have their µ threshold met in both states, by the definition of a candidate state. As a result, their p threshold is also necessarily met, because the probability of the state being either A or B is 1 ≥ p. The most interesting case is when only A is a candidate state. In this case, only type-χ agents who p-believe that the state is A will have their p threshold met. So, similarly to our example from Section 3.1, we call those agents candidate agents (and refer to their contexts as candidate contexts). If the number of candidate agents and α-type agents, given that the state is A, is at least µ, then all of those agents feel secure enough to revolt. This is true in any state, because even when the state is B, candidate agents, by definition, p-believe that the state is A.
Algorithm 2 simply compares the size of the expected revolt in each state to $\mu^*$ to decide in which states, if any, a revolt of expected size at least $\mu^*$ is supported. If the actual size of the revolt in any state is exactly equal to its expectation, as we assume, then Algorithm 2 introduces no error. □

Now, the assumption that the actual supported revolt exactly equals the expected size of the supported revolt is, of course, too strict. However, we will be able to show that the actual size of the supported revolt concentrates around its expectation, and consequently, the errors in our algorithms decrease quickly as n grows.
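Before turning to those concentration arguments, here is a hedged Python sketch of Algorithms 1 and 2 for the Fundamental Case. The function names, the `prior` dictionary layout, and the string labels returned by `algorithm2` are our own illustrative choices rather than the paper's notation, and the sketch handles the lone-candidate case only when that candidate is A, as in the discussion above:

```python
from math import comb

TYPES = ('alpha', 'chi', 'nu')
STATES = ('A', 'B')

def multinomial_pmf(counts, probs):
    # Probability of drawing exactly these neighbor-type counts i.i.d. from probs.
    pmf, rem = 1.0, sum(counts)
    for k, q in zip(counts, probs):
        pmf *= comb(rem, k) * q ** k
        rem -= k
    return pmf

def algorithm1(degree_sequence, prior):
    """Expected size of the largest supported revolt in each state: (X_A, X_B)."""
    p, mu, D, S = prior['p'], prior['mu'], prior['D'], prior['S']
    e = {s: D[s]['alpha'] + D[s]['chi'] for s in STATES}   # e_s(chi ∪ alpha)
    candidates = [s for s in STATES if e[s] >= mu]
    if not candidates:
        return D['A']['alpha'], D['B']['alpha']            # only alpha-types revolt
    if len(candidates) == 2:
        return e['A'], e['B']                              # all alpha- and chi-types revolt
    assert candidates == ['A'], "sketch assumes A is the lone candidate state"

    def e_cc(s):
        # e_s(C_C): expected fraction of chi-type agents whose context (degree
        # plus neighbor-type counts) makes them p-believe that the state is A.
        # Assumes chi has positive probability in each state.
        total = 0.0
        for d in degree_sequence:
            for na in range(d + 1):
                for nc in range(d + 1 - na):
                    counts = (na, nc, d - na - nc)
                    like = {st: D[st]['chi'] *
                                multinomial_pmf(counts, [D[st][ty] for ty in TYPES])
                            for st in STATES}
                    joint = S['A'] * like['A'] + S['B'] * like['B']
                    if S['A'] * like['A'] / joint >= p:
                        total += like[s]
        return total / len(degree_sequence)

    if e_cc('A') + D['A']['alpha'] >= mu:    # candidate agents revolt in either state
        return e_cc('A') + D['A']['alpha'], e_cc('B') + D['B']['alpha']
    return D['A']['alpha'], D['B']['alpha']

def algorithm2(X, mu_star):
    # Which states support an expected revolt of size at least mu_star?
    X_A, X_B = X
    if X_A >= mu_star and X_B >= mu_star:
        return 'Omega'
    return 'A' if X_A >= mu_star else 'empty'
```

The double loop over neighbor-type counts enumerates the identity-agnostic contexts, so the running time is polynomial in n, in line with Lemma 4.5.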
Recall that computing the expected size of a supported revolt involves counting the number of some subset of three kinds of agents: α-type agents, χ-type agents, and candidate agents. For α-type and χ-type agents, it is simple to show that the number of such agents concentrates around its expectation because agent types are assigned independently at random.

Lemma 4.6. Given ϵ > 0, the probability that, in a given state, the number of agents of type χ and of type α differ from the expected number of agents of type χ and of type α by a multiplicative factor of ϵ is at most δ = δ(n) for some $\delta(n) \in \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$.
Proof. Because the agent types are drawn independently at random, we can use a standard Chernoff-Hoeffding bound on the expected number of agents of each type. Consequently, the probability that the actual number of each type of agent deviates from its expectation by the stated factor decays exponentially with n, and therefore we can choose $\delta(n) \in \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$ so that this probability is trivially smaller. □

Counting the number of candidate agents, as foreshadowed in the Motivating Example of Section 3.1, is somewhat more complicated. Here, we need to consider the contexts of agents, not just their type, and agents' contexts are correlated with the contexts of their neighbors and their neighbors' neighbors in G. We will show, though, that as the graph grows, the effect of this correlation is small. In fact, we will still be able to prove an exponentially-decreasing bound on the difference between the actual and expected number of candidate agents.

Lemma 4.7. Given ϵ > 0, the probability that, in a given state, the number of candidate agents is less than the expected number of candidate agents by a multiplicative factor of ϵ is at most δ = δ(n) for some $\delta(n) \in \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$.
Proof. Let $X_C$ be a random variable that denotes the number of candidate agents. We can write $X_C = \sum_{i=1}^{n} X_{i,C}$, where $X_{i,C}$ is an indicator variable that indicates whether or not agent i is a candidate.
As we have noted above, an agent's status as a candidate is dependent on her context, and as a result, the variables $X_{i,C}$ are not independent. However, given our assumption that the maximum degree of any vertex in G is $O(\sqrt[3]{n})$, their dependence is constrained enough that we are able to apply a useful exponential bound involving the fractional chromatic number $\chi^*(\Gamma)$ of the constraint graph Γ of the random variables $X_{i,C}$.
(See Theorem C.1 in Appendix C for additional details.) For our purposes, it is sufficient to use a trivial bound on the fractional chromatic number of any graph. The fractional chromatic number of a graph is at most the chromatic number of the graph, which is at most the maximum degree of any vertex plus one. So in Γ, where edges correspond to dependencies between pairs of random variables $(X_{i,C}, X_{j,C})$, the maximum degree of any vertex, and therefore the fractional chromatic number $\chi^*(\Gamma)$, is $O_{\epsilon,P}(n^{2/3})$. Applying Theorem C.1 with a deviation of size $\Theta_{\epsilon,P}(n)$ then yields a failure probability of the form $\exp(-\Omega_{\epsilon,P}(n^2 / (\chi^*(\Gamma) \cdot n))) = \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$, as claimed. □
Lemma 4.8. Let $(G, P, \mu^*)$ be given, with constants ϵ, δ > 0.

(1) Suppose Algorithm 1 is run with the given inputs, but with modified $\mu' = \mu + \frac{\epsilon}{3}$ and $p' = p + \frac{\delta}{3}$, and Algorithm 2 is run with the resulting $X_A$ and $X_B$ and the given $\mu^*$. If the result is Ω, then a revolt of size $\mu^* - \epsilon$ is supported with probability at least $1 - \delta$ under the prior P. If the result is A, then a revolt of size $\mu^* - \epsilon$ is supported with probability at least $\Pr[\text{State} = A] - \delta$ under the prior P.
(2) On the other hand, suppose Algorithm 1 is run with the given inputs, with modified $\mu' = \mu - \frac{\epsilon}{3}$ and $p' = p - \frac{\delta}{3}$, and Algorithm 2 is run with the resulting $X_A$ and $X_B$ and the given $\mu^*$. If the result is ∅, then a revolt of size $\mu^* + \epsilon$ is supported with probability at most δ under the prior P. If the result is A, then a revolt of size $\mu^* + \epsilon$ is supported with probability at most $\Pr[\text{State} = A] + \delta$ under the prior P.
Proof. This proof decomposes into two analogous arguments about (1) and (2), which themselves each contain two analogous arguments. We include only the first one in detail here, below, and claim the rest via analogy.
Claim. Suppose that, in instance (1), the result is Ω. Then, the probability that a revolt of size $\mu^* - \epsilon$ is supported in both states is at least $1 - \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$.

Proof of Claim. First, consider the case in which there are no candidate states, so that only α-type agents revolt. By Lemma 4.6, we know that, in either state, the probability that the actual number of α-type agents differs from the expected number of α-type agents by a multiplicative factor of ϵ is $\frac{1}{\exp(\Omega(\sqrt[3]{n}))}$. Therefore, the probability that a revolt of size $\mu^* - \epsilon$ is supported in both states is at least $1 - \frac{1}{\exp(\Omega(\sqrt[3]{n}))}$.

Next, consider the case in which both states are candidate states. In this case, the presence of χ-type agents slightly complicates the analysis. Actual χ-type agents have slightly easier thresholds to satisfy (p and µ) than the χ-type agents considered by the algorithms (p′ and µ′). Our algorithms determined that, in expectation, agents with p′ and µ′ thresholds would feel secure enough to revolt. Each actual χ-type agent, then, must also feel secure enough to revolt, not just in expectation: By Lemma 4.6, the probability that the actual number of each of α-type and χ-type agents differs from its respective expectation by a multiplicative factor of $\frac{\epsilon}{6}$ is $\frac{1}{\exp(\Omega(\sqrt[3]{n}))}$. Combining these, χ-type agents believe with probability $1 - \frac{1}{\exp(\Omega(\sqrt[3]{n}))} \geq p$ that at least a $\mu' - \frac{\epsilon}{3} = \mu$ fraction of agents feel secure enough to revolt, and are thus themselves secure enough to revolt. Applying Lemma 4.6 (from the perspective of Algorithm 2 this time, not the perspective of χ-type agents) we again incur two error terms of $\frac{\epsilon}{6}$ (one each for the χ- and α-type agents). Thus, we can conclude that the probability that a revolt of size $\mu^* - \epsilon$ is supported in both states is at least $1 - \frac{1}{\exp(\Omega(\sqrt[3]{n}))}$.

In the final case, only A is a candidate state; we consider candidate agents instead of all agents of type χ, and proceed exactly as we did in the second case, using Lemma 4.6 to conclude that the number of α-type agents concentrates and Lemma 4.7 to conclude that the number of candidate agents concentrates. Once again, the result is that the probability that a revolt of size $\mu^* - \epsilon$ is supported in both states is at least $1 - \frac{1}{\exp(\Omega(\sqrt[3]{n}))}$. □

Claim. Suppose that, in instance (1), the result is A. Then, a revolt of size $\mu^* - \epsilon$ is supported with probability at least $\Pr[\text{State} = A] - \frac{1}{\exp(\Omega_{\epsilon,P}(\sqrt[3]{n}))}$.

Proof of Claim. The proof of this claim is analogous to the previous proof, where events that are assigned probability at least $1 - \frac{1}{\exp(\Omega(\sqrt[3]{n}))}$ in both states are instead guaranteed only conditional on the state being A. □

The proofs of the analogous claims for (2) are themselves analogous to the above cases for (1).
Finally, we note here that when χ-type agents decide whether or not their thresholds are satisfied, they are not solely relying on their estimates of the fractions of different kinds of agents, as we describe above. They have additional knowledge, since they see the realized types of the agents in their context. However, for small graphs, δ(n) can be chosen to account for this. As the graph grows, since each agent has at most $O(\sqrt[3]{n})$ neighbors, the consequences of observing the types of a few adjacent agents are negligible after conditioning on the state. As a result, for large enough n, after the agent reasons about the state, the probabilistic effect of the agent viewing the types in her context is subsumed by the ϵ error term. Furthermore, the appropriate choice of δ(n) also accounts for the $\frac{1}{\exp(\Omega(\sqrt[3]{n}))}$ terms present in each of the claims stated above, for each n. □

Proof of Theorem 4.4. It follows from Lemma 4.5 that Algorithm 3 terminates in polynomial time with respect to the inputs G and P. We also note that Algorithm 3 terminates in polynomial time with respect to $\frac{1}{\epsilon}$ and $\frac{1}{\delta}$. Further, it follows from Lemma 4.8 that when Algorithm 3 outputs a case, that case is always true. It only remains to show that when exactly one case in the statement of Promise Revolt is true, then Algorithm 3 necessarily outputs that case. Here, the key is our use of three different potential revolt sizes ($\mu^* - \epsilon$, $\mu^*$, and $\mu^* + \epsilon$) and three different priors ($P^-$, P, and $P^+$) in defining the three cases of Promise Revolt. By promising that exactly one of those three cases is true, we guarantee that the values of µ and p are sufficiently far (distance at least ϵ for µ and distance at least δ for p) from the crucial decision thresholds in Algorithm 1 (e.g. the value $e_B(\chi \cup \alpha)$, which is used to determine whether or not B is a candidate state; see footnote 2). This ensures that both calls to Algorithm 1 will return the same values.
We can illustrate this counterfactually: Let $\mu = e_B(\chi \cup \alpha) + \frac{\epsilon}{2}$ with $e_B(\chi \cup \alpha) > \mu^* > e_B(C_C \cup \alpha)$ and let $e_A(C_C \cup \alpha) > \mu + \frac{\epsilon}{2}$. Then, A and B would both be candidate states under the prior $P^-$. As a result, Algorithm 1 (run with prior $P^-$) would return $X_A = e_A(\chi \cup \alpha)$ and $X_B = e_B(\chi \cup \alpha)$. Note that $X_A > X_B > \mu^*$, so Algorithm 2 would return Ω when run with inputs $X_A$, $X_B$, and $\mu^*$. The same analysis holds for $P^-$ with µ and p incremented by $\frac{\epsilon}{3}$ and $\frac{\delta}{3}$, respectively, so applying Lemma 4.8 implies that the case Ω is true.
On the other hand, only A would be a candidate state under the prior P. Algorithm 1 run with the prior P would return $X_A = e_A(C_C \cup \alpha)$ and $X_B = e_B(C_C \cup \alpha)$. Run with these inputs (and $\mu^*$), Algorithm 2 would return A. The same analysis holds for P with µ and p incremented and decremented by $\frac{\epsilon}{3}$ and $\frac{\delta}{3}$, which by Lemma 4.8 implies that the case A is true. We can conclude that these values of µ and $\mu^*$ must be excluded by our promise for any input to Promise Revolt for which our assumed constraints hold.
An argument similar to this counterfactual argument suffices to exclude any value of p or µ that is insufficiently far from a crucial decision threshold in Algorithm 1 (along with associated constraints on µ * ) present in an instance of Promise Revolt. □ Now we can discuss the information that we gain from solving Promise Revolt. In doing so, it is useful to refer to Figure 1, which contains the following illustration: Given µ * , we identify all values of q for which a revolt of size µ * is supported with probability at least q (dark blue and yellow regions), all values of q for which revolt of size µ * may be supported with probability q (light blue, yellow, and grey regions), and all values of q for which a revolt of size µ * is supported with probability strictly less than q (white regions). The blurry regions between distinct colors represent the inputs to Promise Revolt for which two of the cases would overlap (and are therefore excluded from the set of inputs by the "promise").
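Following the structure described in this proof, a hedged sketch of Algorithm 3 can be layered on the `algorithm1` and `algorithm2` sketches given earlier; the exact output convention of the paper's Algorithm 3 is not fully visible in the text, so treating disagreement between the two shifted runs as the "Null" output is our assumption:

```python
def algorithm3(degree_sequence, prior, mu_star, eps, delta):
    # Run Algorithm 1 with the chi-type thresholds shifted up and down by
    # eps/3 and delta/3 (as in Lemma 4.8), classify both runs with Algorithm 2,
    # and commit to an answer only when the two runs agree.
    def run(sign):
        shifted = dict(prior,
                       mu=prior['mu'] + sign * eps / 3,
                       p=prior['p'] + sign * delta / 3)
        return algorithm2(algorithm1(degree_sequence, shifted), mu_star)
    up, down = run(+1), run(-1)
    return up if up == down else None   # None plays the role of "Null"
```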
If we could perfectly solve Revolt, then we would be able to perfectly define the boundaries; there would be no blurry regions between distinct colors, as in Figure 1. The shape of the resulting figure would characterize the equilibria of the network revolt game in the following sense: The blue column (corresponding to values of $\mu^*$ for which only case Ω of Promise Revolt is true) would show the sizes of revolts that are supported in equilibrium with high probability regardless of the state, the next yellow and white column (corresponding to values of $\mu^*$ for which only case A is true) would show the sizes of revolts that are supported in equilibrium with high probability given that the state is A, and the last grey and white column (corresponding to values of $\mu^*$ for which case ∅ is true) would show the sizes of revolts that, with high probability, are not supported in equilibrium (or, equivalently, the sizes of revolts that are supported in equilibrium with low probability).

Footnote 2: Whether or not a decision threshold is in the set of crucial decision thresholds (the thresholds from which our promise guarantees p and µ are sufficiently far) depends on $\mu^*$. In addition to $e_B(\chi \cup \alpha)$, it can also include $e_A(C_C \cup \alpha)$ and the value of p below which $e_A(C_C \cup \alpha) \geq \mu$ holds for candidate contexts and above which $e_A(C_C \cup \alpha) < \mu$.

Fig. 1. Illustrating what we learn from solving Promise Revolt in an archetypal case. Here, $e_B$ and $e_A$ denote the expected size of the revolt supported in states B and A, respectively.

Although we cannot perfectly solve Revolt, as shown in Figure 1, solving Promise Revolt with Algorithm 3 allows us to approximate this shape. By choosing ϵ to be very small, we can make the boundary cases relevant only for very small subsets of the possible values of $\mu^*$ and of µ in the prior P. And, as we have shown via the proof of Theorem 4.4, as n grows, δ(n) quickly becomes very small. Consequently, the range of possibly forbidden values of p in the prior P also quickly becomes small, and the probabilities for each class of distinct equilibria described above (when they exist) converge to 1, Pr[State = A], and 0, respectively.
Lastly, we note that an interesting and surprising corollary to our analysis above is that Algorithms 1, 2, and 3 never use any information about the graph beyond the degree sequence of G-an anonymous list that records the degree of each vertex in the graph. That is, our algorithms require a list of the degrees of the agents, but do not require that any degree value be labelled with the identity of any agent, nor do they require any further information about the set of edges in the graph. Rather, the results of the algorithms are valid for any graph that is consistent with the provided degree sequence, because the concentration of the fraction of candidate agents and agents of each type supersedes the additional structure imposed by any concrete edge set consistent with the degree sequence.
We record this fact with the following proposition: Proposition 4.9. To compute Promise Revolt in polynomial time, we only require the degree sequence of the graph G; the graph itself is not required.
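For illustration, the `algorithm1` sketch given earlier already reflects this proposition: it consumes nothing but a degree sequence and the prior. The prior values below are hypothetical placeholders, not the contents of the paper's Tables 3 and 4:

```python
example_prior = {  # hypothetical values, for illustration only
    'p': 0.4, 'mu': 0.6,
    'S': {'A': 0.5, 'B': 0.5},
    'D': {'A': {'alpha': 0.3, 'chi': 0.4, 'nu': 0.3},
          'B': {'alpha': 0.1, 'chi': 0.4, 'nu': 0.5}},
}
# Any 4-regular graph on 1000 vertices yields the same answer.
X_A, X_B = algorithm1([4] * 1000, example_prior)
```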
BROADENING THE SETTING
The core intuition from our analysis above actually applies to a broader class of settings than that of Section 4.1. The additional complexity present in the broader settings generally warrants additional complexity in the analysis, but the core of the argument follows the same reasoning in each case. As a result, we briefly consider the broader settings in the following sections and illustrate how the logic of the initial analysis can be extended to cover those cases. In doing so, we focus on changes to Algorithm 1; the subsequent changes required to adapt Algorithms 2 and 3 are straightforward.
Smallest Supported Revolt
Algorithm 1 computes the expected size of the largest supported revolt in each state and Algorithm 2 compares the result with some given value µ * to see in which states, if any, the size of supported revolt is at least µ * . However, this setting has a natural symmetry, which allows us to similarly compute the expected size of the smallest supported revolt in each state with Algorithm 1 (and compare the results to some given value using Algorithm 2).
To do this, we define $D_T^{A'} = D_T^B$, but with the probabilities of α- and ν-type agents swapped (so that $\Pr_{D_T^{A'}}[\alpha] = \Pr_{D_T^B}[\nu]$ and $\Pr_{D_T^{A'}}[\nu] = \Pr_{D_T^B}[\alpha]$), and define $D_T^{B'}$ from $D_T^A$ analogously. Intuitively, this corresponds to the smallest supported revolt for the following reason: An agent of type χ feels secure enough to revolt when she believes with probability at least p that at least a µ fraction of agents will feel secure enough to revolt. Conversely, she does not feel secure enough to revolt when she believes with probability at least 1 − p that at least a 1 − µ fraction of agents will not feel secure enough to revolt. As a result, in the setting with the modified inputs, as described, agents who feel secure enough to revolt (including α-type agents) correspond precisely with agents who do not feel secure enough to revolt in the original setting (just as α-type agents in the modified setting correspond to ν-type agents in the original setting). Algorithm 1 computes the expected size of the largest supported revolt in each state, so in the modified setting, it computes a value that corresponds to the expected size of the largest group of agents, in some equilibrium of the original setting, who do not feel secure enough to revolt. Since this value is maximized, the size of the supported revolt is minimized. The precise expected size of that minimum revolt in each state is $1 - X_s$ for each $X_s$ returned by the algorithm.
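As a rough sketch of this transformation (only the α/ν swap is explicit in the text; mirroring the second state symmetrically and leaving the thresholds to the 1 − X_s correspondence are our assumptions):

```python
def reflected_prior(prior):
    # Swap the alpha/nu probabilities within a type distribution.
    def swap(d):
        return {'alpha': d['nu'], 'chi': d['chi'], 'nu': d['alpha']}
    # D_T^{A'} is D_T^B with alpha and nu swapped; the second state is mirrored
    # symmetrically, which goes beyond what the text states explicitly.
    return dict(prior, D={'A': swap(prior['D']['B']),
                          'B': swap(prior['D']['A'])})
```

Running Algorithm 1 on the modified inputs and reading off 1 − X_s for each returned X_s then gives the expected size of the smallest supported revolt in each state, per the correspondence above.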
General Graphs
Suppose that we impose no restriction on the degree of the vertices in G in the statement of Theorem 4.4. It turns out that our results still hold. The existence of high-degree χ-type vertices (i.e. those that would not exist when we assume an $O(\sqrt[3]{n})$ bound on the maximum degree of any vertex) complicates the analysis, but only in the case where A is the only candidate state. When A and B are both candidate states, we only need to calculate the expected number of all χ-type agents, regardless of their degree. When there are no candidate states, χ-type vertices are irrelevant.
When A is the only candidate state, however, there is a key difference: The presence of high-degree agent contexts in the set of candidate contexts would affect our earlier analysis using Theorem C.1 (in the proof of Lemma 4.7), because the contexts of high-degree agents are correlated with many other agents' contexts.
Because of this, though, high-degree agents (with degree at least $c \cdot \sqrt[3]{n}$ for sufficiently large c) have a unique perspective on the graph; their contexts contain a significant amount of information that they can use in determining the state. More concretely, suppose that agent i is a high-degree agent, let N(i) be the set of neighbors of i in G, and let $\hat{f}_i(\tau)$ denote the fraction of agents of type τ in N(i). Now, for any $\epsilon_0 > 0$, applying a standard Chernoff-Hoeffding bound, we have:

$$\Pr[\,|\hat{f}_i(\tau) - e_s(\tau)| \geq \epsilon_0 \mid \text{State} = s\,] \leq 2\exp(-2\epsilon_0^2 |N(i)|) \qquad (1)$$

In particular, if we choose $\epsilon_0$ small enough to distinguish the two states, then since $|N(i)| \geq c \cdot \sqrt[3]{n}$, the right-hand side is in $\frac{1}{\exp(\Omega(\sqrt[3]{n}))}$, and agent i (and by the same argument, any high-degree agent) can correctly determine the state with enough accuracy that their error can be absorbed into the δ error term with an appropriate choice of δ(n). As a result, when the state is A, all high-degree χ-type agents will behave like candidate agents (recall that A is a candidate state).
However, for computational purposes, we will not lump them in with the low-degree candidate agents, because Lemma 4.7 can only apply to low-degree candidates. Instead, in the first if statement after the condition that A is the only candidate state (the line in Algorithm 1 that reads "if $e_A(C_C \cup \alpha) \geq \mu$ then"), we calculate $e_A(C_C \cup \alpha \cup H_\chi)$ instead of just $e_A(C_C \cup \alpha)$, where $H_\chi$ refers to the set of high-degree χ-type vertices. If this is at least µ, then when we calculate $X_A$, we include $H_\chi$. However, when we calculate $X_B$, we only include $H_\chi$ if $e_B(C_C \cup \alpha \cup H_\chi) \geq \mu$, since those high-degree agents will not p-believe that the state is A regardless of the state the way that (low-degree) candidate agents will. If $e_B(C_C \cup \alpha \cup H_\chi) < \mu$, we calculate $X_B = e_B(C_C \cup \alpha)$, as described in Algorithm 1.
Next, we must show it is possible for the agents (and the algorithm) to accurately calculate the expected number of high-degree agents of type χ and know that the actual number of such agents sufficiently concentrates around that expected number. Here, we can rely on the fact that the agents and the algorithm know the degree sequence of the graph. There is some subtlety involved: If the number of high-degree agents is small (less than ϵn), then we cannot provide a very useful concentration bound for the number of high-degree agents of type χ. However, we can essentially ignore the high-degree agents in this case and absorb the error we incur by ignoring them into our choice of δ(n).
On the other hand, if there are at least ϵn high-degree agents, we can again use a standard Chernoff-Hoeffding bound to conclude that the actual number of high-degree χ -type agents will, with high probability, be close to the expected number of such agents. The error here will be smaller than the error from Lemma 4.7 and so can be absorbed there.
Finally, there is one additional subtlety we must address for high-degree agents. While it is true that, as previously mentioned, their contexts contain a significant amount of information that they can use in determining the state, their contexts actually contain more than just information about the state-they contain information about the actual realization of types for a significant number of agents. This point is the primary conceptual reason to make the distinction between high-degree and low-degree agents in the first place. Consequently, we need to show that high-degree agents tend to behave as if they only knew the state, when in fact it is possible that the number of agents of a certain type in their context differs greatly from its expectation and as a result they have additional information to use beyond the state of the world. For this, we again appeal to our previous argument that resulted in the bound expressed by the inequality (1), above.
That argument demonstrates that it is highly improbable for the information in a high-degree agent's context to contradict what her belief would be solely given knowledge of the state. For example, if a high-degree agent uses her context to determine that the state is A, with high probability it will not also be the case that the context that she uses to make that determination has (far) fewer agents of any type than would be expected given that the state is A. Consequently, a high-degree agent will tend to act as if she is just calculating expectations and acting off of them (like a low-degree agent would), even though in actuality she has quite a bit of additional information.
A Larger Set of States
Suppose that there are m states in S, with an associated probability distribution D s T over the set of three agent types T for each s ∈ S. Intuitively, the key insight is similar to the case when A and B are both candidate states in our fundamental setting: When an agent p-believes that the true state is in some subset of the states, then she must believe that her µ threshold would be satisfied in each of the states in that subset in order for her to feel secure enough to revolt.
Guided by this intuition, we modify Algorithm 1 in the following way: As before, we identify the set of candidate states by determining for which states $e_s(\chi \cup \alpha) \geq \mu$. If all the states are candidate states, then as in the analogous instance for the two-state setting, there is no need to reason about belief. So, we set $X_s = e_s(\chi \cup \alpha)$ for each state s. On the other hand, if not all of the states are candidate states, then we do need to reason about belief. First, we set $C_C$, the set of candidate contexts, to contain the contexts of all agents who believe with probability at least p that the state is in some subset of the candidate states. Then, for each state we compute $e_s(C_C \cup \alpha)$. If for some state s′ in the set of candidate states $e_{s'}(C_C \cup \alpha) < \mu$, then we remove s′ from the set of candidate states and recompute the set of candidate contexts. We repeat this iterative removal procedure until $e_s(C_C \cup \alpha) \geq \mu$ for all states s in the current set of candidate states. Once this is true (or the set of current candidate states is empty), then $X_s = e_s(C_C \cup \alpha)$ for all the current candidate states and $X_s = e_s(\alpha)$ in the remaining states.
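A hedged sketch of this iterative-removal procedure, with the state-dependent expectations and the candidate-context computation abstracted into assumed helper functions:

```python
def largest_revolts_m_states(states, mu, e_chi_alpha, candidate_contexts,
                             e_cc_alpha, e_alpha):
    """Iterative removal over candidate states.  The helpers are assumed:
    e_chi_alpha(s)        -> e_s(chi ∪ alpha)
    candidate_contexts(S) -> contexts whose agents p-believe the state is in S
    e_cc_alpha(s, C)      -> e_s(C_C ∪ alpha) for a candidate-context set C
    e_alpha(s)            -> e_s(alpha)
    """
    S_c = {s for s in states if e_chi_alpha(s) >= mu}
    if S_c == set(states):
        return {s: e_chi_alpha(s) for s in states}
    while S_c:
        C = candidate_contexts(S_c)
        bad = {s for s in S_c if e_cc_alpha(s, C) < mu}
        if not bad:
            break
        S_c -= bad            # remove states, then recompute candidate contexts
    C = candidate_contexts(S_c) if S_c else frozenset()
    return {s: e_cc_alpha(s, C) if s in S_c else e_alpha(s) for s in states}
```

Each iteration removes at least one state, so there are at most m passes, matching the efficiency claim below.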
Essentially, this procedure succeeds, because each time we define the set of candidate states, we are as optimistic as possible about the number of agents who may feel secure enough to revolt. This, though, is not too optimistic-i.e. we do not include any superfluous agents-because when all of the states in the set of candidate states satisfy e s (C C ∪ α) ≥ µ, then all of the agents with candidate contexts feel secure enough to revolt, based on our intuition above.
More formally, we can use an inductive argument: The initial set of candidate contexts contains the contexts of every agent for whom it is possible that they will feel secure enough to revolt. At each step when we remove contexts from the set of candidate contexts, any agent with a context that is removed will never feel secure enough to revolt. They do not feel secure enough to revolt with the given set of candidate contexts and α-type agents, and this set, at each step in the iteration, is maximal. Thus, at each step the set C C ∪ α contains every agent who feels secure enough to revolt. So, when the iteration terminates, the resulting (possibly empty) set of agents in C C ∪ α that all feel secure enough to revolt must be the largest such set.
The number of steps in this procedure is polynomial in m, the number of states, so we consider it still to be efficient.
It is worth noting that this result is driven by the specific way we have defined the problem. In our setting, we are only required to consider the equilibrium with the maximum revolt size in each state. As we have shown, enumeration of all of the subsets of states is not required to compute this. However, if we were to try to characterize all of the equilibria of our network revolt game, without making any further assumptions on the states, a stronger method, potentially including enumeration of all of the subsets of states, would be necessary.
EXPERIMENTS
Beyond the theoretical insight they provide, our algorithms from Section 4 also have practical utility for exploring the relationship between strategic coordination (i.e. a revolt supported in some equilibrium) and various parameters, both parameters of network models from which the graph G can be generated and parameters of the prior P. To demonstrate this utility, we return to the concrete setting of our motivating example. For convenience, the relevant parameters of this setting are summarized in Tables 3 and 4. In the following subsections, we focus on how the size of the largest supported revolt in expectation in each state varies with the chosen parameters. That is, we apply Algorithm 1 to compute our dependent variables.
However, in general, Algorithm 3 also has practical utility. For example, we are able to apply Algorithm 3 to instantiate concrete constructions of Figure 1 in a given setting.
Varying Parameters of Network Models
In this experiment, we explore the relationship between the size of the largest supported revolt in expectation, denoted as in Figure 1 by $e_s$ for each state s ∈ {A, B}, and the parameters of the various network models described below. d-regular graphs, which have constant degree sequences, serve as a baseline that helps us interpret the results from the other three network models. Each of the other models is a common social network model that generates graphs according to two parameters. The first parameter for each social network model is the number of vertices n, which we set to n = 1000 for each model. The second parameter is unique to the model (we discuss the details for each model in the associated paragraph below). We treat these parameters as the independent variables.

6.1.1 Methods. For constant-degree graphs, the degree sequence is deterministic given the degree d. So, we ran an implementation of Algorithm 1 a single time with a degree sequence consisting of 1000 entries equal to d and the prior P from our motivating example (see Tables 3 and 4) as input and recorded the output values.
For the social network models, the generated degree sequences are not deterministic. So, for each social network model and for each value of the relevant parameter, we generated 100 graphs of size n = 1000 from the model with that particular parameter value. Then, for each of the 100 generated graphs, we ran our implementation of Algorithm 1 with the degree sequence of the graph and the prior P from our motivating example as input and recorded the mean of the output values (i.e. the average e s for each state s).
The results are illustrated in Figure 3. Constant Degree Sequence. For constant degree sequences, the relevant parameter is d, the degree of each vertex.
As we noted above, the degree sequence is deterministic given d and it is trivial to generate. Furthermore, the output of Algorithm 1 is agnostic to n, so the choice of n = 1000 is somewhat irrelevant. The result would be the same for any n, so the key constraint is whether a d-regular graph on n vertices exists for given n and d. This is the case for any n ≥ d + 1 such that nd is even, and therefore is true for n = 1000.
Power-Law Degree Sequence. For power-law degree sequences, the relevant parameter is γ , which represents the exponent of a power-law distribution. Specifically, in a graph with a power-law degree distribution, the probability of observing a vertex with degree d is proportional to d −γ .
For our experiment, we used the powerlaw package in Python [1] to generate integer sequences of length n = 1000 which are distributed according to a power-law distribution with exponent γ, and then used the Havel-Hakimi algorithm implementation from the networkx package in Python to determine whether the sequences corresponded to feasible graph degree sequences [12]. This two-step process was repeated until we had 100 such feasible degree sequences for each value of γ.

Barabási-Albert Graph Degree Sequence. For degree sequences from Barabási-Albert graphs (also referred to as preferential attachment graphs) [3], the relevant parameter is m, which represents the number of edges that "incoming" vertices attach (preferentially) to "existing" vertices. For our experiment, we used a generator from the networkx package in Python to generate Barabási-Albert graphs [12].
Erdős-Rényi Graph Degree Sequence. For degree sequences from Erdős-Rényi random graphs, the relevant parameter is p edge , which represents the probability that an edge between any fixed pair of vertices will exist in the graph after the edges are sampled.
For our experiment, we used a generator from the networkx package in Python to generate Erdős-Rényi Graphs [12].
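The degree-sequence generation for the three social-network models can be reproduced along the following lines. The power-law sampler below is a stand-in for the powerlaw-package call used in the paper (the degree cap `d_max` is our assumption), while the graphicality test and the two graph generators are standard networkx calls:

```python
import numpy as np
import networkx as nx

def powerlaw_degree_sequence(n, gamma, d_max, rng):
    # Sample integer degrees with Pr[d] proportional to d**(-gamma), then
    # resample until the sequence is graphical (Havel-Hakimi test).
    degrees = np.arange(1, d_max + 1)
    probs = degrees.astype(float) ** (-gamma)
    probs /= probs.sum()
    while True:
        seq = rng.choice(degrees, size=n, p=probs).tolist()
        if nx.is_graphical(seq, method='hh'):
            return seq

rng = np.random.default_rng(0)
pl_seq = powerlaw_degree_sequence(n=1000, gamma=2.5, d_max=100, rng=rng)
ba_seq = [d for _, d in nx.barabasi_albert_graph(1000, m=2).degree()]
er_seq = [d for _, d in nx.gnp_random_graph(1000, p=1/250).degree()]
```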
6.1.2 Results. As illustrated in the two columns of Figure 3, the results from both states display interesting phenomena and the distinct states have distinct features of interest.
State A. For each type of degree sequence, we see that e A displays non-monotonicity with respect to the parameter serving as the independent variable.
The results from the constant degree sequence are useful in interpreting this phenomenon. First, we see that there are parity effects: agents of type χ with odd degrees are more likely to sufficiently believe that the state is A. In the most extreme case, in fact, agents of type χ with degree one will always sufficiently believe that the state is A, because regardless of their neighbor's type, they will believe that the state is A with probability at least $\frac{1}{2} > \frac{2}{5}$ (the value of p in the prior). Second, controlling for the parity effects and the unique case of the degree-one agents, agents with higher degrees tend to be more likely to correctly determine the state. These observations can be applied to explain the non-monotonicity we observe in the results for the degree sequences drawn from social network models. For the Barabási-Albert and Erdős-Rényi graphs, as with constant degree sequence graphs, the modal degree value in the graph increases monotonically with the independent variable parameter, so we observe non-monotonicity in the lower values of the relevant parameter as degree-2 vertices become more common than degree-1 vertices. For the power-law degree sequences, the modal degree value is 1 for all relevant values of γ, but the frequency of degree-1 vertices increases monotonically with γ. The non-monotonicity in the lower values of the parameter corresponds to a small range in which degree-2 vertices become more common, and the effect of the increasing frequency of degree-1 vertices is not yet large enough to offset the effect of the greater frequency of degree-2 vertices.
State B. For each type of degree sequence, the shape of the curves for $e_B$ can be explained with roughly the same explanation used for $e_A$ above. The key difference is that agents of type χ with degree one are no longer an exception to the trend that higher-degree agents tend to be more accurate in determining the state (after controlling for parity effects). In particular, this explains why we see the non-monotonicity of $e_A$ for the degree sequences drawn from social network models, but not of $e_B$.
The more interesting phenomenon is that the range of $e_B$ is significantly larger than the range of $e_A$ for each graph. This phenomenon is relatively simple to explain: it is uncommon for low-degree agents of type χ (and impossible for such agents with degree one) to observe a context that convinces them that the state is more likely to be B, since their own type provides evidence against this. Their own evidence is weighted highly, because their total amount of evidence is small.
Even in light of this explanation, though, it is striking that we see $e_B$ range from including almost every agent of type χ to almost no agents of type χ (or just over half of the agents of type χ, in the case of the power-law degree sequences). Even excluding parameter values where degree-one agents proliferate (which produce the most extreme values of $e_B$), the range of $e_B$ still varies significantly more than the range of $e_A$. Consequently, when the state is B, we could expect that a significant fraction of the agents of type χ would feel secure enough to revolt in conditions that would be very unlikely to support a revolt in which they would feel comfortable participating. This phenomenon is of particular interest with regard to the formation and maintenance of unpopular social norms, like the culture of excessive alcohol consumption in American colleges and universities and the more general phenomenon of pluralistic ignorance [4][5][6].
Varying p in the Prior
In this experiment, we explore the relationship between e A and e B and the value of the parameter p in the prior. Recall that p represents the belief threshold required for agents of type χ .
6.2.1 Methods. For this experiment, we first fix the underlying parameters of the degree sequences that we varied in the previous experiment so that for each degree sequence, the average or expected degree of each vertex will be 4. Thus, each model is expected to produce degree sequences that have the same average degree, but with different variances. For the constant degree sequence this is accomplished with d = 4, for the degree sequences of Barabási-Albert graphs with m = 2, and for the degree sequences of Erdős-Rényi graphs with $p_{\text{edge}} = \frac{1}{250}$. As before, for all of the degree sequences, we set n = 1000. With those parameters, we generated graphs and ran Algorithm 1 as in the previous experiment (running Algorithm 1 once for the constant degree sequence, and generating 100 graphs and recording the average outputs of Algorithm 1 for the social network models) for each value of p (i.e. p between 0.05 and 0.95, incremented in steps of size 0.05).
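A quick sanity check that these parameter choices do equalize the average degree across the models (Barabási-Albert graphs have average degree close to 2m; Erdős-Rényi graphs have expected degree (n − 1) · p_edge):

```python
import networkx as nx

n = 1000
ba = nx.barabasi_albert_graph(n, m=2)
er = nx.gnp_random_graph(n, p=1/250)
print(sum(d for _, d in ba.degree()) / n)  # close to 4
print(sum(d for _, d in er.degree()) / n)  # close to (n - 1) / 250, i.e. ~4
```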
The results are illustrated in Figure 4.
6.2.2 Results. The behavior of $e_A$ and $e_B$ for each type of degree sequence in this experiment is easy to predict: they both decrease monotonically in steps as p increases. As a result, in this experiment, it is more interesting to compare across the different degree sequences. As an example, for degree sequences from Barabási-Albert graphs, the values of $e_A$ and $e_B$ are somewhat lower than the analogous values for degree sequences from Erdős-Rényi graphs for relatively high values of p (the values where we might expect p thresholds to be for real agents contemplating a potentially costly action like violating an existing social norm). This might indicate that in real social networks (which, in terms of degree sequences, among the three models in this experiment, are best approximated by Barabási-Albert graphs) we might expect agents to be more conservative about taking costly actions than they would be in random graphs. On the other hand, the opposite is true for p values that are closer to, but still above, $\frac{1}{2}$, so we might expect agents in real social networks to be relatively less conservative for actions with a moderate cost.
DISCUSSION
We proposed that the notion of common belief could be relaxed to a notion of factional belief in order to be more suited to social network settings and gave a natural definition of factional belief, drawing heavily from previous work on common belief. We then applied this definition theoretically and experimentally in a setting inspired by prior work about common knowledge and revolt games on networks to show how this definition moves beyond the limitations of previous work by being applicable in general graphs.
Open Questions and Future Work
The most clear direction for future work is to continue to apply factional belief in new settings and use it as a tool to understand strategic coordination and cooperation on networks. In particular, as mentioned in the introduction, the work of Stephen Morris provides many examples of applying common belief as a tool for gaining insight into a diverse range of settings. We believe that applying factional belief can yield similar results in social network settings, and that this paper represents an initial step in that process.
Additionally, though, there are some more subtle technical open questions regarding our definition of factional belief that are also worth exploring in future work that we state below.
(1) Consider our definition of factional belief from the perspective of an infinite hierarchy of reasoning, akin to the initial definition for common knowledge that we provided (2.3). With regard to the µ fraction of agents at each step in the hierarchy, our definition does not require that this be the same µ fraction of agents at each step. However, it is not clear whether or not this should always be the case. Is it possible to describe an event that is common (p, µ)-belief, but for which the infinite hierarchy refers to a different µ fraction of agents at some step? If so, what are the consequences of this for strategic coordination and cooperation?

(2) It is not too difficult to modify the setting described in Section 4 to create a model where the underlying event that supports revolt does not necessarily reduce neatly to a single event that is common (p, µ)-belief. For example, if there are multiple types of conditionally-revolting agents with different p and µ thresholds, the event supporting revolt is more like "a µ fraction of agents believe with sufficient probability that their thresholds are satisfied." This event encompasses common p-beliefs among certain agents of the same type regarding their µ thresholds, but not precisely common (p, µ)-beliefs. Is there a more general definition of factional belief that allows the existence of different thresholds for different types of agents to still be encompassed in a single event that is a factional belief among all of the agents who feel satisfied enough to revolt?

We believe that answering these questions could yield further insight into factional belief that could inform its application in other settings.
ACKNOWLEDGMENTS
This work is supported by the National Science Foundation under Career Award #1452915.
For the base case, it follows from the definition of common (p, µ)-belief that for each j ∈ J, $E \subseteq B_j^p(F)$, which implies that $E \subseteq F_\mu^1$. Now, suppose $E \subseteq F_\mu^n$. Then, using the fact that E is an evident (p, µ)-belief, there must exist some J′ ⊆ I with |J′| ≥ µ|I| such that for each j ∈ J′, $E \subseteq B_j^p(E)$. Applying (7) for each j ∈ J′, we have that $E \subseteq B_j^p(F_\mu^n)$ for each j ∈ J′, which implies that $E \subseteq F_\mu^{n+1}$, completing the induction. □

B HARDNESS OF REVOLT

Theorem. Revolt is NP-hard.

Proof. We will reduce a classic NP-complete problem, Clique, to Revolt.
Given an instance of Clique (G, k), we define a prior P with threshold parameters p = 1 and $\mu = \frac{k}{n}$, where n is the number of vertices of G. Then, we consider an instance of Revolt with input $(G, P, \mu^* = \frac{k}{n}, q^* = 0.99^{k^2})$. (Clearly, defining this instance can be done in polynomial time with respect to n.) Now, suppose that G contains a clique of size k. Then, with probability at least $q^*$, all of the agents that form the clique will be of type χ. When this is the case, they will each observe the types of their neighbors, and will believe with probability 1 that all k agents that form the clique (including themselves) feel secure enough to revolt. As a result, a revolt of size at least $\frac{k}{n} = \mu^*$ is supported with probability at least $q^*$ in G under the prior P.
On the other hand, suppose that a revolt of size at least µ * is supported with probability at least q * in G under the prior P. That is, suppose the event that at least k agents feel secure enough to revolt occurs (call this event E) with probability at least q * > 0.
For E to occur, at least k agents of type χ must believe with probability 1 that at least k − 1 other agents (of type χ ) feel secure enough to revolt. This is only possible, though, when the existence of at least k agents of type χ is common knowledge among a group of at least k χ -type agents.
As a result of the limited information available to each agent, common knowledge of the existence of k agents of type χ can only occur within a clique of k agents. This follows from the fact that in order to be certain of another agent's reasoning about some observed information (here, the type of some agent or agents), an agent i must observe that the other agent j observes the same information and also observe that j observes that i observes the information, and so on.
In order for E to occur, this kind of mutual reasoning must happen among k χ-type agents, each of whom would need to be a neighbor of the other k − 1 to facilitate the mutual observation of each others' types. That is, they would need to form a clique of k agents. Since E occurs with non-zero probability, there must exist a clique of size k in G. □ We note here that in addition to establishing NP-hardness, this proof demonstrates that in order to achieve the exactness required to solve Revolt, an algorithm must use more information than just agent degrees, which, as we show, are sufficient for the relaxation Promise Revolt.
We also conjecture that Revolt is #P-hard.
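For concreteness, here is a minimal sketch of the instance construction used in the reduction above; the thresholds p = 1 and µ = k/n follow the reduction, while the single illustrative state and its type distribution (χ-probability 0.99) are purely our assumptions, chosen so that any fixed set of k vertices is all of type χ with probability 0.99^k ≥ 0.99^{k²} = q*:

```python
def clique_to_revolt_instance(G, k):
    n = G.number_of_nodes()
    prior = {'p': 1.0, 'mu': k / n,                  # thresholds from the reduction
             'S': {'A': 1.0},                        # one illustrative state (assumption)
             'D': {'A': {'alpha': 0.0, 'chi': 0.99, 'nu': 0.01}}}
    return G, prior, k / n, 0.99 ** (k * k)          # (G, P, mu*, q*)
```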
C A CHERNOFF-HOEFFDING BOUND FOR SETTINGS WITH LOCAL DEPENDENCE
For a standard Chernoff-Hoeffding bound, we consider taking the sum of independent random variables. However, similar bounds exist for sums of dependent random variables when the dependence between the variables is constrained. Borrowing from a book on concentration of measure by Dubhashi and Panconesi [9], we consider the case where dependencies between the variables are encoded in a dependency graph Γ. Formally, let Γ be a graph on n vertices such that for each vertex i ∈ [n], the following property holds: if i is not adjacent to a distinct vertex j ∈ [n] in Γ, then $X_i$ is independent from $X_j$.
Let $\chi^*(\Gamma)$ denote the fractional chromatic number of Γ, which represents the smallest possible ratio $\frac{a}{b}$ of positive integers a, b such that the vertices of Γ can each be assigned a set of b colors from a total set of a colors under the constraint that adjacent vertices must be assigned disjoint sets of colors. (A formal definition of the fractional chromatic number can be found in Dubhashi and Panconesi [9].)

Theorem C.1 (Theorem 3.2 in Dubhashi and Panconesi [9]). Suppose $X = \sum_{i=1}^{n} X_i$ where $0 \leq X_i \leq 1$ for each i. Then, for t > 0,

$$\Pr[X \geq \mathbb{E}[X] + t] \leq \exp\left(-\frac{2t^2}{\chi^*(\Gamma)\, n}\right).$$
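As a small numerical illustration of Theorem C.1 (using the form of the bound reconstructed above), the following simulates locally dependent indicator variables whose dependency graph is a cycle, so that χ*(Γ) ≤ 3, and checks the empirical tail against exp(−2t²/(χ*(Γ) · n)):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 300, 20_000, 25.0

# X_i = U_i AND U_{(i+1) mod n}: each indicator depends only on two adjacent
# coins, so the dependency graph is a cycle with maximum degree 2 (chi* <= 3).
U = rng.random((trials, n)) < 0.5
X = np.logical_and(U, np.roll(U, -1, axis=1)).sum(axis=1)

mean = 0.25 * n
empirical_tail = float((X >= mean + t).mean())
bound = float(np.exp(-2 * t ** 2 / (3 * n)))
print(empirical_tail, bound)  # the empirical tail should sit below the bound
```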
"year": 2020,
"sha1": "64e170bc34493ca649ba38247e13dbc2c008cbf0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "64e170bc34493ca649ba38247e13dbc2c008cbf0",
"s2fieldsofstudy": [
"Economics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
INTRODUCTION
Workplace health and wellbeing is a major public health issue for employers. Wellbeing health initiatives are known to be cost-effective, especially when the programs are targeted and matched to the health problems of the specific population. The aim of this paper is to gather information about the health and wellbeing needs and resources of employees at one British organization.
SUBJECTS AND METHODS
A cross-sectional survey was carried out to explore the health and wellbeing needs and resources of employees at one British organization. All employees were invited to participate in the survey, and, therefore, sampling was not necessary.
RESULTS
838 questionnaires were viable and included in the analysis. Employees reported "feeling happier at work" was the most important factor promoting their health and wellbeing. Physical tasks, such as "moving and handling" were reported to affect employee health and wellbeing the most. The "provision of physiotherapy" was the most useful resource at work. In all, 75% felt that maintaining a healthy lifestyle in the workplace is achievable.
CONCLUSIONS
More needs to be done by organizations and occupational health to improve the working conditions and organizational culture so that employees feel that they can function at their optimal and not perceive the workplace as a contributor to ill-health.
Introduction
Workplace health and wellbeing is becoming a major public health issue for employers and at all levels of government initiatives. Multidisciplinary strategies to improve health and wellbeing at work have been acknowledged to be very effective in addressing both individual risk and the broader organizational and environmental issues 1) . The workplace is the ideal site for health promotion and a wellbeing initiative as it is a specifically defined community with the benefits of social support and the associated economic and organizational productivity 2) .
Well planned, comprehensive workplace health and wellbeing initiatives have been shown to be costeffective, especially when the initiatives are targeted and matched to the health needs of the specific population 3 ) . Furthermore, studies have repeatedly demonstrated that well-resourced workplace health and wellbeing initiatives not only lower healthcare and insurance costs but also decrease absenteeism and improve performance and productivity 3,4) . The workplace is often an under-utilized setting for promoting employees' health and wellbeing. Most employees spend more than a third of their waking hours at work, and therefore large numbers of employees can be reached and encouraged to acquire the knowledge and skills to live a healthy lifestyle.
Despite health and wellbeing programs being initiated in organizations, the views of employees on what specific programs and resources are relevant to them are rarely evaluated. In November 2009, the Steve Boorman Report was launched in the United Kingdom to promote the health and wellbeing of employees working in the National Health Service 5) . One of the report's recommendations was for organizations to review the health and wellbeing needs and resources within their organization. As a result, this survey was undertaken in order to gather information about the health and wellbeing needs and resources of employees at one British organization.
Subjects and Methods
A cross-sectional survey was carried out to explore the health and wellbeing needs and resources of employees at one British organization. All employees were invited to participate in the survey, and, therefore, sampling was not necessary. This survey was within the scope of good clinical practice, and, therefore, ethical approval was not required.
A questionnaire was designed for the purpose of the survey and distributed by email in April 2014. The design of the questionnaire was based on health and wellbeing literature which ensured face validity [1][2][3][4] . In order to ensure content validity, members of the health promotion committee were consulted to scrutinize a draft copy and provide feedback. Following initial feedback, any changes made to the questionnaire were discussed with members of the health promotion committee to ensure its accuracy.
The questionnaire included both open and closed questions to obtain information in several domains. This included demographic information, factors improving their health and wellbeing, factors affecting their health and wellbeing, and whether or not maintaining a healthy lifestyle in the workplace was achievable.
The information from the questionnaires was coded on spreadsheets, and descriptive analysis was carried out.
Results
Of the 1356 questionnaires that were distributed, 847 were returned. Nine questionnaires were incomplete and were consequently excluded. In total, 838 questionnaires were viable and included in the analysis. The effective response rate was, therefore, 61.8%.
Employees were asked about the factors that were important in improving their health and wellbeing at work. In total, 58% of employees (n = 485) reported "feeling happier at work" was the most important factor. Other factors included "wanting to eat a healthier diet" (n = 411, 49%), "increasing levels of physical activity" (n = 387, 46%), and "wanting to be a healthier weight" (n = 331, 40%). A large number of employees (n = 477, 57%) did not feel a "reduction in alcohol intake" was important in improving their health and wellbeing at work.
Employees were asked about the factors affecting their health and wellbeing at work. In total, 819 (98%) employ-ees answered this question. The most common factors were physical tasks, such as "moving and handling" (n = 434, 53% ) , " work pressures, such as unrealistic deadlines" (n = 312, 38%), and "poor relationship with colleagues" (n = 259, 32%). The issue that least affected employees' health and wellbeing at work was " inflexible working patterns" (n = 204, 25%).
Employees were asked about the types of resources that could be useful to support their health and wellbeing at work. In total, 568 (68%) employees indicated that the " provision of physiotherapy " was the most useful resource at work. Other types of work assistance employees felt were useful included "better access to healthy, affordable food" (n = 551, 66%) and activities, such as "subsidized gym membership/cycling scheme" (n = 536, 64%). The least useful work resources reported by employees were "smoking cessation" (n = 153, 18%), "advice and support on alcohol intake" (n = 142, 17%), "literature concerning health topics" (n = 26, 3%), and "health promotion events" (n = 25, 2.9%).
All employees were asked to indicate whether they agreed or disagreed with the following statement: "I feel maintaining a healthy lifestyle in the workplace is achievable". A total of 836 (99.7%) employees answered this question, of which 623 (75%) answered "Yes" and 213 (25%) answered "No." Of the 213 employees that answered "No", 145 (68%) gave personal reasons to support why they did not believe a healthy lifestyle was achievable at work. Table 1 highlights some of these personal reasons.
Discussion
This survey shows that employees had clear expectations about the factors that improved and hindered their health and wellbeing at work. In most organizations, health and wellbeing initiatives are not limited to occupational health professionals but also to physiotherapists and psychologists, providing a wide range of resources 6,7) . This survey had several strengths, including identifying and documenting those factors that were affecting the health and wellbeing of employees. This provides important information to the organization and occupational health so that a targeted approach can be implemented. Many organizations have limited resources, and a targeted approach ensures that available resources are used effectively and future investment is allocated to appropriate resources. In addition, the evaluation of the health and wellbeing needs is likely to demonstrate to employees that the organization is taking their needs seriously.
The "provision of physiotherapy" was identified as the most useful resource at work to support employee health and wellbeing. Given the high level of moving and handling injuries affecting employee health and wellbeing in this organization, it is not surprising that employees felt that rapid access to physiotherapy services was a valuable resource. Studies have shown that rapid access to physiotherapy is both clinically and cost effective in dealing with employees presenting with moving and handling injuries 6,7) .
A quarter of employees did not believe that a healthy lifestyle is achievable at work. This highlights that more needs to be done by the organization and occupational health to improve the working conditions and organizational culture so that employees feel that they can function optimally at work and not perceive the workplace as a contributor to their ill-health. In addition, occupational health can assist with advising both employees and line managers about how to tackle some of the personal issues affecting employee health and wellbeing at work as outlined in Table 1. This could include referral to counseling services, temporary or long-term adjustments to working hours, or risk assessments in areas where the working environment is unsafe.
In addition, future health and wellbeing initiatives should be developed taking into account the factors and resources that employees felt would be most beneficial. This approach would ensure that these initiatives appeal to a wide range of employees and possibly increase employee engagement and uptake.
As with all case studies, the findings reported in this paper are specific to one British organization, and care should be taken when generalizing the findings to other organizations. However, the details provided in this paper will hopefully enable practitioners to draw conclusions about the applicability of these findings to their own organization or country. Other limitations included some missing data in responses and misinterpretation of a few questions.
Conclusion
This case study has demonstrated three key points. Firstly, it has evaluated the health and wellbeing needs and resources of employees at one British organization. Secondly, it has highlighted the health and wellbeing resources that are most valued by employees at this organization. Finally, it has made recommendations for tailoring health and wellbeing initiatives to the needs of employees in order to increase engagement and uptake. In conclusion, it is up to individual organizations or countries to decide if this approach is suitable within the context of their policies, procedures, and legislation in order to inform any future investment in health and wellbeing initiatives for the benefit of their employees.
Conflicts of interest:
None declared. | 2018-04-03T00:30:50.053Z | 2016-11-16T00:00:00.000 | {
"year": 2017,
"sha1": "f1d382ac06a75619c836028d42aed54fa8a90c99",
"oa_license": "CCBYNCSA",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1539/joh.16-0197-BR",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1d382ac06a75619c836028d42aed54fa8a90c99",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
89614044 | pes2o/s2orc | v3-fos-license | Solving Arbitrage Problem on the Financial Market Under the Mixed Fractional Brownian Motion With Hurst Parameter H ∈ ]1/2, 3/4[
The classic Black-Scholes-Merton model was introduced in 1973. Option pricing problems have been one of the hottest issues for researchers and practitioners from academia and industry. It is well known that the basis of option pricing problems is how to describe the behavior of the underlying asset's price. In (Black, F. & Scholes, M., 1973), the underlying asset's price is assumed to follow the geometric Brownian motion. However, extensive empirical studies show that the logarithmic returns of financial assets usually exhibit properties of self-similarity and long-range dependence in both auto-correlations and cross-correlations. Since the fractional Brownian motion has two important properties (self-similarity and long-range dependence), it also has the ability to capture the behavior of the underlying asset price. There are many scholars in the study of option pricing based on a fractional Brownian motion, such as (Duncan, T.E., Hu, Y., & Duncan, B., 2000); (Necula, C., 2004); (Cheridito, P., 2003). However, the fractional Brownian motion is neither a Markov process nor a semi-martingale, and the usual stochastic calculus cannot be used to analyze it, thereby making the fractional Brownian motion unsuitable for describing the behavior of stock prices. To eliminate arbitrage opportunities and to reflect the long memory of financial time series, many scholars have proposed the use of the mixed fractional Brownian motion. The mixed fractional Brownian motion is a family of Gaussian processes, comprised of a linear combination of the Brownian motion and the fractional Brownian motion. It is defined on the probability space (Ω, F, P) for any t ∈ [0, T] by X^{H,a,b}_t = aB_t + bB^H_t.
Introduction
The classic Black-Scholes-Merton model was introduced in 1973. Option pricing problems have been one of the hottest issues for researchers and practitioners from academia and industry. It is well known that the basis of option pricing problems is how to describe the behavior of the underlying asset's price. In (Black, F. & Scholes, M., 1973), the underlying asset's price is assumed to follow the geometric Brownian motion. However, extensive empirical studies show that the logarithmic returns of financial assets usually exhibit properties of self-similarity and long-range dependence in both auto-correlations and cross-correlations. Since the fractional Brownian motion has these two important properties, it also has the ability to capture the behavior of the underlying asset price. There are many scholars in the study of option pricing based on a fractional Brownian motion, such as (Duncan, T.E., Hu, Y., & Duncan, B., 2000); (Necula, C., 2004); (Cheridito, P., 2003). However, the fractional Brownian motion is neither a Markov process nor a semi-martingale, and the usual stochastic calculus cannot be used to analyze it, making it unsuitable for describing the behavior of stock prices. To eliminate arbitrage opportunities and to reflect the long memory of financial time series, many scholars have proposed the use of the mixed fractional Brownian motion, a family of Gaussian processes comprised of a linear combination of the Brownian motion and the fractional Brownian motion. It is defined on the probability space (Ω, F, P) for any t ∈ [0, T] by

X^{H,a,b}_t = aB_t + bB^H_t,

such that (a, b) ≠ (0, 0), where B = (B_t, t ≥ 0) is a standard Brownian motion and B^H = (B^H_t, t ≥ 0) is a fractional Brownian motion with the Hurst parameter H ∈ (0, 1). It is an important class of long memory processes when the Hurst parameter H ∈ ]1/2, 1[. (Cheridito, P., 2001) has proved that, for H ∈ ]3/4, 1[, the mixed model with the dependent Brownian motion and fractional Brownian motion is equivalent to the one with the Brownian motion and is a semi-martingale. Hence, it is arbitrage-free. For H ∈ ]1/2, 1[, (Mishura, Y.S., 2008) proved that the model is arbitrage-free. However, the arbitrage problem still exists for H ∈ ]1/2, 3/4[. The process X^{H,a,b}_t is not a semi-martingale except for H = 1/2. The issue with this is that the extensively used Itô calculus, developed from semi-martingales to solve stochastic integrals, does not apply here. Similarly, the non-semi-martingale property of the mixed fractional Brownian motion indicates that arbitrage opportunities are possible. The stochastic differential equation of the stock price S_t, assuming X^{H,a,b}_t is defined on a probability space (Ω, F, P), is, for all t ∈ [0, T],

dS_t = µS_t dt + σS_t dX^{H,a,b}_t,   (2)

where µ, σ, a and b are constants and the Hurst parameter H ∈ ]1/2, 3/4[. The analytical solution based on the Wick-product integration approach is not a semi-martingale and presents arbitrage opportunities on the financial market.
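To make the definition concrete, one can simulate X^{H,a,b} on a time grid from its covariance function (stated in Definition 5 below). The following sketch is illustrative only and is not part of the paper; NumPy, the Cholesky approach and all parameter values are our assumptions:

```python
# Illustrative simulation of the mixed fractional Brownian motion
# X_t = a*B_t + b*B^H_t via Cholesky factorisation of its covariance
# R(t,s) = a^2*min(t,s) + (b^2/2)*(t^{2H} + s^{2H} - |t-s|^{2H}).
import numpy as np

def mixed_fbm_path(n=200, T=1.0, H=0.6, a=1.0, b=1.0, seed=0):
    t = np.linspace(T / n, T, n)              # time grid, excluding t = 0
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = (a**2 * np.minimum(tt, ss)
           + 0.5 * b**2 * (tt**(2 * H) + ss**(2 * H) - np.abs(tt - ss)**(2 * H)))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z                           # one sample path of X^{H,a,b}

t, x = mixed_fbm_path()
```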
In this paper, to capture the long-range property and to exclude arbitrage in the environment of the mixed fractional Brownian motion, we use the Liouville form of the fractional Brownian motion on the space L²(Ω, F, P). In order to do this, we define, for all λ > 0, the process X^{H,a,b,λ} on the same probability space, for any t ∈ [0, T], by

X^{H,a,b,λ}_t = aB_t + bB^{H,λ}_t,

and the stochastic differential equation given by (2) becomes

dS^λ_t = µS^λ_t dt + σS^λ_t dX^{H,a,b,λ}_t.   (4)

Hence, we use the idea of (Thao, T. H, 2003) to construct the proof of existence and uniqueness of the solution of (4).
We show that X^{H,a,b,λ} converges to X^{H,a,b}. Our motivation is that the process B^{H,λ} can be seen as a semi-martingale for all λ > 0, and therefore the process X^{H,a,b,λ}_t is a semi-martingale when H ∈ ]1/2, 3/4[. The rest of the paper is organized as follows. In section 2, we briefly introduce the definition and main properties related to the mixed fractional Brownian motion. In section 3, we study an approximation of the process X^{H,a,b}. Section 4 shows the existence and uniqueness of the solution of (2). In section 5, we study the modification of the mixed fractional model. In section 6, we study the convergence of the solution of (4), while section 7 presents the solution of the mixed fractional equation and section 8 presents an application on the financial market. The conclusion is given in the last section.
Preliminaries
In this sub-section, we shall briefly review the definition and some main properties of the mixed fractional Brownian motion. These properties can help us to prove the existence and uniqueness theorem for the solution of (2). These results can be found in (Cheridito, P, 2003).
Definition 1 (Local martingale)
The process M_t is a local martingale with respect to the filtration F_t if there exists a non-decreasing sequence of stopping times τ_k → ∞ a.s. such that the processes M^{(τ_k)}_t = M_{t∧τ_k} − M_0 are martingales with respect to the same filtration.
Definition 2 (Bounded variation function) A continuous function f : [0, T] → R is of bounded variation if its total variation over [0, T] is finite.
A continuous process X is a semi-martingale if it admits a decomposition X_t = X_0 + M_t + A_t, where M is a local continuous martingale and A is a process of locally bounded variation.
Proposition 1 For all real γ, the process exp(γB_t − γ²t/2) is a martingale, where B_t is a standard Brownian motion.
Definition 5 The mixed fractional Brownian motion X^{H,a,b} is a continuous centered Gaussian process with variance a²t + b²t^{2H} and covariance function defined by

E(X^{H,a,b}_t X^{H,a,b}_s) = a² min(t, s) + (b²/2)(t^{2H} + s^{2H} − |t − s|^{2H}),

where E(·) denotes the expectation with respect to the probability measure P.
Proposition 2 1. The increments of X^{H,a,b}_t are stationary, and these increments are correlated if and only if H ≠ 1/2.
2. The process X^{H,a,b}_t is also mixed self-similar.
3. The process X^{H,a,b} is neither a Markov process nor a semi-martingale when b ≠ 0, unless H = 1/2.
4. The process X^{H,a,b} exhibits long-range dependence if and only if H > 1/2.
5. For all T > 0, with probability one, X^{H,a,b} has a version whose sample paths are Hölder continuous of order γ ≤ H on the interval [0, T]. Every sample path of X^{H,a,b} is almost surely nowhere differentiable.
Basic Properties
We recall the following lemmas, which will be used in the proof of the existence and uniqueness of the solution of (2).
Lemma 3 (Doob's L^p inequality: (Oksendal, B, 2003)) If M_t is a martingale such that t → M_t(ω) is almost surely continuous, then for all p ≥ 1, T ≥ 0 and for all ξ > 0,

P( sup_{0≤t≤T} |M_t| ≥ ξ ) ≤ E(|M_T|^p) / ξ^p.

Lemma 4 (Borel-Cantelli's lemma: (Chandra, T. K, 2012)) Let (Ω, F, P) be a probability space and let {A_k}_{k≥1} be a sequence of events in F such that Σ_{k=1}^{∞} P(A_k) < ∞. Then P(lim sup_{k→∞} A_k) = 0.

Lemma 5 (Fatou's lemma: (Knapp, A. W, 2005)) Let f_n ≥ 0 be a sequence of measurable functions and let S be a measurable set. Then

∫_S lim inf_{n→∞} f_n ≤ lim inf_{n→∞} ∫_S f_n.
Approximation of Mixed Fractional Model
In this section, we approximate the process X^{H,a,b}_t by a semi-martingale when 1/2 < H < 3/4. To do this, we begin by finding the asymptotic solution of the model defined by (4). For all λ > 0, we have X^{H,a,b,λ}_t = aB_t + bB^{H,λ}_t. According to the Liouville form of the fractional Brownian motion, we have B^H_t = ∫_0^t K(t − s) dB_s, where K(u) = u^α, with α = H − 1/2, is the fractional Lévy kernel.
By differentiating the approximation B^{H,λ}_t = ∫_0^t (t − s + λ)^α dB_s, we obtain dB^{H,λ}_t = αφ^λ_t dt + λ^α dB_t; in other words, according to Fubini's theorem, B^{H,λ}_t = α ∫_0^t φ^λ_s ds + λ^α B_t, where φ^λ_t = ∫_0^t (t − s + λ)^{α−1} dB_s. From this we deduce the following results: Lemma 6 The process B^{H,λ}_t is a continuous semi-martingale. Lemma 7 The process X^{H,a,b,λ}_t is a continuous semi-martingale.
Proof 2 We have X^{H,a,b,λ}_t = aB_t + bB^{H,λ}_t and, from Lemma 6, B^{H,λ}_t is a semi-martingale.
The process X^{H,a,b,λ}_t is therefore a linear combination of continuous semi-martingales, hence X^{H,a,b,λ}_t is a semi-martingale.
We recall the following estimation (Lemma 8). Proof 3 Applying the mean value theorem to the function u → u^α, we obtain (20). According to the Itô isometry lemma, we have (21). By combining (20) and (21), we obtain the estimate of Lemma 8, where C(α) depends only on α and ∥·∥ is the standard norm in L²(Ω).
According to Lemma 8, we deduce the following result: Lemma 9 The process X^{H,a,b,λ}_t converges to X^{H,a,b}_t in L²(Ω), uniformly with respect to t ∈ [0, T], as λ → 0.
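Numerically, the convergence in Lemma 9 can be observed by discretising B^{H,λ}_t = ∫_0^t (t − s + λ)^α dB_s as a Riemann sum over shared Brownian increments and letting λ shrink. The sketch below is ours and only indicative; NumPy, the grid size and the λ values are assumptions:

```python
# Illustrative discretisation of B^{H,lam}_t = int_0^t (t - s + lam)^alpha dB_s,
# alpha = H - 1/2, as a Riemann sum, to observe convergence as lam -> 0.
import numpy as np

def liouville_approx(dB, dt, H, lam):
    alpha = H - 0.5
    n = len(dB)
    t = dt * np.arange(1, n + 1)
    s = dt * np.arange(n)                        # left endpoints s_j
    return np.array([np.sum((t[i] - s[:i + 1] + lam) ** alpha * dB[:i + 1])
                     for i in range(n)])

rng = np.random.default_rng(1)
n, T, H = 500, 1.0, 0.6
dt = T / n
dB = rng.standard_normal(n) * np.sqrt(dt)        # shared Brownian increments

path_a = liouville_approx(dB, dt, H, lam=1e-2)
path_b = liouville_approx(dB, dt, H, lam=1e-4)
print(np.max(np.abs(path_a - path_b)))           # gap shrinks as lam -> 0
```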
Theorem of Existence and Uniqueness
In this section, we use the approximation approach, given in terms of a practical approach to the theory by (Thao, T. H, 2003) and (Intarasit, A., & Sattayatham, P, 2010). We study the Liouville form of the fractional Brownian motion and approximate the process B^H_t in the space L²(Ω, F, P) by a semi-martingale. Thus, we use the idea of (Alos, E., Mazet, O., & Nualart, D, 2000) to introduce the semi-martingale

B^{H,λ}_t = ∫_0^t (t − s + λ)^α dB_s,

where α = H − 1/2 and B_t is a standard Brownian motion. We have shown in section 3 that B^{H,λ}_t converges uniformly, with respect to t ∈ [0, T], to B^H_t, which further leads to the convergence of X^{H,a,b,λ}_t to X^{H,a,b}_t. Now, we construct the proof of the existence and uniqueness of the solution of (2). We write (2) as the following stochastic differential equation:

dX_t = µ(t, X_t) dt + σ(t, X_t) dX^{H,a,b}_t,   (29)

where µ(t, x) and σ(t, x) are two continuous functions and X_0 is a random variable such that E(X_0²) < ∞.
Let L²(Ω, F, P) be the space of square-integrable random variables on the probability space (Ω, F, P), where F is the σ-algebra of subsets of Ω and P is a probability measure.
Let {X_t}_{t∈[0,∞[} be a stochastic process defined on (Ω, F, P) such that ω → X_t(ω) is a continuous function representing the trajectories of the process. We can write X_t(ω) = X(t, ω) and define a function T × Ω → R^n by (t, ω) → X(t, ω).
To solve (29), we need the following assumptions H.1 and H.2 on the coefficients. Theorem 1 Let T > 0. Under assumptions H.1 and H.2, the mixed geometric fractional Brownian motion defined by (29) has a unique solution for t ∈ [0, T].
Proof 5 Let X and Y be two solutions of (29), and suppose that (30) holds. By putting (26) into (35), we obtain the following estimation. By Lemma 2 and using (4), we show in the last expression that E[(∫_0^{t∧s_n} β_3 φ^λ_s ds)²] = 0.
Since β_3 is bounded, for every constant M we have, from (28), the corresponding inequality. Finally, defining the auxiliary function, for all t ∈ [0, T] we may apply Lemma 1 (with C = 0 and u(s) as above). Since t → X_t and t → Y_t are continuous, this implies the result for t ∈ [0, s_n]; letting n → ∞, we obtain the uniqueness of the solution on [0, T]. Now we prove the existence of the solution of (29). Consider the stochastic differential equation defined by (29) with X^λ_0 = X_0; the corresponding approximation equation of (29) becomes (39). By using (26), we can write (39) as (40); by replacing (41) into (40), we obtain (42), which can be written as (43). Equations (42) and (43) represent stochastic differential equations driven by B_s, where b(s, X^λ_s) and σ(s, X^λ_s) satisfy the hypotheses H.1 and H.2.
If the solution of (43) exists, this implies that the solution of (39) also exists.
Hence, the solution of (29) exists. To show the existence of the solution of (43), we follow the approach of (Oksendal, B, 2003).
We define Z^0_t = Z_0 and Z^p_t = Z^p_t(ω) by the iteration scheme. By a similar calculation as in the case of uniqueness, we obtain (45). Let us apply the principle of induction to (45). For p > 1 and t ≤ T, applying Lemma 2 and the hypothesis H.2, we obtain (48), where R_1 is a constant depending on T, N and the initial data. Now, by induction on p ≥ 0, for all t ≤ T, we have (49). Let us prove the above inequality (49) by mathematical induction.
1. For p = 0, the statement reduces to the base case and is obviously true according to the inequality (48).
2. Assume the statement is true for p = k.
We will prove that the statement must be true for p = k + 1. The left-hand side of (52) can be written from (45); from (51), this yields the required bound. When we evaluate the right-hand side of (49) for p = k + 1, we obtain the same value as the right-hand side of (52). This proves the inductive step. Therefore, by the principle of mathematical induction, the given statement is true for every positive integer p.
We use Lemma 4 as follows: it follows that, for almost every ω, there exists p_0 such that the corresponding bound holds. Hence, the sequence (Z^n_t(ω)) so defined converges for almost all ω.
If we set X^λ_t as this limit, then X^λ_t is continuous in t for almost all ω, since Z^n_t(ω) has the same property for all n. As every Cauchy sequence is convergent, by using (61) we have, for m > n ≥ 0, the Cauchy estimate, and (67) shows that the sequence {Z^n_t} converges in L²(P) towards Z_t. Since the subsequence (Z^n_t) converges to Z_t for all ω, we have Z_t = X^λ_t almost surely.
Now, we show that X^λ_t satisfies (29) and (42). For all n, from (76) we obtain, for all t ∈ [0, T], that Z^{n+1}_t → X^λ_t as n → ∞, uniformly for almost all ω. From (72), by application of Lemma 5 and then of Lemma 2, this implies that X^λ_t − Z^n_t → 0.
By taking the limit in (76) as n → ∞, we conclude that X^λ_t satisfies (42), which completes the proof.
Modification of the Mixed Fractional Model
In this section, we use Theorem 8 to study the modified mixed model. With this modified model, we can use the stochastic Itô calculus when we consider the behaviour of the stock price on the financial market under the long memory property. For each λ > 0, we associate to (2) the following asymptotic model (79). From (16), we deduce that (79) becomes (80). Letting G^λ_t be a process with absolutely continuous trajectories, (80) becomes (81). We have (82) with (83). Equation (83) is a stochastic Itô differential equation; solving this equation, we obtain the following result, which represents the solution of (79).
Theorem 2 The solution of the mixed modified fractional stochastic equation defined by (79) is given by (84), where S^λ_0 is the initial condition. Proof 6 From (81) and (82), by applying Itô's lemma to the function u → log(u) with u = S^λ_t > 0, where S^λ_t is the solution of (79), we obtain the stated formula. Figure 1 shows that increasing the Hurst parameter affects the future price of the asset, so that, as the Hurst parameter increases, the difference between the expected lowest price and the highest price increases and the paths are almost convergent. The simulation of the mixed modified fractional Brownian motion model is given by the following algorithm. Algorithm. MMFBM model simulation process.
1. T is the maturity of the option. 2. For j = 1 to the number of simulations n.
By applying this algorithm and by using the parameters of the option given in Table 1, we obtain the following results for different values of the Hurst parameter H. In Tables 2, 3 and 4, we observe that when the value of the Hurst parameter increases and the number of paths increases, the values of the stock price under MFBM and MMFBM are almost the same on the financial market. We can say that through this process the market is stable and balanced. This means that the process S^λ_t is well-defined as a semi-martingale when H ∈ ]1/2, 3/4[, and we can conclude that the arbitrage problem is almost dismissed.
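An Euler-type discretisation of the modified dynamics gives a simple way to reproduce such path simulations. The sketch below is our reading of the scheme, not the authors' code, and all parameter values are invented; it uses dX^{H,a,b,λ}_t = bαφ^λ_t dt + (a + bλ^α) dB_t with φ^λ_t = ∫_0^t (t − s + λ)^{α−1} dB_s, as derived above:

```python
# Illustrative Euler scheme for dS_t = mu*S_t dt + sigma*S_t dX^{H,a,b,lam}_t,
# with dX = b*alpha*phi_t dt + (a + b*lam^alpha) dB_t and
# phi_t = int_0^t (t - s + lam)^(alpha - 1) dB_s discretised as a sum.
import numpy as np

def mmfbm_price_path(S0, mu, sigma, H, a, b, lam, T=1.0, n=500, seed=2):
    alpha = H - 0.5
    dt = T / n
    dB = np.random.default_rng(seed).standard_normal(n) * np.sqrt(dt)
    s_grid = dt * np.arange(n)
    S = np.empty(n + 1)
    S[0] = S0
    for i in range(n):
        t = (i + 1) * dt
        phi = np.sum((t - s_grid[:i + 1] + lam) ** (alpha - 1) * dB[:i + 1])
        dX = b * alpha * phi * dt + (a + b * lam**alpha) * dB[i]
        S[i + 1] = S[i] * (1.0 + mu * dt + sigma * dX)
    return S

path = mmfbm_price_path(S0=100, mu=0.05, sigma=0.2, H=0.6, a=0.5, b=0.5, lam=0.01)
```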
Conclusion
This study provided a process to obtain the price of the underlying asset on a financial market having long-term memory, which is not always the case in the traditional model of (Black, F. & Scholes, M, 1973). After showing the existence and uniqueness of the solution of (2) describing the mixed fractional model, we considered, for each λ > 0, the process X^{H,a,b,λ}_t, which represents a modification of the mixed fractional process (29), and we have shown that the process (79) has a unique solution that converges uniformly, in the space L²(Ω), to the solution of (29).
Figure 1. Simulated asset paths of the mixed modified fractional Brownian motion model
Table 1. Parameters of the MFBM model.
Simulation of the Stock Price Under the Modified Mixed Fractional Model
In this section, we note that BM denotes the results of the Black-Scholes equation driven by the classical Brownian motion; FBM denotes the results of the Black-Scholes equation driven by the fractional Brownian motion; MFBM denotes the results of the Black-Scholes equation driven by the mixed fractional Brownian motion; and MMFBM denotes the results of the Black-Scholes equation driven by the mixed modified fractional Brownian motion. The asset price has been estimated under the mixed modified fractional Brownian motion model, where the parameters of the option model are given in Table 1, and in Figure 1 below we observe 7 simulated paths for the asset price with different Hurst parameters H such that H ∈ ]1/2, 3/4[. The modified dynamics read

dS^λ_t = S^λ_t [ (µ + σbα ∫_0^t (t − s + λ)^{α−1} dB_s ) dt + (σa + σbλ^α) dB_t ].   (90) | 2019-02-16T22:31:35.612Z | 2019-01-24T00:00:00.000 | {
"year": 2019,
"sha1": "bbea7d62ccd7955546030adb84d497b86345c958",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/jmr/article/download/0/0/38304/38962",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bbea7d62ccd7955546030adb84d497b86345c958",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
157083773 | pes2o/s2orc | v3-fos-license | ANALYSIS OF ACCOUNTING OF FINANCIAL EXPENSES IN THE PROCESS OF FINANCIAL PLANNING OF TRANSPORT COMPANIES
The object of research is practical aspects of financial planning of transport companies. Factors affecting the parameters of the financing process of capital-intensive companies, which include port and shipping companies, are analyzed. Structural changes in the economy and the financial system have led both to a lack of financial resources in the market at an optimal price and to a reluctance of financial intermediaries to finance the activities of companies with a heavy asset structure. The level of competition requires, on the one hand, a reduction in costs and, on the other hand, technical modernization; as a result, financial planning must be made more flexible. The conditions in the financial markets for attracting additional financing, primarily for shipping companies, are considered. Changes in financing conditions need to be taken into account in the financial planning process, primarily the level of financial costs and the level of risk in budgeting, providing for an alternative way of financing. In some cases it is advisable to refuse to attract financing, as the effect of financial leverage can be negative. The study of trends in the development of capital raising allows a more accurate prediction of the level of efficiency of using the company's capital.
Introduction
The state of the external environment complicates the process of financing capital-intensive companies, which include port and shipping companies; the main factors are political risks, the economic legislation of the country, and changes in the international financial market and the maritime market. Under such conditions, if a company wants to raise enough funds to increase the volume of its means of production, attracting investments at a low interest rate and controlling risks in financing its activities, all aspects of financial planning must be carefully worked out, primarily the level of financial costs and the level of risk.
Risks substantially limit the willingness of financial intermediaries to lend and the amount of financial resources available to transport companies. The terms of long-term financing can offset the positive effect of the financial leverage; as a consequence, the planning process should be more flexible.
The object of research and its technological audit
The object of research is the process of financial planning of transport companies in terms of financial costs.
Financial planning must, in modern conditions, take into account first of all external factors. In order to implement an effective financial planning process, research is conducted on the conditions for raising funds in financial markets. The problematic aspects of the financial planning system at transport companies are considered, as are the conditions under which transport companies obtain financial resources.
The aim and objectives of research
The aim of research is to review and analyze the main factors influencing the size of financial resources of companies and the conditions for the implementation of effective financial planning.
To achieve this aim, the following tasks are defined: 1. To analyze the Ukrainian market. 2. To analyze the possibilities for obtaining financing from the banking sector.
3. To give recommendations on the accounting of the financial situation in the financial planning process of the transport companies.
Research of existing solutions of the problem
In general, the focus of practitioners is on the formation of the optimal structure of capital, cost management, and the optimization of management structures and management decisions [1-12]. General features of financial planning and scientific approaches to the formation of a financial planning system are examined with varying degrees of detail by different authors [1,2,10]. Methodical bases and practical recommendations [1] for the system of financial planning of automobile transport companies have been developed. However, experts [11] note the need to optimize the costs of transport companies. Financial analysts [7] declare that financial institutions do not want to provide long-term loans to companies and that the level of rates does not change significantly, which increases the financial costs of transport companies, reduces the level of efficiency and can affect the financial sustainability of companies [12].
The analysis of the specifics of financial planning in maritime transport companies has not been given enough attention. The models proposed in the theory of financial planning are of a general nature and do not take into account the industry specificity of companies of this type of transport.
Methods of research
To achieve this aim, the following general scientific and special methods of investigation are used: generalization, analysis and synthesis for research of the relations of transport companies with financial markets, and the abstract-logical method for theoretical generalization and the formulation of conclusions.
Research results
Transport companies are experiencing difficulties in attracting investments in modern conditions. Even if operations are effective and the level of financial stability is high enough, a company's investment attractiveness is a matter that is decided taking into account, first of all, the influence of external factors. The negative influence of external factors, in turn, consists of risks, the ways of protection from which are rather difficult to find in investment projects.
According to experts' calculations [5], stagnant world trade, a decrease in investment activity and increased policy uncertainty have become another difficult step for the world economy. Moderate recovery is expected in 2017; however, if commodity markets are not activated, then for transport companies the increase in activity volumes is also undefined. While financial stimulus may increase global growth above expectations, the risks to growth forecasts are still shifted toward deterioration. Important downside risks stem from the increased policy uncertainty in the main countries.
In Ukraine, the cost of attracting additional funding (the interest rate) shows an increasing dynamic (row 1) in comparison with states of the European Union (for example, Estonia, row 2) (Fig. 1).
However, according to the findings of experts [7] based on NBU data for 2016, the average rates on hryvnia loans to business customers declined from 20 % to 16 %. That is, loan rates will drop a further 1-2 % before the spring. «The main problem is not the cost of loans, but the complexity of obtaining them. Banks strictly assess the applications and are ready to lend only for short terms - there are almost no programs with terms of more than 1-3 years» [7]. However, transport companies need financing for development, which requires long-term relationships. The heavy structure of assets significantly affects the level of liquidity indicators, which also does not contribute to raising the company's credit rating (according to the data [6]). According to the expert [7], serious changes in the market can be expected by the end of the year: «If our economy continues to stabilize, by autumn we can expect new bank offers with cheaper rates on loans».
Transport companies have access to various financial markets with different mechanisms for obtaining resources. For example, shipping is a capital-intensive and risky business: on the one hand, shipping companies need a significant amount of financing to modernize the fleet in order to maintain the necessary level of competitiveness; on the other hand, for financial institutions, investing in the maritime industry is riskier than in other sectors of the economy.
There are a huge number of transport companies with financial difficulties that do not have the opportunity to raise funds from external sources to finance the production process and the development of the material base. Companies can obtain the necessary funds on acceptable terms with the participation of financial institutions, for which it is profitable to invest temporarily free funds in various financial transactions. Any financial intermediary, providing borrowers with services for obtaining the missing resources and creditors with services for placing free money, pursues only its own interests. A complex intersection of interests arises, and a financial project can only take place when the terms of cooperation are acceptable to all parties. An understanding of the essence and role of financial intermediaries is necessary for success in the financial management of the company.
In addition to objective factors, subjective factors play a role in decision-making. Any investor, assessing the attractiveness of investments, uses its own approach to valuation. When financial institutions make investment decisions, they ask not only for information about the company's revenues and history, but also for a full account of the company's debts, which complicates the financing process and, consequently, the company's activities.
There are many ways to finance the activities of transport companies, with various methods for calculating the credit rate, different terms and mechanisms for the return on investment, as well as other conditions.
The company's ability to service the attraction of funds from various sources depends on the effectiveness of its activities.
The decision on the capital structure is based on a thorough and varied forecast of revenues and expenditures. A high and constant level of income allows the company to reduce its dependence on external sources, while at the same time allowing it to use cheaper and riskier sources of financing.
In an economic downturn, the company's owners are more critical of the company's development projects. For example, research shows that managers are reluctant to go for an additional share issue if this leads to a dilution of the earnings per share [1].
Formally, creditors claim only a part of the company's income (usually fixed) and do not interfere with the management of its activities. However, if significant borrowed funds are involved as a source of financing, then, using various conditions and restrictions specified in contracts (for example, providing shares of the company as a pledge, prohibiting an increase in debt and the payment of dividends, etc.), lenders can exert significant pressure on the owners and management.
Financial planning, for example for a shipping company, is the optimal choice of one of many practical schemes: an equilibrium between the minimum costs and the minimum risk that can maximize the expansion of the company for a certain amount of money, as illustrated in Fig. 2.
Any shipping company must follow the basic rules in the field of financing its activities:
1. A moderate amount of necessary capital.
2. The lowest cost of financing capital investments.
3. An optimum capital structure.
4. Variable ways of attracting capital.
These rules can help the company form an optimal financing structure, make full use of capital, develop independently and react to market changes.
It should be noted that formation of the company's budget should take into account changes in the way of attracting capital.
Transport plays an important role in ensuring the stable and sustainable development of the economy and in strengthening and optimizing the market links of economic entities.
Development of transport companies of the country is characterized by complex processes occurring in the industry.
In the global financial crisis, the number of banks actively cooperating with shipping companies decreased significantly. Among the reasons was a large number of unreturned loans. The success of a bank's operation depends on many factors, among which the most important is a credible lending policy, including the choice of clients, the strength of ties with them, knowledge of the specifics of financial management in the relevant areas of business, etc. Borrowing companies should carefully study and take into account differences in the credit policies of specific financial institutions, based on the peculiarities of the positions of banks and their leaders on the following aspects: loan risk, the debt-to-equity ratio, the total amount of the possible loan, the degree of loyalty and, most importantly, the «human factor». All this determines the risk that the bank is ready to take. Significant economic changes are a consequence of the last financial crisis, which affected, first of all, the companies' capital. Shipping companies do not have the resources to complete payment for projects already started.
Before the global crisis of 2008-2009 there was a sharp increase in contracts for the building of new ships, caused by record high rates and tariffs on freight markets. The income level created the conditions for an increasing inflow of loan capital. At the same time, many companies started large-scale programs for the replenishment of the fleet, accompanied by huge deliveries of new tonnage which did not correspond to the real needs of international shipping. The shipyards barely managed to cope with the influx of orders and continuously increased shipbuilding capacity. The pace of development of the world merchant fleet accelerated sharply and significantly exceeded the moderate growth in demand for maritime transport. Many shipyards continued to duly perform previously concluded contracts in the post-crisis period of 2010-2012. The further influx of surplus tonnage into the world fleet, in the form of the newest high-performance ships, caused a sharp deterioration in market conditions, a collapse of rates, and a prolonged disruption of the balance of supply and demand in all major freight segments. To implement effective financing, it is necessary to be able to invest at the right time in the right type of ship, to have the right number and size of ships on the lines, and to manage more efficiently than competitors.
In addition, every time a company tries to attract a new amount of investment, financial intermediaries increase interest rates. In the event of a restructuring, lenders require the presence of a professional restructuring firm to help them examine the volume of possible revenues and form plans for recapitalizing the business, which undoubtedly increases costs. Attracting capital from new investors can lead to significant dilution for existing investors and, in many cases, to a complete change in the control of the company.
The state of the entire world economy is not particularly conducive to overcoming the problems accumulated in the maritime industry.
Despite complex and contradictory macroeconomic realities, the world's maritime trade demonstrates positive dynamics without significant fluctuations and failures (Fig. 3). Banks know that the value of the ship at the end of the building process will be underestimated compared to the size of the loan and reluctantly make final calculations. Most financial institutions that finance shipping companies are in the Eurozone, and some have problems with the formation of the necessary financial resources; as a result, this may affect the cost of the offered resources.
Some banks do not want to finance transactions with the acquisition of old ships [3]. In addition, the crediting of such agreements is associated with the complexity of efficiency calculation due to the increase in the price of resources.
Shipping companies are currently faced with significantly worsened economic prospects due to the state of the market for cargo transportation services and changes in charter rates. As a result, many companies decide on restructuring or refinancing.
For port companies, raising additional funds is also a difficult issue. Some companies that actively invested in the development of infrastructure and other non-current assets have faced losses due to changes in the structure and volume of freight flows.
Commercial structures provide a wide range of services, including urgent loans. The high cost of the equipment necessary for port and shipping companies forces their owners to use borrowed funds to finance the building or purchase of a new ship. Although the share of bank loans is relatively small compared to other ways of attracting resources (sale of shares, bonds), it is loans that are the main source of short-term and medium-term financing. The profit of a commercial bank is, first of all, income in the form of interest on loans. Thus, it is necessary to take exceptionally seriously the procedure for selecting a partner bank. Debt financing is probably the most popular source of financing, as it is a more flexible form and does not take ownership away from the owner.
The key terms of the commercial term loan are as follows. Loan duration: a loan can be offered for a period of two to fifteen years, depending on the circumstances. Loans for some types of ships can run for more than fifteen years.
The percentage of the loan: the ratio of the loan to the value of the asset, that is, the value of the ship, is an important factor in determining which credit line the bank is willing to open. Different banks have different policies that take into account the type of ship, its age and the influence of other factors on the shipping business. The loan amount that the bank is ready to offer varies; the loan will typically be from 40 % to 80 % of the value of the company's assets, and sometimes 100 % financing is possible through complicated financial schemes.
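To illustrate the quoted loan-to-value range and the leverage caveat raised in the abstract, a small computation with invented figures (the asset value, rates and LTV below are assumptions, not data from the article):

```python
# Illustrative arithmetic only: loan sizes for the quoted LTV range, plus a
# crude leverage check -- borrowing hurts equity returns when the loan rate
# exceeds the return on assets (the "negative financial leverage" case).
ship_value = 20_000_000                      # assumed asset value
for ltv in (0.40, 0.60, 0.80):
    print(f"LTV {ltv:.0%}: loan up to {ship_value * ltv:,.0f}")

roa, loan_rate, ltv = 0.10, 0.16, 0.60       # assumed return; 2016-level rate
equity = ship_value * (1 - ltv)
profit = ship_value * roa - ship_value * ltv * loan_rate
print(f"Return on equity: {profit / equity:.1%}")  # 1.0% vs 10% unlevered
```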
Effective use of the leasing mechanism for asset modernization is possible if the main aspects are taken into account:
- the lessor should be satisfied with the lessee, that is, with whether the lessee can fulfill its obligations under the lease agreement;
- changes in tax legislation may be a problem;
- tax schemes often complicate the mechanism by requiring a large package of documentation.
Due to the isolated nature of the international maritime transport market, and given the complexity and uncertainty of both internal and external circumstances, transport companies should analyze, evaluate and judge the various elements so that they can monitor the decision-making process and maximize the benefits and favorable outcomes. This will reduce the risks of financing.
Some financial institutions prefer to invest in very profitable operations to finance port companies.
In general, the financial condition of transport companies is characterized by a low level of sustainability. This necessitates the development of scientifically sound procedures for financial management at the companies of the industry, taking into account the specifics of transport and, first of all, the methods of financial planning.
Companies need a rational system for the formation of financial flows, that is, financial planning, which allows them to make informed management decisions and minimize financial risks.
The financial planning process should take into account the industry specific features of their activities.
Principles of financial planning include: 1. The soundness of financial planning. Before an operation, it is necessary to have information about where the company can find potential investors, together with a forecast of financial costs.
2. Complexity of financial planning. It is necessary to take into account not only the interest rate, but also the time and resources needed to conduct the search, as well as the possible losses associated with the incompleteness and imperfection of the information received.
3. Flexibility of financial planning. 4. Efficiency of financial planning. 5. Development of procedures for prompt budget adjustments.
SWOT analysis of research results
Strengths. The strength of this research is that it identifies the current state of the financial market with regard to attracting additional financing by transport companies and determines the most relevant components of the financial planning process for transport companies.
Weaknesses. The weak side is that data concerning the conditions for obtaining loans constitute commercial information, which does not allow the financial costs of transport companies to be systematized and studied more thoroughly.
Opportunities. The opportunities for further research lie in borrowing the experience of foreign countries in improving risk analysis when obtaining funding.
Threats. The threats to the results of the conducted research are that the conditions for attracting financial resources are constantly changing. The globalization of the capital market at the present stage of development does not just require the presentation of objective and detailed financial information, but also provides conditions for the comparability of this information at the international level.
Conclusions
1. The market state from the perspective of financing port and shipping companies is researched. For financial institutions, investing in transport companies is riskier than investing in other sectors of the economy. Transport companies have a heavy asset structure, which increases the level of risk.
2. The prospects for the industry's development in the current year are analyzed. Moderate recovery is expected in 2017; however, if commodity markets are not activated, an increase in activity volumes for transport companies is also undefined.
3. The main rules of financing for the transport industry are a moderate amount of necessary capital and the search for the optimal cost of financing capital investments, taking into account the additional financial costs in the formation of the optimal capital structure. Most important is the consideration of changes in the way capital is attracted. | 2019-05-19T13:06:50.330Z | 2017-05-30T00:00:00.000 | {
"year": 2017,
"sha1": "fc196f09b7db987f43cb9a4030bea7d6107a32a0",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/tarp/article/download/102275/100905",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "abfdbf4dbcd59255742c7b5e7bbcbd3144df251a",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
149615007 | pes2o/s2orc | v3-fos-license | AwaP-IC—An Open-Source GIS Tool for Measuring Walkable Access
Within the broad field of walkability research, a key area of focus has been the relationship between urban form and capacities for walking. Measures of walkable access can be grouped into two key types: permeability measures that quantify the ease of movement through an urban fabric, and catchment measures, quantifying the potential to reach destinations within walking distance. Of numerous street network measures in use, it has been shown that many are poor proxies of permeability and catchment. Instead, two new measures have been proposed: the area-weighted average perimeter (AwaP) and interface catchment (IC), which, combined, better capture the capacities of urban morphologies to enable and attract pedestrian movement. In this paper, we present the QGIS tool AwaP-IC, developed to overcome the difficulty of computing these measures. Unlike GIS tools based on models that abstract streets to axial lines, by employing new algorithms and spatial computation techniques, AwaP-IC analyses actual urban morphologies, based on cadastral maps delineating public and private land. This can empower a new stream of urban morphological studies with the computational power of GIS. As an open-source tool, it can be further developed for use in urban mapping and to streamline the analysis of large datasets.
Introduction
Walkability has emerged over the last decades as a key topic in health, transport and urban research. Numerous walkability indexes have been developed for research and practice, incorporating various measures of density, functional mix and access networks [1][2][3][4]. While reducing distances between people and destinations, these key dimensions of urbanity are also synergistically interconnected [5,6]. A change to either density, mix or access can lead to a chain of transformations involving the others. Complicating this further, each of these interlinked properties is itself complex and multiplicitous, with various measures only capturing partial aspects of these. Within this broad field of walkability studies, our focus here is on the exploration of how the morphology of the public space at street level directly mediates capacities for walking.
Among studies of urban access networks, there are two distinct approaches. Street network studies focus on street configuration or connectivity. These rely on graph models that use road centre lines or other axial abstractions of the streetspace to calculate various topological network properties. Such measures include the "centrality" and "betweenness" of each street segment relative to the larger network. While there are major differences among such models [7], a common characteristic is that these make abstraction of street width, shape and sometimes even length. Yet these basic geometric attributes of streets are crucial to their capacity to enable mobility and social interaction [8].
While topological measures can embed such spatial attributes by weighting the "value" of each network element according to one or more of its spatial properties (such as length or width) [9], such measures are not able to consider geometric properties that have been lost in the process of converting a morphology into a network model, such as the capacity to cross diagonally a broad street or a square. Furthermore, such hybrid measures run the risk of conflating entirely different properties into a number that conceals key differences.
A second kind of approach in the study of pedestrian access is that of morphological studies, which focus on the analysis of urban form. The spatiality of the urban fabric is captured through mapping, which is then analysed visually, identifying patterns [10][11][12]. Maps are not replaceable with algebraic methods, as maps embody spatial knowledge that is not reducible to simple metrics [13]. Nevertheless, given the impetus for statistical analysis, various metrics of urban form related to access have proliferated [14,15]. By metrics of urban form, we mean here only direct measures of actual urban form, and do not include measures of topological models that are independent of urban forms and distances. Such measures of morphologies can be grouped into two types: permeability measures that quantify the ease of movement through the urban fabric, and catchment measures that quantify the potential to reach destinations within walking distance. This corresponds with the distinction made in transport studies between resistance against and attraction of movement [16]. While various such measures have been in use, including average block area, block diagonal, intersection density and pedsheds, it has been shown that many of these are poor proxies for permeability and catchment, the capacities to walk through and to the urban fabric [17]. Instead, Pafka & Dovey have proposed two new measures: the area-weighted average perimeter (AwaP) and interface catchment (IC) that, combined, can better capture the capacities of urban fabrics to enable and attract pedestrian movement [17].
AwaP calculates the average perimeter of urban blocks within a study area, weighting the perimeter of each block by its area. This way, the impact of a large block will be proportional to the share of the study area it occupies, and its effect as a major barrier to movement is not lost in the average. The lower the AwaP, the easier it is to walk through the urban fabric. An AwaP of 400 m corresponds to a square block of 100 × 100 m, often taken as the maximum block length that still allows good permeability. IC measures the total length of public/private interfaces reachable from a starting point and within a given walking distance. IC is relevant for walking as most urban attractions, such as dwellings, shops and workplaces, are accessed through the public/private interface, where buildings meet the street [18]. High IC values indicate high capacity for accommodating urban attractions. Together, these two measures account for both street width and block size, measuring both walkable access and what one may get access to [17].
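Since IC has no closed formula comparable to AwaP, a crude way to convey what it measures is to sum the public/private interface length falling within a walking-distance buffer of an origin. The sketch below is ours, not the plugin's algorithm: it uses a simple Euclidean buffer, whereas the actual IC tool measures network walking distance through open space; the Shapely library and all coordinates are illustrative assumptions.

```python
# Rough illustrative proxy for IC, not the AwaP-IC plugin's algorithm:
# total block-perimeter (public/private interface) length that falls within
# a crow-fly walking-distance buffer of an origin point.
from shapely.geometry import Point, Polygon

# Invented toy blocks, coordinates in metres
blocks = [Polygon([(0, 0), (90, 0), (90, 60), (0, 60)]),
          Polygon([(110, 0), (200, 0), (200, 60), (110, 60)])]

origin = Point(100, 70)
walkshed = origin.buffer(400)  # 400 m straight-line catchment, ignoring barriers

ic_proxy = sum(block.exterior.intersection(walkshed).length for block in blocks)
print(f"Interface length within catchment: {ic_proxy:.0f} m")
```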
In the past decades, urban research has increasingly benefited from big data and computational power. Until now, this has been most apparent in street network studies [19][20][21][22] that benefited from the widespread use of road models in transport planning and the computational power of GIS applications. Morphological studies, on the other hand, have largely remained reliant on hand-drawn maps, or graphic software enabled tracing of urban form [10][11][12]23]. Seeking to overcome these limitations in urban morphological research, this paper presents the development of the GIS tool "AwaP-IC" for measuring two key morphological properties of the urban fabric related to walkable access. The main challenge this tool had to overcome was to bridge between the logic of walkability through Euclidean space, an infinity of route choices within a continuous urban space, and the logic of the GIS platform, based on relations between a defined set of points. Thus, a key contribution this paper seeks to make is to open new ways of combining GIS computational power with morphological studies. The algorithms developed in this process may be used in or be the basis for many more complex applications of GIS to the study of urban form. Also, to our knowledge, the algorithm used in the IC tool is the first computational approach for calculating catchment that supports walking through open space. As such, it presents a novel contribution.
In the following section, we will present the algorithms developed for the AwaP-IC tool. Then, we will illustrate applications of the tool, based on a case study in Madrid. In the final section, we will Urban Sci. 2019, 3, 48 3 of 14 discuss the broader implications for walkability and morphological research, and highlight potentials for further development and use.
AwaP-IC Tool
While AwaP and IC should be considered together in the analysis of walkable access, in this (1.0) version of the AwaP-IC tool, the two measures are provided by distinct QGIS plugins (in the Supplementary Materials). The minimum required QGIS version for these plugins is 3.4. The plugins have been developed in Python using only the spatial computation libraries embedded in QGIS and will work on any desktop version of the software without the need to install any third-party libraries or additions.
The base requirement for the calculation of AwaP and IC is a layer of urban blocks drawn as polygons, or closed polylines. Lines within the blocks, such as lot subdivisions, will be ignored. However, other errors in the urban blocks layer may not be recognised by the software, and may lead to errors or long processing times. As both AwaP and IC are calculated in metres, a projected coordinate system should be used in the QGIS project. Instead of the very common WGS84 (EPSG:4326), which uses degrees as a unit for distance, the WGS84/Pseudo-Mercator (EPSG:3857), which measures distances in metres, may be used.
AwaP Tool
The area-weighted average perimeter (AwaP) is a measure of permeability [17] that takes into consideration the perimeters and areas of all urban blocks within a given study area, and is calculated as:

AwaP = (1/A_T) Σ_{i=1}^{n} A_i P_i

where n is the number of blocks, P_i and A_i are the perimeter and the area of block i, respectively, and A_T is the total area of all blocks combined. Low AwaP scores indicate high permeability and high scores indicate low permeability within the given area. Large open spaces within the study area will not affect AwaP as they pose no barriers to movement, but will have an impact on IC.
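As a minimal illustration of the formula (ours, not the plugin code, which uses the QGIS API; Shapely and the block coordinates are assumptions), note how a single large block dominates the area-weighted average:

```python
# Minimal sketch of the AwaP formula using Shapely; block geometries are invented.
from shapely.geometry import Polygon

blocks = [Polygon([(0, 0), (100, 0), (100, 100), (0, 100)]),      # 100 x 100 m block
          Polygon([(120, 0), (420, 0), (420, 200), (120, 200)])]  # 300 x 200 m block

total_area = sum(b.area for b in blocks)                    # A_T
awap = sum(b.area * b.length for b in blocks) / total_area  # .length is the perimeter
print(f"AwaP = {awap:.1f} m")  # ~914 m: the large block dominates the average
```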
The AwaP tool is developed as a plugin in the QGIS software. The graphical interface of the tool is shown in Figure 1. It takes in several parameters in order to calculate AwaP:
• Blocks layer-A layer containing the urban blocks for which AwaP will be calculated. This layer can have polygon geometries, or linear geometries where outlines of urban blocks are represented as closed polylines.
• Boundary layer-A layer containing the boundary of the area of interest (i.e., the area that contains the blocks for which AwaP will be calculated). This layer, too, can have a polygon geometry, or a linear geometry where the boundary of the study area is represented as a closed polyline.
• Blocks intersecting boundary-A parameter that specifies whether to consider the urban blocks which are only partly within the study area. The default option is to include a block if more than half of the block is within the area of interest.
• Dead-end removal-A parameter that specifies if the dead-end streets should be removed from blocks prior to calculating AwaP, and the maximum width of the dead-end streets to be removed. The default maximum street width of 40 m should work for most urban morphologies.
After the parameters have been set, AwaP is calculated in the following steps: (1) From the blocks layer, the urban blocks that are within the area of interest (i.e., specified by the boundary layer) are selected, taking into consideration the parameter for the blocks intersecting the boundary. (2) Dead-ends are removed, if this option is selected.
(3) AwaP is calculated for the selected urban blocks, according to the formula stated above. (4) The selected blocks which were used for calculating AwaP are stored into a new polygon layer where the calculated AwaP will be shown in the layer name and in the attribute table of the layer.
Blocks Intersecting the Boundary
In the AwaP tool, the urban blocks are selected by checking which blocks are within the area of interest represented with the specified boundary. Whether the blocks which are partly inside and partly outside of the area of interest should be considered in this calculation would depend on the specifics of each research. Regarding such urban blocks, there are three options: Include if at least some percentage of the block area is inside-Only the blocks for which more than a specified percentage of their area is inside the area of interest will be included in the AwaP calculation; others will be disregarded. The default value is 50%. Always include-All the blocks that are at least partly within the area of interest will be included in the AwaP calculation. Always exclude-Only the blocks that are entirely within the area of interest will be included in the AwaP calculation.
Removing Dead-End Streets from Urban Blocks
Since it is unlikely that a person seeking to walk around an urban block would enter the deadend streets, dead-ends are per default removed for AwaP calculation [17]. The tool has the capability to remove the dead-end streets by buffering the blocks out and back in by the same distance, an approach inspired by a Paul Ramsey blog post [24]. The dead-end removal takes a single parameter that specifies the maximum street width to be considered. Any streets in the block that are wider than this specified width will not be considered dead-ends and will not be removed prior to computing AwaP.
This feature is required as there is no general geometric definition of a dead-end street. A block with a dead-end street and a U-shaped block have the same geometric characteristics, with the tendency for dead-end streets to be narrower. However, as there is no fixed threshold for this distinction, the threshold value is defined through this adjustable parameter. If the tool erroneously distorts blocks rather than removing dead-ends, the value will need to be lowered. Figure 2a shows an example of an urban block which has two public spaces carved into it: on the left a 15 m wide street, and on the right a U-shaped non-dead-end lane. The dead-end removal is performed in two steps: (1) Buffer out - First, an outward buffer of the urban block polygon is created, where the polygon is buffered by half of the specified maximum street width (i.e., 40/2 = 20 m), as shown in Figure 2b. The reason for halving the maximum street width when determining the buffer distance is that the buffer is created on both sides of the street and will meet in the middle if the buffer distance is at least half of the street width (and thus fill in the dead-end street). (2) Buffer in - Then, the newly created buffer of the urban block polygon is buffered "back in" by the same amount. This is done by buffering the new polygon by the negative of the distance used in the previous step (i.e., −20 m). This procedure should fill in all the dead-end streets whose width is less than the maximum street width, while leaving the wider public spaces unaffected (Figure 2b).
In this version of the AwaP-IC tool, the dead-end removal uses rounded buffers. This is because during the testing of the tools, flat cornered buffers were creating incorrect results within the QGIS platform. Thus, some block corners will be slightly rounded after dead-end removal, but not enough to noticeably affect the results.
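In shapely terms, this buffer-out/buffer-in procedure is a morphological closing; a minimal sketch is shown below, assuming the block is a shapely Polygon (shapely's default round join mirrors the rounded buffers just described):

```python
# Illustrative dead-end removal via buffer out and back in (morphological closing).
from shapely.geometry import Polygon

def remove_dead_ends(block: Polygon, max_street_width: float = 40.0) -> Polygon:
    half = max_street_width / 2.0            # buffers from both street sides meet in the middle
    return block.buffer(half).buffer(-half)  # fills concavities narrower than max_street_width
```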
IC Tool
Interface catchment (IC) is a measure of the total length of public/private interfaces within a given walking distance [17]. While there are various definitions of what constitutes public and private space [25], here we focus on ownership, which is closely linked to control over land use. This is captured by a cadastral map, in which the subdivisions between various publicly owned parcels and between adjacent privately owned parcels have been removed. This delineation between public and private land is one of the most permanent aspects of urban morphology that may remain relatively unaltered for centuries. The difference between IC and other catchment metrics is that it also accounts for the street width. IC looks at which edges of urban blocks may be accessed from a given starting point within a maximum walking distance. The assumption here is that a person is able to walk through any open space that is not occupied by urban blocks.
Like AwaP, the IC tool is developed as a plugin that enables users to calculate the interface catchment in QGIS. The plugin's graphical interface is shown in Figure 3. The IC tool requires the following parameters to be set:
• Blocks layer - A layer containing the urban blocks for which the IC will be calculated. This layer can have polygon geometries, or linear geometries where outlines of urban blocks are represented as closed polylines.
• Dead-end removal - A parameter that specifies whether dead-end streets should be removed from blocks prior to calculating IC, and the maximum width of the dead-end streets to be removed. By default, this option is disabled, as IC is meant to measure all attractions within walking distance, including attractions located in dead-ends. For an explanation of the dead-end removal process, see Section 2.1.2, "Removing Dead-End Streets from Urban Blocks" above.
• Starting point - A starting point from which the IC calculation will commence. This starting point can be set in one of three ways:
○ By selecting the starting point layer - Note that there must be only one starting point defined at a time. If the starting point layer has multiple point objects, the single point to be used as the starting point of the IC calculation needs to be selected with a selection tool in QGIS.
○ By selecting a point on the map - When the "SELECT" button is clicked, the IC tool interface will temporarily disappear from the screen and wait for the user to click on the map. The map coordinates of the clicked point will be set as the starting point.
○ By defining the point coordinates - Whenever the starting point is set via one of the previous options, its coordinates will be shown in the starting point coordinates X and Y fields. These coordinates can also be edited directly.
Regardless of which option has been used for selecting the starting point, when the plugin is run, the current coordinates present in these fields will be used to define the starting point for the IC calculation.
• Maximum walking distance - The maximum distance a pedestrian can walk in the IC calculation. The default value is 400 m, a distance frequently used in urban planning as an average walking distance.
IC Computation
In contrast to existing approaches, where catchment is calculated along the network of street centre lines, the algorithm for IC computation supports movement through open space. IC calculation may be described by analogy as a lasso whose length is the maximum walking distance, and whose one end is attached to the IC starting point. This lasso is then "whipped" around the blocks in all possible directions, and all the parts of the blocks' boundaries that are touched by the lasso can be concluded to be reachable within the given walking distance. The challenge occurs when this analogy needs to be formalised in a computational algorithm. The IC computation is based on the boundary points of urban blocks and the following assumptions:
• If the destination point (i.e., any block boundary point) may be reached by walking in a straight line from the starting point without crossing any blocks, and if the distance walked is less than the maximum walking distance, then that point is reachable from the starting point. This assumption ensures that walking through open space (i.e., any space not occupied by blocks) is supported. Walking along the edges of blocks is also supported.
• If two consecutive block boundary points are reachable from the same starting point, then the entire portion of the boundary between these points is also reachable from the same starting point.
IC is computed in several steps which are then iteratively repeated until there is no more walking distance remaining at any of the starting points. These steps are illustrated below based on a generic model (Figure 4a): (1) Create a circle with a radius equal to the allowed walking distance around the starting point (Figure 4b). This circle shows what the catchment would be if there were no obstacles. (2) For each point in the blocks' boundaries that is within this circle, test if the point can be reached from the starting point by walking in a straight line, without running into any obstacles (Figure 4b). If yes, note that this point is reachable and that the distance from the starting point to this point needs to be walked (i.e., spent from the maximum walking distance budget) for this point to be reached. (3) If two points of the block's boundary are reachable from the starting point, then it can be concluded that all the parts of the block's boundary between these points can be reached from the starting point as well. Thus, the entire portion of the block's boundary between the reachable points that is facing the starting point can be considered reachable (Figure 4b). (4) The points which were reached in this iteration, and for which there is some walking distance remaining, will be used as new starting points for the next iterations (Figure 4c). In these iterations, the same steps mentioned above will be repeated, where the maximum walking distance for each point will be the remaining walking distance after that point was reached. Figure 4d shows the IC calculation finalised after all iterations have been completed.
This IC computation algorithm presents a novel contribution. A drawback of this algorithm is that it analyses all the points in blocks' boundaries. In cases where blocks have curved or circular boundaries, there may be many more points to analyse than for blocks of rectangular shape. This may slow down the IC computation. Also, there may be some blocks which have "unclean" geometries with unnecessary points. One such example is a rectangular block which has more than two points defining a straight edge (i.e., other than the start and end point of a straight edge, there are multiple points along the edge). Because in such cases more points need to be analysed, the algorithm's performance may slow down.
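The core of step (2) is a straight-line visibility test; a simplified sketch under the stated assumptions is given below, where blocks are shapely polygons and `budget` is the remaining walking distance (illustrative names, not the tool's internals):

```python
# Simplified straight-line reachability test for the IC algorithm's step (2).
from shapely.geometry import LineString
from shapely.ops import unary_union

def reachable(start, target, blocks, budget):
    """True if `target` lies within `budget` and the straight path from
    `start` does not pass through any block interior (edges may be touched)."""
    path = LineString([start, target])
    if path.length > budget:
        return False
    obstacles = unary_union(blocks)
    return not path.crosses(obstacles) and not path.within(obstacles)
```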
Case Study
Previous applications of the AwaP and IC measures [17] have been limited by the large amount of time required for calculating these measures manually. In the following, with the help of AwaP-IC, we show how, by easily taking multiple measures by shifting a frame, we can analyse the variability of AwaP across the urban fabric and its implications. Then, we explore the variability of IC as the starting point is continuously shifted. As a basis, we are using a shapefile of the building blocks of Madrid, which was readily available and contains a wide range of urban block shapes, with subtle variations within neighbourhoods and major variations between neighbourhoods. This is based on the cadastral map of Madrid [26], for which we merged adjoining parcels. Publicly owned parcels, such as parks and squares that are not fenced and thus thoroughly permeable, have been merged with the street space. However, other discrepancies between the actual urban form and this representation of its urban blocks may remain. Some blocks, for instance, may have privately owned but publicly accessible through-connections.
Figure 5 shows the Salamanca district of Madrid, a 19th century extension, following a grid pattern intersected by several diagonal streets. The urban blocks were inspired by Cerdá's early plans for Barcelona, slightly longer than 100 metres, with chamfered corners [27]. The 1 × 1 km frame in the top-left example (Figure 5a) captures the area around a circular plaza. The AwaP of 416 m indicates relatively good permeability, with blocks only slightly larger than 100 × 100 m. When the frame is moved to the northeast (Figure 5b), AwaP decreases to 397 m, as many smaller blocks are included. While there are significantly more small blocks in this frame, AwaP only decreases moderately, as their weighting is lower: they occupy only about a third of the frame, and are surrounded by elongated blocks that constitute substantial barriers to walking. Moving the frame to the northwest (Figure 5c), AwaP increases to 425 m, as large elongated blocks along a wide boulevard are included. When shifting the frame to the southeast (Figure 5d), AwaP increases further to 436 m. Overall, AwaP stays relatively stable, with variations of only +/−5%, as the largest part of the frame contains the same blocks of ca. 100 × 100 m. Thus, while visually we can identify a range of block sizes and shapes, overall capacities for movement through the urban fabric appear to be fairly constant.

Figure 6 shows the same district and the IC for a 200 m and 400 m walking distance, starting from different points. At the centre of the study area, IC200 is 2.5 km and IC400 is almost 10 km (Figure 6a). In the northeast, IC200 increases to over 3 km and IC400 to over 13 km, as blocks are smaller and streets narrower (Figure 6b). In the northwest, it is only slightly higher than in the central area (Figure 6c). Moving to the southeast, where we encounter more elongated blocks, IC200 drops close to 2 km and IC400 to 9 km (Figure 6d). Overall, the variation here is around +/−25%, much higher than in the case of AwaP.

It is notable that the overall shape of these walking catchments is different from those we find in other studies. In the past, walking catchments have been either approximated with a circle of a given radius that ignores the urban morphology, or calculated based on street centre lines, which for a grid leads to a square-shaped catchment. The catchments shown in Figure 6 are between these two shapes, appearing as a square with rounded edges. The reason for this is that the IC tool calculates all possible routes through urban space, including diagonal crossings of streets, squares or parks. While such trajectories may not be possible because of car traffic, they do reflect overall capacities for walking, assuming no other constraints imposed by footpaths, formal crossings or traffic conditions.
Both for AwaP and IC, the largest contrast is between example (b), which includes the smallest blocks and narrowest streets, and (d), which includes some of the largest blocks and widest streets. While capacities to walk through the urban fabric (AwaP) remain similar, interface catchments (IC) vary considerably. This may, of course, not correlate with actual pedestrian flows, which are also influenced by functional mix and density, as well as other nonmorphological factors. Rather, these measures reveal potentials within this urban fabric that may inform future planning and design.

Figure 7 shows the combination of AwaP and IC measures for the four sample sites in Madrid, compared to measures of six 1 km² morphologies from a previous study [17]. It shows this part of Madrid having a combination of permeability and interface catchment that ranges between the values measured in Barcelona, New York and Nagoya, all grids with short blocks. This is in contrast with morphologies that have low permeability and high catchment, such as Venice, low permeability and low catchment, such as Houston, or morphologies that have very low catchment, such as the modernist Brasilia. The diagram also shows the high variability of IC200 compared to the relative stability of AwaP.
Discussion
The GIS tool AwaP-IC presented here measures two key properties of urban morphologies related to walkable access: the capacity to walk through the urban fabric (AwaP) and the capacity to walk to various attractions located at the public/private interface (IC). AwaP scores below 400 m indicate high permeability, while high IC scores indicate the capacity to sustain access to a multiplicity of entrances. Combined AwaP and IC differentiate between the high permeability and catchment of urban areas, high permeability and low catchment of modernist developments, the low permeability and high catchment of intricate labyrinthine urban fabrics as well as the low permeability and catchment of cul-de-sac suburbs.
Rather than following previous approaches of abstracting public space to a street network graph model, this tool directly analyses public space based on its basic form, as captured by cadastral maps. Street width, urban squares and parks, are not removed from the analysis, but considered as spaces of pedestrian movement. However, as a planar tool, AwaP-IC does not account for underpasses and overpasses, which may constitute a significant type of pedestrian connection in cities like Hong Kong [28].
The main challenge that had to be overcome in the development of AwaP-IC was to bridge between the logic of walkability through Euclidean space, an infinity of route choices within a continuous space with discrete obstacles, and the spatial computation logic of the GIS platform, based on relations between a defined set of points, lines and polygons. This was most evident in the development of an algorithm for the IC calculation. The algorithm may be described by analogy as a lasso whose base is the IC starting point and whose length is the maximum walking distance. This lasso is "whipped" around the blocks in all possible directions, recording all the public/private interfaces that are reached. This is akin to a person seeking to move along the shortest route (crossing streets, squares and parks diagonally) while turning around corners, passing by a range of attractions: entries to various buildings.
As AwaP and IC are measures of capacities of moving through and to the urban fabric, they can be used for assessing designs and plans for future urban scenarios, defining planning controls for permeable neighbourhoods and detecting barriers to movement or a limited potential to accommodate a diversity of attractions within existing morphologies. AwaP may enable replacing rigid urban codes stipulating the maximum length of each block [29] with more flexible codes of a maximum AwaP for a neighbourhood that would allow for a greater diversity of street layouts in new urban developments.
The tool has been successfully tested with formal urban morphologies, but many challenges remain, particularly for informal conditions where public/private access distinctions are less clear. Furthermore, there are many barriers to walking that are not captured by cadastral boundaries, such as topography, highways and median fences. Nevertheless, we hope that AwaP-IC can empower a new stream of urban morphological studies with the computational capacity of GIS. As an open-source tool, it can be further developed for use in urban mapping and to streamline the analysis of large datasets. The algorithms developed for AwaP-IC may be used in or be the basis for more complex applications of GIS to the study of urban form.
In its current version, AwaP-IC can only take singular measures of AwaP within a frame or of IC from one starting location. An immediate opportunity for further development would be to extend the operationality of AwaP from frames to grids, thus mapping the entire territory of a city or metropolis. Similarly, IC could be expanded to map catchments across a larger territory based on a set of points: a dataset of intersection nodes, midblock points, public transport stops or a regular array of equidistant points. The output of these could be colour-coded to illustrate the various degrees of permeability and catchment. A more integrated AwaP-IC could be used to map the various types of permeability and catchment relations. While AwaP-IC can provide better measures of walkable access as a capacity embodied in urban morphology, it is important to acknowledge the limitations of any such measure for a comprehensive assessment of walkability. Walkability is not a simple extensive property of the urban fabric, but a much more complex and somewhat elusive concept, related to a broad range of urban morphological attributes such as density and pavement, but also nonmorphological attributes such as climate and safety. Walking distance is highly variable depending on environmental and social factors, while route choice may follow various logics, depending on the purpose of a trip.
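As a hypothetical sketch of the frame-to-grid extension suggested above, one could slide a 1 km frame across a territory and record AwaP per cell; `awap` and `select_blocks` refer to the illustrative helpers sketched earlier, and none of this is part of the current plugin:

```python
# Hypothetical grid extension: one AwaP score per 1 km cell across a territory.
from shapely.geometry import box

def awap_grid(blocks, xmin, ymin, xmax, ymax, cell=1000.0):
    scores = {}
    x = xmin
    while x < xmax:
        y = ymin
        while y < ymax:
            frame = box(x, y, x + cell, y + cell)
            selected = select_blocks(blocks, frame)  # default "more than half inside" rule
            if selected:
                scores[(x, y)] = awap(selected)      # AwaP value for this grid cell
            y += cell
        x += cell
    return scores  # can be colour-coded to map permeability across the city
```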
While urban design knowledge cannot be reduced to numbers [30], we need to advance a "science of cities" [21] that better measures what is measurable. We should also be steering away from the risk of engaging in the production of pseudo-science [31] that ignores the nonmeasurable aspects of urbanity. In this endeavour, it is key to study actual cities and their built form, as there is no greater risk to urban design and planning than deriving our understanding from models that are removed from the lived experience of the urban realm [5,32]. | 2019-05-01T12:56:22.966Z | 2019-04-29T00:00:00.000 | {
"year": 2019,
"sha1": "7b99e34605fb6b5a63115aefd381b91ff9d918b4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2413-8851/3/2/48/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5ba3cdc591002134de32041a9b589567ab2a6a36",
"s2fieldsofstudy": [
"Geography",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
267650293 | pes2o/s2orc | v3-fos-license | Spatiotemporal evolution of deformation and LSTM prediction model over the slope of the deep excavation section at the head of the South-North Water Transfer Middle Route Canal
Slope deformation is one of the focal issues of concern during the normal operation and maintenance of the South-North Water Transfer Middle Route Project. To study the evolution of slope deformation in the deep excavation section at the head of the canal, we applied 88 scenes of Sentinel-1A ascending image data from 2017 to 2019 and MT-InSAR (Multi-temporal InSAR) deformation monitoring technology to obtain long time-series deformation rates and cumulative deformation fields over the slope in the study area. Based on the analysis of the time-series monitoring data of the deformation field sample points, an LSTM (Long Short-Term Memory) slope deformation predictive model was constructed to predict the slope deformation for the next 12 months at 12 sample points on the deep excavation slope. The impact of rainfall on slope deformation was investigated, and the reliability of the LSTM model was verified using the measured data. The results show that the average annual deformation rate of the slope ranges from 10 mm/a to 25 mm/a, the maximum cumulative deformation is about 60 mm, and the slope of the excavated section is generally in an uplifted state. The rainfall-induced repeated uplift or subsidence of the canal slopes, together with the peak deformation, was closely related to the amount of rainfall during the wet season, and the longer the duration of the wet season, the more obvious the deformation peak. Among the 12 sample sites, the minimum and maximum deformation predicted using the LSTM model were 51.7 mm and 73.9 mm respectively, with the lowest correlation coefficient of 0.994 and the highest of 0.999. The maximum and minimum values of RMSE (Root Mean Square Error) were 4.4 mm and 3.6 mm respectively, indicating reliable prediction results. The results of the study can provide a reference for the prevention and control of geological hazards along the South-North Water Transfer Canal.
Introduction
The South-to-North Water Transfer Project originates from the Danjiangkou Reservoir, the head of which is located in Xichuan County, Nanyang City, Henan Province, China. As a vital infrastructure project to optimize water resources allocation in the North China Plain, the construction of the project will not only alleviate the water shortage in the Huang-Huai-Hai Plain but also significantly enhance the economic and social development of the water-receiving regions along the route [1]. The trunk canal is 1432 km long and passes through areas with complex geological environments such as expansive soils, collapsible loess and saturated sandy soils, which makes the construction and later slope maintenance difficult [2]. Expansive soil is a typical clayey soil with swelling-shrinkage, fissuring and overconsolidation properties. The soil in situ is characterized by shrinkage and crack opening upon water loss. When a water body invades, the soil expands, forming a low-stress zone near the fracture surface, while primary fractures and micro-fractures close. Shear expansion occurs in the soil elements of the slope during channel excavation, and the negative pore pressure accelerates water intrusion and reduces the strength [3]. Water-driven swelling of soil slopes may result in slope instability, damage to buildings (structures), and even safety accidents [4]. Therefore, research on deformation monitoring, characterization analysis, and prediction for deep excavation slopes of the South-North Water Transfer Middle Route has significant value for the operation and maintenance of the channel as well as for the avoidance or mitigation of geological disasters.
A review of the relevant literature reveals that scholars have mainly conducted research on the stability of the slopes of the South-North Water Transfer Canal from two aspects. One is to design and improve the slope by physical and mechanical methods, such as anti-slide piles [5], umbrella anchorage, modified soil replacement, and other techniques, taking into account the specific geological environment. The other is to monitor the slope on-site to obtain deformation information for the study area, then to screen and judge the stability of the slope in conjunction with relevant engineering specifications and decide whether to take further prevention or reinforcement measures. The acquisition of deformation information can be achieved in two ways: contact monitoring and non-contact monitoring. For the former, observation stations need to be set up on-site, and relevant equipment and instruments are used to obtain deformation information at specific locations in real time or afterwards, such as borehole inclinometers, BeiDou monitoring systems, pressure gauges, etc. Non-contact measurements can be carried out by drones, LiDAR (Light Detection and Ranging) or InSAR (Interferometric Synthetic Aperture Radar) [6,7]. Owing to the observational advantages of SAR, such as large coverage, traceability, non-contact operation, and high accuracy, it has gained significant attention in recent years and has produced fruitful research results in several fields [8], e.g., earthquake and landslide monitoring [9-12], subway observation [13,14], and mine and urban ground deformation detection [15,16]. However, on account of issues including spatiotemporal decorrelation and atmospheric delay, the D-InSAR approach has significant constraints in practical data processing [17]. In contrast, the SBAS-InSAR (Small Baseline Subset InSAR) technique, by setting appropriate temporal and spatial baseline thresholds to form differential interferometric image pairs, can obtain the time-series deformation field and deformation rate of the study area while reducing the decorrelation, elevation and atmospheric errors present in D-InSAR processing [18]. PS-InSAR (Permanent Scatterer InSAR) generally does not perform pre-filtering or multi-looking, avoiding the reduction of image spatial resolution, and can analyse the deformation of a particular scatterer at the sub-pixel level, which can theoretically improve the accuracy of PS-point deformation monitoring [19]; it is particularly suitable for areas with slow deformation rates and small magnitudes (a few centimetres or tens of centimetres a year).
According to an analysis of the aforementioned literature, most scholars have focused on deformation monitoring techniques and methods in view of the stability of the slopes of the South-North Water Transfer Project [20]. Gong et al. [21] studied the destabilization mechanism of the South-North Water Transfer slopes through on-site monitoring and indoor simulation tests. They came to the conclusion that the change in soil water content brought on by rainfall caused swelling deformation, but they did not identify the reasons why the soil on the South-North Water Transfer slopes swells when exposed to water. Hu Jiang et al. [22] employed a differential autoregressive moving average model to forecast slope deformation by analyzing the available data and exponentially smoothing it; however, the prediction period was only one month. The slope of the Majiapo reservoir was predicted by Zhang et al. [23] using a combination of grey theory and a genetic algorithm; however, the prediction time was shorter while the accuracy was greater. With the development of deep learning, recurrent neural networks (RNNs) have been widely used for time-series prediction due to their ability to model temporal dependencies. However, RNNs are prone to gradient explosion or vanishing, resulting in poor performance when processing long-term data. The emergence of long short-term memory (LSTM) networks has addressed the issue of gradient explosion and vanishing by incorporating gating mechanisms, making them a popular tool for time-series prediction. For example, Wei et al. used an LSTM model to predict complex nonlinear traffic flow on roads for intelligent transportation systems, and compared with other methods, the LSTM approach produced more stable prediction results [24]. Ching-Ru et al. utilized an LSTM model to predict stock prices by incorporating historical trading information and sentiment analysis of textual data; the resulting predictions exhibited a high level of accuracy [25]. Chen et al. employed an LSTM model based on empirical mode decomposition to predict short-term subway passenger flow, which demonstrated higher accuracy compared with ARIMA and BPN models [26]. In recent years, LSTM has been gradually applied to various fields such as oil and gas reserves [27], meteorology [28], and urban water supply and drainage [29]. However, there are few studies on the combination of LSTM and InSAR for monitoring and predicting geological disasters. For the deep excavation channel portion of the South-North Water Transfer, there are only a few reports of collaborative and systematic research on deformation monitoring, mechanism analysis, and prediction models.
This paper uses the deep excavation channel section at the head of the South-North Water Diversion Canal (8 km from the head of the canal, with a 47 m excavation depth) as the engineering test area. The study is based on 88 scenes of Sentinel-1A ascending image data covering the three years from 2017 to 2019. PS-InSAR was used to extract high-coherence sites that take part in the SBAS-InSAR solution of elevation errors and deformation information, allowing the deformation velocities and cumulative deformation fields of the left and right banks to be acquired and analysed. To determine the effects of soil mineral composition and rainfall on the channel slope deformation and to identify the mechanism of channel slope deformation in the deep excavation channel section, cumulative deformation series were extracted from 12 typical sample points in the channel slope deformation area. Based on an analysis of the deformation pattern of typical slope sample points, an LSTM deep learning model was used to predict the slope deformation variables. The accuracy of the prediction model was confirmed by comparison with actual measurement results, and the deformation trend for the upcoming 12 months was also predicted and examined.
Overview of the study area
The study area is in Jiuchong Town, Xichuan County, Henan Province, China, at the western edge of the Nanyang Basin, about 8 km from the head of the South-North Water Diversion Middle Route Project. It is the deepest excavated section of the entire water transmission line, passing through areas of strongly expansive soil and medium-to-weakly expansive soil, with a complex geological environment. The section's entire length is 14.165 km, with pile numbers ranging from 0 + 300 to 14 + 465, and the excavation depth is between 9.9 and 47 m. The water runs through Jiuchong Town from the southwest to the northeast, as illustrated in Fig. 1. This paper chooses a deep excavation segment about 5 km in length for the investigation. The study region belongs to the monsoonal climate zone in transition from the subtropical zone to the warm temperate climate zone. It has plentiful rainfall all year round with a distinct rainy period: rainfall is concentrated from June to August, and the annual rainfall is between 390 and 1420 mm.
The South-North Water Transfer Middle Route Project replaced the original slope soils with modified soils during construction to lessen the effect of expansive soils on the stability of the slopes of the channel section. According to the construction specifications of the South-to-North Water Transfer Project, the replacement plan was mainly based on the designed excavation section of the original canal. The original soil was over-excavated in line with the thickness of the replacement, and then modified soil with no expansibility was backfilled. There were two options for replacing the original soil with modified soil. Below the first-level platform, the original soil was replaced with 1.5 m thick modified soil with 4%-6% cement content. Above this level, the original soil was replaced with cement-modified soil or modified soil that met the requirements of the construction scheme. The backfilling thickness of the modified soil depended on the expansibility of the expansive soil: for slopes with weakly and moderately expansive soil above the first-level platform, the thickness of the replacement fill was 1.0 m, and for slopes with strongly expansive soil it was 1.5 m. The profile of the deep excavation section slope at pile No. 8 + 700 is shown in Fig. 2.
Data source
Taking into account the characteristics of the test area, the ground cover conditions, and the convenience of data acquisition, this paper applied 88 scenes of Sentinel-1A ascending SAR image data as the data source, with a revisit period of 12 days. Precipitation data were obtained from the Meteorological Science Data Center [30].
Methodologies of data processing

Extraction of high coherence points
When processing multi-view SAR data for a given time series using small baseline subset interferometry, the interferometric image pairs are first acquired and interferometric processing is performed; high coherence points are then acquired based on the coherence coefficient image (in fact, the mean value of the coherence coefficient of each pixel across the interferograms), as shown in Fig. 3. The linear deformation and elevation error solution is then performed to finally obtain the deformation field information of the study area. Compared with SBAS-InSAR, PS-InSAR does not require smoothing of elevation when analyzing stable scatterers, and therefore has an advantage in elevation accuracy. Because of this, we adopt a deformation extraction method for the SBAS-InSAR technique that takes permanent scatterers into account and uses the high coherence points extracted with the PS-InSAR technique as ground control points in the solution of elevation errors and deformation information, so as to obtain high-accuracy deformation information for the target object.
The commonly used methods for extracting Ground Control Points (GCP) using PS-InSAR technology mainly include the coherence coefficient method [31], the amplitude dispersion threshold method [32], and the phase dispersion threshold method [18]. To improve the reliability of the reference points in combination with the surface characteristics of the test area, this paper selects a secondary detection method combining the amplitude dispersion threshold method and the coherence coefficient method.
The amplitude dispersion threshold method determines the selection of the reference point based on the mean and standard deviation of the intensity of the same pixel in multiple SAR images, as in equation (1):

$$D_A = \frac{\sigma_A}{m_A} \quad (1)$$

where $\sigma_A$ and $m_A$ are the standard deviation and mean values of the amplitude of the same pixel in the time-series SAR images, respectively, and $D_A$ is the amplitude dispersion index. The smaller the $D_A$, the more stable the pixel is and the more suitable it is for use as a ground control point. In the data processing, it is necessary to set an amplitude dispersion threshold $T_D$: if $D_A \le T_D$, the pixel is selected as a reference point.
The coherence coefficient method compares the information of the same pixel with its surrounding pixels in different SAR images to estimate the coherence coefficient $\gamma$ of the pixel, as in equation (2):

$$\gamma = \frac{\left| \sum_{i=1}^{m} \sum_{j=1}^{n} M(i,j)\, S^{*}(i,j) \right|}{\sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} \left| M(i,j) \right|^{2} \sum_{i=1}^{m} \sum_{j=1}^{n} \left| S(i,j) \right|^{2}}} \quad (2)$$

where M(i, j) and S(i, j) denote the pixel values of the master and slave SAR images forming the interferometric pair, * denotes the complex conjugate, and m and n denote the size of the sliding window. The control points are selected using a secondary detection method combining the amplitude dispersion threshold method and the coherence coefficient method. Firstly, the amplitude dispersion threshold method is used for primary detection by setting an amplitude dispersion threshold for the pixels; then, based on the primary detection, a coherence coefficient threshold is set for secondary detection of the pixels, and finally the available ground control points are determined. After several simulations, the decision coefficient equals 0.896 when the amplitude dispersion threshold and coherence coefficient threshold are set to 3.1 and 0.95 respectively, and the final number of GCP points is 88, as shown in Fig. 3.
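A small NumPy sketch of this two-stage selection is given below; the array names are illustrative, with `amp` a (T, H, W) amplitude stack and `coh` the mean-coherence image:

```python
# Illustrative two-stage GCP selection: amplitude dispersion, then coherence.
import numpy as np

def select_gcp(amp: np.ndarray, coh: np.ndarray, t_d: float, t_coh: float) -> np.ndarray:
    d_a = amp.std(axis=0) / amp.mean(axis=0)  # D_A = sigma_A / m_A per pixel (equation (1))
    return (d_a <= t_d) & (coh >= t_coh)      # boolean mask of candidate control points
```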
Extraction of canal slope deformation
SBAS-InSAR works by optimally combining multiple time-series SAR images into several short-baseline image pair collections through the setting of spatiotemporal baseline thresholds, in order to achieve a higher data sampling rate. In the pre-processing of the SAR images of the study area, to ensure strong coherence between image pairs, the image of 16 February 2017 was selected as the super master image; after several simulations balancing computing time, time interval and the engineering entities, the spatial baseline threshold was set to 45% of the critical baseline and the temporal baseline threshold to 40 d. The connection diagram is shown in Fig. 4. When interfering, filtering and unwrapping the generated image pairs, the classical frequency-domain Goldstein filtering algorithm [33,34] was used for filtering and the Minimum Cost Flow (MCF) method [35,36] was chosen for unwrapping; the best result was achieved when the threshold value of the unwrapping correlation coefficient was set to 0.35. The ground control points obtained using the secondary detection method were used for orbital refinement and re-flattening, followed by two inversions. The first inversion estimates the deformation rate and residual deformation, and the results are optimized through a second phase unwrapping. The second inversion calculates the displacement time series, with a customized atmospheric filter applied to remove the effect of the atmospheric phase and improve the time-series displacement monitoring accuracy. Finally, the deformation rate in the line-of-sight (LOS) direction is obtained by geocoding.
Regional average precipitation calculation method
The Thiessen Polygon Method (TPM) was used to calculate the average precipitation in the study area. The Thiessen polygon is based on an algorithm proposed by the Russian mathematician Georgy Voronoi for the dissection of the spatial plane [30], which is widely used in meteorology [37,38]. A. H. Thiessen used this algorithm to calculate average precipitation, hence the name Thiessen Polygon Method [39]. The calculation formula is as in equation (3):

$$\bar{P} = \sum_{i=1}^{n} f_i P_i \quad (3)$$

where $\bar{P}$ is the average precipitation of the basin; $P_i$ is the precipitation of station i in the corresponding period; and $f_i$ is the area weight of station i.
Weather conditions within a certain region are extremely similar. Five meteorological stations near the study area were selected to calculate the regional average precipitation, namely Xixia, Nanyang, Laohekou, Xiangyang, and Zaoyang (around the study area), as shown in Fig. 5; the final precipitation data obtained for the study area are shown in Fig. 6.
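Once the Thiessen polygon areas are known, the areal average reduces to a weighted sum; a minimal sketch with illustrative (not measured) station values follows:

```python
# Illustrative Thiessen-weighted areal precipitation (equation (3)).
import numpy as np

station_precip = np.array([820.0, 910.0, 760.0, 880.0, 840.0])  # mm, five stations (made-up values)
polygon_area = np.array([120.0, 95.0, 80.0, 110.0, 100.0])      # km^2, Thiessen polygon areas (made-up)
f = polygon_area / polygon_area.sum()                            # weights f_i
p_mean = float(f @ station_precip)                               # P_bar = sum(f_i * P_i)
print(p_mean)
```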
LSTM model
Time-series InSAR deformation monitoring results are only a characterization of the current state of slope deformation. If future slope deformation trends can be predicted based on the available monitoring data, this will provide a technical basis for construction managers to carry out slope stabilization and management in advance, and improve the efficiency of the operation and maintenance of the South-North Water Transfer Project.
The canal slopes are repeatedly lifted or sunk by rainfall. The continuous swelling and shrinkage will inevitably weaken the strength of the geotechnical structure of the slope and induce cracks in the slope and its reinforcement structures (e.g., hoof-shaped berms, anchor umbrellas, etc.), which can easily lead to catastrophic changes. Slope deformation is mainly determined by the geological environment in which the slope is located, such as the strength of the geotechnical structure, slope size, and soil moisture, while external factors such as rainfall have a cyclical effect on slope deformation. Therefore, the measured slope deformation is the final expression of the combined influence of many factors. If a specific functional model can be used to predict the slope deformation at different locations, the slope deformation variables can be deduced for a certain time in the future, so that corresponding prevention and control measures can be taken to reduce the probability of disaster occurrence.
The slope deformation trend of the deep excavation channel section contains obvious fluctuating components, which are difficult to predict accurately by traditional methods. Therefore, based on the theory of deep learning, an LSTM slope deformation prediction model is proposed. Firstly, the SAR data are normalized; secondly, the LSTM slope deformation prediction model is developed.

(1) Deformation series reconstruction. The cumulative deformation series is assembled as

$$Y = (y_0, y_w, y_{2w}, \ldots, y_T)$$

where Y is the post-reconstruction deformation variable; $y_T$ refers to the cumulative deformation of the slope relative to the initial-date image, acquired using InSAR means, for the phase-T image, with $y_0 = 0$; T is the SAR data acquisition date; and w is the satellite revisit period.
(2) LSTM predictive model. The LSTM neural network is a deep learning model. It has the advantage of mitigating gradient vanishing during training on long time-series data and of learning long-term dependencies when processing sequence-related data. An LSTM network layer is represented by a chain structure consisting of several time steps, and all LSTM layers share the network training parameters, enabling a non-linear mapping of real sequence inputs to sequence outputs. The network uses three gating structures: forget gates, input gates and output gates, giving the hidden layer a memory function unit. The LSTM unit can be expressed mathematically as in equation (4):

$$
\begin{aligned}
f_t &= \sigma\left(W_{fx} x_t + W_{fh} h_{t-1} + W_{fc} c_{t-1} + b_f\right) \\
i_t &= \sigma\left(W_{ix} x_t + W_{ih} h_{t-1} + W_{ic} c_{t-1} + b_i\right) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh\left(W_{cx} x_t + W_{ch} h_{t-1} + b_c\right) \\
o_t &= \sigma\left(W_{ox} x_t + W_{oh} h_{t-1} + W_{oc} c_t + b_o\right) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
\quad (4)
$$

where $f_t$, $i_t$, $o_t$, $c_t$ are the activations of the forget gate, input gate and output gate, and the memory cell state, respectively; $W_{fx}$, $W_{ix}$, $W_{ox}$ are the weights from the input layer $x_t$ at moment t; $W_{fh}$, $W_{ih}$, $W_{oh}$ are the weights from the hidden layer $h_{t-1}$; $W_{fc}$, $W_{ic}$, $W_{oc}$ are the weights from the cell state at moment t − 1 (or t for the output gate); $W_{cx}$, $W_{ch}$ are the weights between the cell and the input, and between the cell and the hidden layer, respectively; $b_f$, $b_i$, $b_o$, $b_c$ are the biases of each gating unit; $h_t$ is the output value of the cell at moment t; and $\sigma$ is the activation function.
The input-layer data of the LSTM model in this paper are single-featured time-series data, arranged by a sliding window as in Equation (5):

X = [ [y_1, ..., y_steps]; [y_2, ..., y_{steps+1}]; ...; [y_{m−steps}, ..., y_{m−1}] ]    (5)

where steps is the length of each sliding window and m is the data dimension (the length of the series). The corresponding training-set output is given in Equation (6):

Y = [y_{steps+1}, y_{steps+2}, ..., y_m]^T    (6)
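A minimal sketch of the sliding-window construction of Equations (5)-(6) is shown below; the series length, window length, and the synthetic input are placeholders, not the paper's data:

```python
import numpy as np

def make_windows(series, steps):
    # Each row of X holds `steps` consecutive values (Eq. 5);
    # Y holds the value immediately following each window (Eq. 6).
    X = np.stack([series[i:i + steps] for i in range(len(series) - steps)])
    Y = series[steps:]
    return X, Y

y = np.linspace(0.0, 60.0, 88)      # toy cumulative deformation (mm), 88 epochs
X, Y = make_windows(y, steps=12)
print(X.shape, Y.shape)             # (76, 12) (76,)
```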
where y_s is the cumulative deformation for period s. To implement the LSTM algorithm for slope prediction, a prediction program was written in the Python object-oriented programming language, calling methods from third-party libraries such as Sklearn, Seaborn, and Torch, with model training and parameter searching used to produce the predictions. The InSAR monitoring data were used as the training set. The data are preprocessed with Sklearn's preprocessing module; the training set is passed through the LSTM network to output fitted values; the loss function is calculated and the neuron weights are adjusted; it is then checked whether the number of training epochs has reached the predetermined value and whether the model has converged. The converged network outputs the predicted values, the MRE and RMSE are calculated, and it is checked whether the full parameter grid search has been completed; finally, the data are inverse-normalized and the optimal parameters and prediction accuracy are output. The prediction process is shown in Fig. 7.
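A compact sketch of this workflow is given below, assuming PyTorch for the LSTM and scikit-learn for normalization; the window length, hidden size, learning rate, and epoch count are illustrative choices, not the values tuned in the paper, and the grid search and MRE computation are omitted for brevity:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler

# Toy cumulative-deformation series; the paper used InSAR monitoring data.
series = np.cumsum(np.random.default_rng(1).normal(0.5, 1.0, 120)).reshape(-1, 1)
scaler = MinMaxScaler()
scaled = scaler.fit_transform(series).ravel()

steps = 12
X = np.stack([scaled[i:i + steps] for i in range(len(scaled) - steps)])
y = scaled[steps:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)  # (N, steps, 1)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)  # (N, 1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last time step

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                  # fixed epoch budget for this sketch
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

# Map fitted values back to millimetres via inverse normalization.
fitted_mm = scaler.inverse_transform(model(X_t).detach().numpy())
```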
Deformation rate field
To accurately obtain the deformation rate of the canal slope in the study area, a buffer zone covering 300 m on both sides of the trunk canal was extracted using ArcGIS, yielding the annual average deformation rate map of the slope of the deep excavated canal section of the South-North Water Transfer Middle Route and its surrounding area between 11 January 2017 and 27 December 2019, as shown in Fig. 8. As can be seen from the map, the annual mean deformation rate within the image ranges from −30 mm/a to 30 mm/a, where positive values represent surface uplift, meaning that the satellite-to-ground distance along the LOS direction decreases; negative values represent surface subsidence, with the satellite-to-ground distance increasing. The uplift is more pronounced on both sides of the canal (the canal slopes). Field research showed that the surface cover of the study area changed little, especially on the slope of the South-North Water Diversion Canal, where vegetation on both sides is sparse and relatively stable buildings are distributed along the canal; therefore, there was no loss of coherence during data processing, which indicates that the overall quality of the data is high and the extracted deformation field information is accurate and reliable. The severe subsidence observed outside the study area is discussed together with Fig. 9 below.
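For readers without ArcGIS, the same 300 m buffer extraction can be sketched with the open-source GeoPandas library; the shapefile name and projected CRS below are assumptions for illustration:

```python
import geopandas as gpd

# Hypothetical canal centreline shapefile; reproject to a metric CRS
# (UTM zone 49N is plausible for the study area) before buffering.
canal = gpd.read_file("trunk_canal.shp").to_crs(epsg=32649)
buffer_zone = canal.buffer(300)          # 300 m on both sides of the trunk canal
buffer_zone.to_file("canal_buffer_300m.shp")
```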
Predicted results
Based on the MT-InSAR monitoring data of the deep excavation section at the head of the South-North Water Diversion Canal, time-series data from January 2017 to December 2019 were extracted along the temporal baseline at the 12 monitoring points. The LSTM prediction model was then used to predict the canal slope deformation at the 12 monitoring points over the following 365 days. The results are shown in Fig. 10.
As can be seen from the figure, the predicted values in Fig. 10(a), (c), (d), (i), (j), and (l) fluctuate widely, the predicted increment of deformation in June-July 2020 reaches about 10 mm, and the probability of damage occurring on the canal slopes at sample points A, C, D, I, J, and L is significantly higher than at the remaining sample points (Fig. 10(b), (e), (f), (g), (h), and (k)). In the coming year, the predicted results still show seasonal fluctuations, but the overall trend of deformation growth is obvious. The maximum cumulative deformation is predicted at point C (73.9 mm) and the minimum at point A (51 mm). Therefore, the deformation of slopes in the study area is not uniform, and the periodic rise and contraction of slopes caused by seasonal changes may damage the soil structure of the slopes, which may lead to slope cracking or sliding instability, posing a safety hazard. It is recommended to strengthen monitoring and take reinforcement measures to ensure the safety and stability of the project slopes.
Mechanism analysis of the canal slope deformation
During the construction of the South-North Water Diversion Channel, because clay soils with different expansive properties are present, the overlying soil layers in the study area were replaced to prevent adverse effects on the channel slope: the channel was over-excavated to the replacement thickness, in line with the designed excavation section, and then backfilled with non-expansive soils to the design elevation. According to the project statement for the deep excavation section of the South-to-North Water Transfer Project, the replacement thickness is 1.0 m for medium swelling soils, while for strongly swelling soils the maximum thickness does not exceed 1.5 m. The study area has a large excavation depth (maximum 47 m), a long slope (class IV road), and a side slope ratio of 2.5, but the thickness and extent of the surface replacement layer are limited. Driven by regional groundwater runoff and lateral infiltration recharge from rainfall, groundwater will still affect the deep in-situ expansive soils of the canal slopes, as shown in Fig. 11.
Therefore, it was necessary to test the composition of the in-situ soils in the study area. Samples were taken sequentially for mineral content testing along the channel outside the right-bank replacement area of the deep excavated channel section, with the field sampling located approximately 10 m from the outside of the parapet at the edge of the channel slope, as shown in Fig. 12 (soil samples were numbered SZ1, S1, and S4 in order along the flow direction). Fig. 12(a), (b), and (c) show the spectra of the samples, and Fig. 12(d), (f), and (h) show the mineralogical composition of the soil samples. For each sampling point (sampling date 22 March 2021), the soil was collected at 1 m below ground level, the temperature and moisture content of the soil samples were recorded using a soil moisture monitor, and the sampling time, sampling equipment, soil sample number, and other parameters are given in Table 1. Table 1 shows that the proportion of potassium feldspar in soil sample SZ1 is 15.5%, and the proportion of muscovite in soil samples S1 and S4 is 20% and 30%, respectively. Potassium feldspar and muscovite are the principal parent minerals of the clay mineral illite. Therefore, the tests confirm that an expansive soil whose main clay mineral is illite is present in the geotechnical layers of the test area. Expansive soil is strongly hydrophilic: it expands on wetting and contracts on drying, and can therefore trigger fluctuating deformation of different magnitudes in the canal slope. This point is analysed further in Section 4.2 in combination with local rainfall.
The influence of rainfall on slope deformation
As previously stated, expansive soil is extremely water-sensitive, and its physical and mechanical properties fluctuate greatly before and after contact with water. As a result, the channel slopes on both sides of the South-to-North water transfer canal may slip under long-term regional geological conditions and rainfall [40]. Rainfall data sharing the same time axis as the radar images are used in this subsection and compared with the cumulative deformation of the 12 sample points (see Fig. 8 for the locations of the sample points) to explore their association. Fig. 13 depicts the comparison between the two. The entire canal slope deformation process is separated into three periods based on rainfall data, local climatic factors, and flood season, as illustrated in Fig. 13. Over the complete time axis (January 2017 to December 2019, a cumulative span of three years), the cumulative deformation curves of the 12 sample points grow steadily with time, with cumulative deformation between 40 and 60 mm. Within each period the deformation varies somewhat with the changing of seasons, and there are 1-2 peaks; after each peak the deformation drops noticeably. The cause of this behaviour can be described in conjunction with the local rainfall characteristics along the time axis. Although the undisturbed soil on the canal slope was replaced during construction of the canal head section, and the replacement soil itself is little influenced by rainfall, regional groundwater runoff and lateral infiltration recharge from rainfall change the groundwater environment and increase the water content. This intensifies the physical and chemical reaction of the undisturbed soil below the replacement layer, increases the volume of the expansive soil, raises the canal slope deformation, and forms a wave peak on the relationship diagram between deformation and rainfall (as shown in Fig. 13).
There are four wet spells from 2017 to 2019. The volume of the expansive soil grows after each wet spell, and the deformation shows a peak. The expansive soil then loses water and shrinks, causing the canal slope to settle and the overall deformation to decrease. During the next wet season, the volume of the expansive soil grows again, and the canal slope continues to rise. This demonstrates that rainfall has a considerable influence on slope deformation, with a lag in the peak value. Furthermore, the longer a wet season lasts, the more visible the wave crest. Although the canal slope deformation still increases throughout the dry season, the deformation rate remains rather steady. To summarise, slope deformation is closely tied to local seasonal rainfall and shows periodicity, although the duration of the cycle is uncertain. Table 2 shows the periods of abundant rainfall in the three years and the lag time of the corresponding deformation peaks.
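The lag between rainfall and the deformation peak can be estimated, for example, by cross-correlating the two series; the sketch below is a simplified illustration (it assumes both series are resampled to the 12-day Sentinel-1 revisit interval) rather than the procedure used to compile Table 2:

```python
import numpy as np

def peak_lag_days(rain, deform, interval_days=12):
    """Return the lag (days) maximizing the normalized cross-correlation of
    equal-length rainfall and deformation series sampled at a common interval.
    A positive result means deformation lags rainfall."""
    r = (rain - rain.mean()) / rain.std()
    d = (deform - deform.mean()) / deform.std()
    xcorr = np.correlate(d, r, mode="full") / len(r)
    lags = np.arange(-len(r) + 1, len(r))
    return int(lags[np.argmax(xcorr)]) * interval_days
```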
Predictive model reliability validation and accuracy assessment
(1) Model reliability validation
To verify the reliability of the LSTM predictive model used in this paper, MT-InSAR data from January 2017 to December 2018 were used as training data (blue dots in Fig. 14) to predict the deformation trend of the slope of the South-to-North Water Transfer Project from 2017 to 2019 (green and pink lines in Fig. 14). Fig. 14 serves as a validation plot for the model, using point F as an example. Comparing the predicted curve with the MT-InSAR data curve, it is evident that the two curves have a highly consistent trend, indicating that the predictive model used in this paper is reliable in terms of fitting. To further verify the reliability of the model, however, we also validate the predictive model from the perspective of error analysis.
To verify the reliability of the prediction results, SBAS-InSAR monitoring data from the 12 sample sites from 2017 to 2019 were used as the measured values against the model's predicted values. The reliability of the LSTM model predictions was tested by computing the absolute values of the differences between the predicted and SBAS-InSAR monitored values and then screening the maximum and minimum of these absolute errors. Because each position has over 1000 data points, only the maximum and minimum values for each monitoring point are listed, as indicated in Table 3.
(2) Model prediction accuracy assessment

To verify the prediction accuracy of the LSTM model for slope deformation, the SBAS-InSAR monitoring values were taken as the measured values and compared with the predicted values to evaluate the accuracy of the model. Two indicators, the correlation coefficient R and the root mean square error RMSE, were used for the evaluation; they are calculated by Equations (8) and (9), respectively, and the prediction accuracy evaluation results are shown in Table 4.
R = Cov(y, ŷ) / √(Var(y) · Var(ŷ))    (8)

RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² )    (9)

where y denotes the measured values, ŷ the predicted values, and n the number of samples. As can be seen from Table 4, the correlation coefficient R reaches a maximum of 0.999 and a minimum of 0.994, indicating a high correlation between the predicted and measured values. The minimum and maximum values of RMSE were 3.6 mm and 4.2 mm, respectively. Therefore, the prediction of the overall trend of deformation using the LSTM model adopted in this paper is scientific and reasonable, taking into account the influence of seasonal factors on slope deformation.
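Both indicators are straightforward to compute; a short helper consistent with Equations (8) and (9) might look like this:

```python
import numpy as np

def r_and_rmse(y_true, y_pred):
    """R = Cov(y, y_hat) / sqrt(Var(y) * Var(y_hat)); RMSE = sqrt(mean error^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r = np.cov(y_true, y_pred)[0, 1] / np.sqrt(y_true.var(ddof=1) * y_pred.var(ddof=1))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r, rmse
```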
Conclusion
In this paper, the SBAS-InSAR monitoring technique, taking persistent scatterers into consideration, was used to interpret 88 Sentinel-1A images from 11 January 2017 to 27 December 2019. In the study area, the average annual deformation rate of the slope in the deep excavated canal section of expansive soil is between 10 mm/a and 25 mm/a; the rate on the left bank is slightly smaller than on the right, the maximum cumulative deformation is around 60 mm, and the canal slope of the deep excavated section is in an uplift state. Soil sample maps and mineral compositions were obtained using an X-ray diffractometer on three sets of soil samples taken from the study area. In soil sample SZ1, potassium feldspar accounted for 15.5 percent, whereas muscovite accounted for 20 percent and 30 percent in soil samples S1 and S4, respectively. The rock and soil strata of the test area contain expansive soil mostly composed of illite. Canal slope deformation was significantly influenced by changes in the groundwater environment during the wet season. During the research period, canal slope deformation progressively develops, and the peak deformation is strongly connected to rainfall during high-water periods, displaying a lag, with a maximum lag of 80 d and a minimum of 27 d. The deformation of the canal slopes fluctuates over short periods due to the action of the water environment, but the general deformation trend remains unchanged. Long-term slow sliding of the shallow rock and soil mass on the canal slope, restrained by the anti-sliding piles at the bottom edge of the channel slope, is the principal driving factor for canal slope uplift. In light of this, the LSTM model proposed in this paper first analyses the effect of seasonal factors on slope deformation, then fits a regression to the data after separating the fluctuating component, and reconstructs the series with a seasonal index to produce the predicted slope deformation. This model is used to forecast canal slope deformation for the coming year. When the predicted and measured values are compared, the correlation coefficient reaches 0.999 and the RMSE ranges from 3.6 mm to 4.4 mm; the model therefore has high prediction accuracy.
The LSTM model proposed in this paper can forecast not only the slope deformation of the South-to-North Water Transfer Project but also the long-term periodic variation of other targets influenced by multiple factors (such as load, precipitation, and temperature). Such phenomena, for example ground settlement, crack growth, and building settlement, are ubiquitous. It should be noted that only a single factor, precipitation, is taken into consideration in the forecast of canal slope deformation in the deep excavation portion of the South-to-North Water Transfer Project, and no other factors are included. Model design under the combined effect of multiple factors will be investigated in depth in the next stage to increase the universality of the prediction model.
Fig. 1. Location map of the study area.
Fig. 9 shows the cumulative temporal deformation field (along the LOS direction) of the slope of the deep excavated canal section of the South-North Water Transfer Middle Route extracted between 16 February 2017 and 27 December 2019, with the image data from 11 January 2017 as the base for the extraction, with a time interval of about 1 month and occasionally 2 months. As can be seen from the figure, the slope deformation field changed very little before 23 June 2018, and the slope gradually lifted with time, with a maximum cumulative deformation of about 60 mm. According to Fig. 9, there is a large amount of severe subsidence outside the study area, with a maximum cumulative subsidence of 243 mm. Field investigations have shown that the areas surrounding the canal are mostly sloping farmland, which is prone to soil erosion and sediment transport caused by rainwater. On the other hand, during the construction of the canal facilities, measures such as soil replacement, hardening, and interception ditches were taken (as shown in Fig. 2). Therefore, there is a significant amount of deformation outside the study area, as shown in Fig. 9, while the deformation on both sides of the canal is relatively weak.
Fig. 10. LSTM model prediction results. (a) Prediction of point A; (b) prediction of point B; (c) prediction of point C; (d) prediction of point D; (e) prediction of point E; (f) prediction of point F; (g) prediction of point G; (h) prediction of point H; (i) prediction of point I; (j) prediction of point J; (k) prediction of point K; (l) prediction of point L.
This research was funded by National Natural Science Foundation of China (41671507, U1810203), Science and Technology Project of Henan Province (212102310404), State Grid Henan Electric Power Company Technology Project (5217L0230004); Project to Create a "Double First Class" Discipline in Surveying and Mapping Science and Technology (BZCG202301); Open Fund of the Key Laboratory of Mining Spatial and Temporal Information and Ecological Restoration of the Ministry of Natural Resources (KLM202303, KLM202306); Research project of Henan Provincial Department of Natural Resources (2023-6).
Table 1. Parameters of the collected soil samples.
Table 2. Periods of abundant rainfall and peak times of deformation.
Table 3. Maximum or minimum error for the sample points. Note: A, B, ..., L represent the point numbers; Pr represents the predicted value; Ma the measured value; Va the absolute error of the maximum or minimum error value.
Table 4. Results of the accuracy evaluation.
Elemental Fingerprinting of Pecorino Romano and Pecorino Sardo PDO: Characterization, Authentication and Nutritional Value
Sardinia, located in Italy, is a significant producer of Protected Designation of Origin (PDO) sheep cheeses. In response to the growing demand for high-quality, safe, and traceable food products, the elemental fingerprints of Pecorino Romano PDO and Pecorino Sardo PDO were determined on 200 samples of cheese using validated inductively coupled plasma (ICP) methods. The aim of this study was to collect data for food authentication studies, evaluate nutritional and safety aspects, and verify the influence of cheesemaking technology and seasonality on elemental fingerprints. According to European regulations, one 100 g serving of both cheeses provides over 30% of the recommended dietary allowance for calcium, sodium, zinc, selenium, and phosphorus, and over 15% of the recommended dietary intake for copper and magnesium. Toxic elements, such as Cd, As, Hg, and Pb, were frequently below quantification limits or present at concentrations of no toxicological concern. Linear discriminant analysis was used to discriminate between the two types of pecorino cheese with an accuracy of over 95%. The cheese-making process affects the elemental fingerprint, which can be used for authentication purposes. Seasonal variations in several elements have been observed and discussed.
Introduction
Dairy products and milk are among the most valuable foods due to their high nutritional value. Milk is the primary food of mammals at birth and contains, on average, 4.9% lactose, 6.2% proteins, 7.9% fats, 1.93 g kg−1 of calcium, 1.58 g kg−1 of phosphorus, and vitamins (vitamin A, 146 IU; vitamin D, 0.18 IU) [1]. Dairy products are derived from the milk of major and minor ruminant species, such as cows, buffaloes, sheep, and goats. In addition to their nutritional properties, dairy products are important for the economy and traditions of many countries. Among them, Italy is one of the most recognized in the world for the production of protected designation of origin (PDO) products. The most widely renowned dairy products include Parmigiano Reggiano, Grana Padano, and Pecorino Romano.
Pecorino Romano (PR) is a sheep cheese primarily produced in Sardinia, an Italian region where the sheep dairy industry is economically relevant [2]. In fact, two other PDO sheep's milk cheeses are produced here: Pecorino Sardo PDO (PS) and Fiore Sardo PDO. The production of PDO cheeses from sheep's milk is the main source of income for the livestock industry on the island. Semi-extensive farming is the primary method for rearing milk sheep in Sardinia. This results in cheeses that exhibit unique sensorial properties due to the distinguishing features of pastures and climate.
Despite its relevance to the industry, the economic model is fragile due to its dependence on price fluctuations of PR. This often results in farms failing to cover production costs during times of price drops [2]. For these reasons, recent studies have aimed to enhance the economic performance and sustainability of the supply chain [3], develop new marketing strategies, improve farm technologies [4], and diversify production [5,6]. This latter strategy appears to be the most promising in reducing the reliance of milk costs on PR.
Another key method for enhancing product value is to capitalize on consumer awareness of dairy quality and nutritional properties [7,8]. For instance, vitamins and minerals are of significant interest because of their association with various health benefits [7,9].
Elements such as Na, Mg, K, Ca, Fe, Cu, Zn and Se are essential for supporting the immune response, cellular processes, and antioxidant defenses [10,11]. These elements can be obtained naturally through a balanced and tailored diet, which prevents health complications caused by deficiency or excess. On the other hand, toxic elements such as As, Cd, Hg and Pb pose health risks to humans at any concentration [12]. Anthropogenic activities often lead to pollution by toxic elements, which contaminate food through water, soil, and air [13]. To ensure food safety, regulations and safety measures have been implemented to monitor and limit the presence of toxic elements in food. For instance, the European community has established maximum levels of toxic elements in food [14,15]. However, most of the toxic elements in milk and milk products, such as As, Hg and Cd, are not currently regulated, while the limit for Pb in such matrices is 0.020 mg kg−1 [14]. Previous studies have investigated the dietary intake of toxic metals from milk and its derivatives [16-18].
In addition to nutritional and safety aspects, the elemental composition of foods [19] can provide valuable information for authentication [20], traceability [21], and origin assessment of dairy products [22]. The concentration of elements in foods can be affected by various factors, including climate and translocation from soil, water, and air [19]. Research has shown that elemental fingerprints of dairy products can be used to discriminate their geographical origin [23-25], verify their PDO authenticity [20,26-29], identify breeding methods [30], assess production processes [31], and trace the production chain [32]. Accurate sampling is necessary to achieve these goals and encompass all variables that can influence the elemental fingerprint. This includes seasonality, product processing steps, soil characteristics, pollution sources, and class variability [33].
For several decades, this research group has concentrated on the valorization [34-38], quality protection [39-42], classification [43], and food safety [44-46] of dairy products from Sardinia. Therefore, the potential of elemental analysis in food valorization and authentication was evaluated in this study by measuring the elemental fingerprint of two Sardinian PDO sheep cheeses. The concentrations of 31 elements in 200 samples of PR and PS were determined using inductively coupled plasma-optical emission spectroscopy (ICP-OES) and inductively coupled plasma-mass spectrometry (ICP-MS). The main objective was to evaluate nutritional properties and food safety to enhance the products and record data for food authentication studies, e.g., for geographical discrimination. In addition, the effects of cheesemaking and seasonality on the elemental fingerprints were evaluated.
Elemental Composition of Pecorino Romano PDO and Pecorino Sardo PDO
The elemental analysis of PR and PS was conducted using ICP-OES to determine the concentrations of macroelements (Ca, K, Mg, Na, P, and S), and ICP-MS to measure the amounts of trace elements (Zn, Fe, Mn, Cu, Se, Rb, Sr, Al, B, Co, Ni, Cr, V, Li, and Ag) and toxic elements (As, Cd, Hg, Pb, Sn, Sb, Tl, Te, Bi, and U). The results are presented in Table 1 and are expressed per kilogram of dry matter of cheese. Both cheeses had comparable levels of Ca, Mg, K and P in terms of macroelements, with an order of abundance that reflected their concentration in the original milk: Ca > P > K > S ≥ Mg [1,9]. However, due to the distinct salting processes employed, the Na concentration in PR was higher than that in PS. Typically, the NaCl concentration in PR ranges from 3% to 7%, whereas in PS it rarely exceeds 2%. Even in terms of trace elements, both cheeses have a similar elemental content. Consistent with the initial milk composition, the most abundant trace elements were Zn, Fe, and Cu. Both PR and PS contained similar amounts of Se, Rb, Sr, and Al. Other trace elements, such as Co, Ni, Cr, V, Li, and Ag, were present in both cheeses at levels near or below the limit of quantification.
Regarding toxic elements, both cheeses contained low levels of As, Cd and Pb. Hg was never detected. Other toxic elements, such as Sn, Sb, Te, Tl, Bi, and U, were generally either not quantified or present at very low levels. It is worth noting that the level of Te in PS was significantly higher than that in PR. However, the European Food Safety Authority (EFSA) is currently investigating the potential toxicity of Te [47].
Finally, a semi-quantitative analysis of rare earth elements (REEs) was preliminarily performed. The REEs were seldom detected above the instrumental detection limit, with a few exceptions for the LREEs. Further investigations will be conducted to optimize the limits of quantification of the analytical method and enable the determination of markers for the traceability of the production chain [32].
To the best of our knowledge, the determination of trace elements in PR and/or PS has rarely been accomplished. Previous literature has mainly focused on quantifying macroelements [28,48,49], with only occasional attention given to trace elements such as Zn, Fe, Se, Cu [48], Ba [28], and Al, Ba, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Pt, Sr, and Zn [49]. The data obtained in this study are in good agreement with those of previous studies. Table S1 enables a comparison of the elemental compositions of PS and PR as measured in this study and in the literature.
Differentiation Due to Cheese-Making Process Technology
Elemental fingerprinting has been reported in the literature as a method for authenticating cheeses [23,24,26-28,50]. This technique is commonly used to discriminate cheeses made from milk of different animal origins [26,27,50], from different geographical areas [23,28], or from significantly different cheese-making processes, such as final moisture content and salting [24,26]. Samples were collected from various dairies in Sardinia (Italy) and varied in cheese-making technologies and production period (seasonality).
Principal Component Analysis (PCA) was used for data visualization. The data were cleaned by removing any elements that were not quantified in at least 90% of the samples or were not significant for the analysis. Additionally, Na was excluded as a variable to eliminate the influence of the salting process. Outliers were identified and removed using T² and Q statistics after performing a preliminary PCA (p > 0.05). The results of the PCA are presented in Figure 1.
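The authors performed this step in R and CAT; as a rough Python illustration of T²/Q-based outlier screening after PCA (the element matrix, component count, and 95th-percentile cut-offs below are assumptions, not the paper's exact criteria):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 14))                 # placeholder element matrix

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Xs)
scores = pca.transform(Xs)

t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)  # Hotelling T^2
resid = Xs - scores @ pca.components_                     # reconstruction residuals
q = np.sum(resid**2, axis=1)                              # Q statistic (SPE)

# Simplified screening: flag samples beyond the empirical 95th percentile.
outliers = (t2 > np.percentile(t2, 95)) | (q > np.percentile(q, 95))
print(f"{outliers.sum()} candidate outliers out of {len(X)}")
```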
PC1 and PC2 accounted for 24.5% and 19.7% of the total variance, respectively. Figure 1A shows that the most abundant trace elements, such as Zn and Cu, are characterized by positive values of PC1, while macroelements, including Ca, Mg, P, and K, are characterized by positive PC2 values. Looking at the score plot (Figure 1B), positive PC1 values tended to occur in the PS cluster (red samples), which was associated with a higher concentration of trace elements, while the PR cluster (black samples) tended to have negative PC1 values. The differentiation between the two clusters was accentuated upon observing PC3, which explained 12.1% of the variance (3D score plot, Figure 1C). This evidence suggests that the elemental fingerprint may be used to discriminate between the two types of cheese. Linear Discriminant Analysis (LDA) was used for classification. The MANOVA test showed a significant difference between the two groups (F(15, 180); Wilks' Λ = 0.175; approx. F = 56.78; p < 0.001). Prior to LDA, the dataset was randomized and split into a training set (n = 140) and a test set (n = 55). The results obtained from cross-validation and prediction are reported in Table 2. The levels of discrimination achieved in cross-validation (97.1%) and prediction (95.7%) were highly accurate. The elemental fingerprint can discriminate between PR and PS using macroelements (i.e., Ca, K, Mg, P, and S) and trace elements (i.e., Zn, Fe, Mn, Cu, Se, Rb, Sr, Al, Co, and V). These findings were consistent when LDA was applied to data from samples produced by three farms that yielded both PR and PS during the same period, although the statistical significance was reduced due to the smaller sample size (n = 62).
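Although the paper's LDA was run in R/CAT, the train/test protocol can be reproduced along these lines with scikit-learn; the random data stand in for the real element matrix, and the 5-fold cross-validation is an assumption about the scheme used:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
X = rng.normal(size=(195, 15))                 # placeholder: samples x elements
labels = np.array(["PR"] * 100 + ["PS"] * 95)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, train_size=140, stratify=labels, random_state=0)

lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
cv_acc = cross_val_score(lda, X_tr, y_tr, cv=5).mean()   # cross-validation accuracy
test_acc = lda.fit(X_tr, y_tr).score(X_te, y_te)          # prediction accuracy
print(f"cross-validation: {cv_acc:.3f}  prediction: {test_acc:.3f}")
```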
However, the Principal Component Analysis (PCA) in Figure S1 showed that the samples were distinguishable, and the LDA successfully classified them with an accuracy of 98.1% in cross-validation. Among the literature reviewed, the study by Di Donato et al. [28] was the most comparable to the present study, as it adopted a similar approach for authenticating Italian Pecorino cheese. However, it is important to note that the samples in their study were geographically diverse, collected from three different regions of Italy. In contrast, the differences found in our study were solely attributed to the cheese production method used.
The study confirmed the impact of cheese-making technology on the elemental fingerprint. The differences in the chemical equilibria of milk's various elements, their chemical form (ionic soluble or colloidal, bound to milk proteins and fats), and the variations in cheese-making techniques (coagulation, wheying, and salting) could explain the distinct elemental fingerprints of the two cheeses analyzed in this research. During cheese production, the main stages that lead to a significant variance in the concentration of elements found in the cheese are salting and coagulation.
Salting triggers an osmotic phenomenon, resulting in fluctuations in the levels of unbound minerals, which can cause a loss of water and cationic elements such as Na, K, Al, Cd, Co, and Rb. On the other hand, the elements bound to caseins and fats, such as Ca, P, Fe, Mg, Mn, Ni, Pt, and Zn, become concentrated.
During coagulation, there is a non-uniform distribution of elements between the curd and whey. Na and K are soluble elements and tend to be distributed in the aqueous phase (whey). On the other hand, Ca, Mg and P are associated in different proportions with the colloidal suspension of casein micelles and are more concentrated in the curd during cheese production [51]. Several studies have shown that the soluble phase of sheep's milk may contain different percentages of Ca, Mg and P, with deviations from the total content ranging from 20-25%, 35-64% and 35-40%, respectively [52-54]. Currently, there is limited information available on the distribution of trace elements during sheep milk cheese production. However, data indicate that Zn and Mn are primarily distributed in the curd, accounting for about 90%, while Fe and Cu account for about 70% [52]. Fluctuations in pH, temperature, and milk storage conditions affect the equilibrium between soluble and colloidal forms. Generally, a decrease in pH and temperature shifts the balance towards soluble ionic forms, while an increase in pH and temperature favors the colloidal (insoluble) forms. This results in an increased retention of certain elements in the curd during cheese production.
Based on these considerations, the various conditions of acidification (more intense in PR than in PS), curd breaking (more extensive in PR than in PS), cooking (curd cooking at 45 °C in PR and 43 °C in PS), whey removal, heating, curd cooling, and salting may have induced differences in the balance between curd and whey. As a result, the different cheesemaking technologies favor the retention of certain elements, especially Zn, Fe, Cu, and Mn, in PS compared to PR (see Figure 1). Milder acidification and breaking of the curd may have resulted in less demineralization during the production of PS. This could have led to greater retention of trace elements that are partially bound to casein micelles, such as Zn, Mg, Fe, and Cu, as previously observed. Additionally, rapid lactic fermentation, followed by effective whey removal, promotes curd demineralization [55].
Effect of Seasonality
In Sardinian sheep farming, births are synchronized, and the lactation period starts in November and ends in June-July. The chemical composition of sheep's milk changes during this period, depending on the diet, lactation stage, and climate [4]. The concentration of minerals and major classes of compounds can be affected by the lactation stage [56]. Therefore, we evaluated the effect of seasonality on the elemental composition of PR and PS cheeses using PCA and ANOVA.
The PCA analysis in Figure S2 shows a distinct trend for Pecorino Romano PDO cheese. The loading plot indicates that PC1 describes the correlation between trace element concentrations and seasonality. Negative values indicate elements that are more abundant in PR produced in summer (V, Al, Rb, and Fe), while positive values indicate elements that are more abundant in PR produced in winter (Zn and Cu). The ANOVA results confirmed that seasonality had an impact on 12 out of 15 elements (see Figure S3). Winter-produced cheeses had the highest concentrations of Zn, Cu, K, and Mn, while spring-produced cheeses had the highest concentrations of Ca, Mg, P, and S. Summer-produced cheeses had the highest concentrations of Rb, Fe, Al, and V. The concentrations of Se, Na, and Sr were not affected by seasonal variations.
In contrast, the impact of seasonality on Pecorino Sardo PDO is relatively minor. Although the PCA did not reveal any clear trends (see Figure S4), the ANOVA indicates a significant effect of seasonality on 8 out of 15 elements (Zn, Ca, K, P, S, Cu, Rb, and V), as shown in Figure S5. Notably, the trends for these elements were similar to those observed for PR, with higher concentrations of Ca, P, and Mg found in PS during the spring season. Additionally, the concentration of Cu was highest in winter cheese.
As expected, both types of pecorino exhibit similar seasonal variations in their composition, reflecting the composition of sheep's milk. Ca is closely linked with P in casein micelles, which provide the structure and stability of the micelles. Colloidal calcium phosphate links the casein submicelles together, occupying 6% of the micellar structure. Therefore, there is a positive correlation between Ca, P, and casein content in ruminant milk [57]. Both cheeses (PS and PR) exhibit the highest concentrations of Ca, Mg, P, and S during the spring season, when the amount of casein in the ewe's milk reaches its daily maximum level [4]. S is not directly involved in the stabilization of micelles, but it is present in proteins, specifically whey proteins (cysteine and cystine amino acids). Therefore, the higher concentration of sulfur in spring cheeses could be linked to the protein concentration found in the milk of spring sheep, although the cheese still contains only small amounts of whey proteins. When examining the alkaline elements, no evident trends were found. Na cannot be evaluated due to the salting process. K remained constant in winter and spring but decreased significantly in summer. Rb in PR increased from winter to summer, but this trend was not observed in PS; this difference was likely due to the different production methods. Regarding trace elements, two opposite trends were observed. At the start of the lactation period, the concentrations of Zn, Mn, and Cu were highest; towards the end of the lactation period, the concentrations of Fe and V tended to increase.
The elemental composition of sheep's milk during lactation is subject to seasonal variations, which are influenced by various factors such as the lactation stage, the nutritional status of the animal, and environmental and genetic factors [51,58]. The mineral content in milk is weakly affected by ruminant feeding, because the maternal skeleton tends to demineralize during periods when dietary mineral intake does not meet the mineral requirements of the newborn, thus compensating for the deficit [59]. Skeletal demineralization typically occurs during periods of high mineral demand, such as early lactation and colostrum production [60]. The influence of the lactation stage on the mineral composition of milk is not well documented. In bovine milk, Ca, P, Mg, and Na levels tend to increase towards the end of the lactation period [61]. This is likely due to increased permeability of the mammary epithelium as lactation progresses [62]. Finally, the mineral content of milk can also be influenced by the animal's health status and genetic type. The concentration of most minerals in milk decreases when mastitis is present in the mammary gland, except for sodium and chloride ions, which increase instead [63].
Nutritional and Safety Aspects
Milk and dairy products are considered highly nutritious. Mineral content is an important factor in determining food value, according to consumer preferences. Cheese is a well-known source of minerals, especially Ca, P, and Mg. Casein peptides in milk or cheese prevent the precipitation of calcium in the intestine, making it easily bioavailable [64]. Although the etiology of osteoporosis is complex, adequate calcium intake during childhood and adolescence is important for developing a high peak bone mass. Maximizing bone mass early in life is considered a crucial preventive factor against osteoporosis [65]. This study's results indicate that Pecorino Romano PDO and Pecorino Sardo PDO are sources of several nutritional elements. Figure 2 shows that the daily consumption of both cheeses meets the Recommended Dietary Allowances (RDAs) and Adequate Intakes (AIs) for many elements (daily portion of 100 g). According to European guidelines [67,68], both cheeses are rich in Ca, P, Zn, and Se (DRI > 30%). Additionally, PR and PS cheeses are potential sources of Mg (females, DRI > 15%) and Cu (DRI > 15%). The Cu content in cheeses may vary seasonally, as shown in Figures S3-S5; therefore, it may be possible to produce cheeses with high Cu content during the winter season. These findings are significant, as they allow dairies and protection Consortia to implement nutritional labeling in accordance with European regulations [69]. As for toxic elements such as As, Cd, Hg, Pb, and Sn, both PR and PS cheeses show a high level of food safety. Additionally, the concentrations of Tl, Bi, and U were frequently below their limits of quantification. Moreover, the cheeses analyzed in this study were obtained from various locations (see Figure 3), indicating the high level of food safety of Sardinian sheep's milk production.
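The DRI percentages behind such labeling claims are simple ratios; the sketch below uses the EU labelling reference intakes (Regulation 1169/2011, Annex XIII) together with placeholder cheese contents, not the measured values of this study:

```python
# EU labelling reference intakes (mg/day), Reg. 1169/2011 Annex XIII.
nrv_mg = {"Ca": 800, "P": 700, "Mg": 375, "Zn": 10}
# Placeholder element contents per 100 g serving of cheese (mg).
per_100g_mg = {"Ca": 900, "P": 650, "Mg": 45, "Zn": 4.0}

for element, nrv in nrv_mg.items():
    pct = 100 * per_100g_mg[element] / nrv
    print(f"{element}: {pct:.0f}% of the reference intake per 100 g")
```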
Samples
A total of 200 samples of Pecorino cheese produced in Sardinia in 2021 were obtained from 16 dairy farms that collected milk from livestock farming in the surrounding areas. Two PDO sheep cheeses were considered: PR (n = 103) and PS (n = 97). The production technologies adhere closely to the specifications outlined in the respective consortium regulations [70,71]. Pasteurized whole milk is curdled using cultures of milk enzymes from the milk's place of origin. To produce PR, the milk is coagulated at 38-40 °C and the curd is cooked at 45-48 °C. The resulting 20-35 kg wheels mature for 5-18 months. On the other hand, PS is produced by coagulating the milk at 35-39 °C, cooking the curd at 43 °C, and ripening the 1.7-4.0 kg wheels for 2-6 months. The salting process can occur through both dry and wet methods. For PR dry salting, the side and plate of the cheese receive about 2800 g of NaCl distributed three times over a maximum span of 50 days. Alternatively, the wheels are immersed in a dynamic brine (at 12 °C and a NaCl concentration of 20-22%) for 20 days. Similarly, PS can also be salted dry or wet, but generally it is kept in brine (24% NaCl) for 10 h kg−1 of cheese at a temperature of 10-12 °C.
In this study, cheeses were produced by each dairy in three periods of the year: winter, 37%; spring, 35%; and summer, 28%. Thus, samples differed in production season and cheesemaking. Additionally, the cheeses varied in maturation (PR: 5-18 months; PS: 2-6 months). The wheels were first subsampled according to ISO 707:2008 [72]. The aliquots obtained were then aggregated, homogenized, and stored at a temperature between −18 °C and −24 °C until analysis. Figure 3 shows selected information on the Pecorino cheese samples.
Instrumentation and Reagents
Elemental analysis was performed on a NexION 350X spectrometer equipped with an S10 autosampler, a glass concentric nebulizer, a glass cyclonic spray chamber, and a kinetic energy discrimination (KED) collision cell, all from Perkin Elmer (Milan, Italy). The most abundant elements were determined using an OPTIMA 7300 DV spectrometer (Perkin Elmer, Waltham, MA, USA) equipped with a GemTip Cross-Flow II nebulizer (Perkin Elmer, Waltham, MA, USA) and an autosampler (SC-2 DX, Elemental Scientific Inc., Omaha, NE, USA). To determine the macroelements (Ca, K, Mg, Na, P, and S), cheese samples were previously dried in a drying oven (Memmert, Schwabach, Germany) and then calcined in a muffle furnace (Gelman Instrument, Opera, Italy). To detect the trace elements (Zn, Fe, Mn, Cu, Se, Rb, Sr, Al, B, Co, Ni, Cr, V, Li, and Ag) and toxic elements (As, Cd, Hg, Pb, Sn, Sb, Tl, Te, Bi, and U), samples underwent microwave acid digestion as described in the following section.
Sample Preparation
Sample preparation for macroelement analysis was performed as previously described [6]. The sample digestion for trace and toxic elements was performed using an ultraWAVE™ microwave single reaction chamber (SRC) system (Milestone, Sorisole, Italy). Compared with conventional microwave instruments, SRC technology reaches higher temperatures and pressures and can manage larger numbers of samples using smaller amounts of reagents [73]. In accordance with previous studies [73,74], nitric acid and hydrogen peroxide were used as oxidizing agents in this study. Approximately 0.450 g of sample (exactly weighed on the analytical balance) was treated with 1 cm3 of HNO3 (67-69%), 2 cm3 of H2O2 (30%), and 4 cm3 of ultrapure H2O.
The digestion program is listed in Table 3. After cooling, the samples were collected, diluted to 15 cm3 with ultrapure H2O, and filtered using a syringe filter. The final residual acidity, determined by titration with 0.1 mol dm−3 sodium hydroxide, was 2.5 ± 0.2%. To ensure the quality of the analytical data, each digestion batch included a blank and a sample of the certified reference material (CRM) ERM-BD151. The same CRM was used to assess the efficiency of the microwave acid digestion in terms of matrix effect and trueness.
Elemental Analysis, Validation, Quality Control and Assurance
Macroelements (Ca, K, Mg, Na, P, and S) were determined using inductively coupled plasma-optical emission spectroscopy (ICP-OES), whereas trace and toxic elements were analyzed using inductively coupled plasma-mass spectrometry (ICP-MS). The instrumental parameters used for the analysis are reported in Table S2 (ICP-OES) and Table S3 (ICP-MS). Further details regarding the ICP-OES method have been reported previously [6], whereas the ICP-MS method was fully developed and validated in this study. For each PDO cheese, three samples were randomly selected and analyzed using the semi-quantitative TotalQuant® method (Syngistix software v 2.3). This preliminary analysis allowed the assessment of the elements that were always below the instrumental detection limit. Excluding these, the elements of interest for possible ICP-MS quantification were Ag, Al, As, B, Bi, Cd, Co, Cr, Cu, Fe, Hg, Li, Mn, Ni, Pb, Rb, Sb, Se, Sn, Sr, Te, Tl, U, V, and Zn. Subsequently, the possible presence of polyatomic interferences in the real matrix was ascertained, and for each element the most suitable analysis mode (STD mode or KED mode) was determined. Validation was accomplished in terms of limits of detection and quantification, precision, and trueness. The validation parameters are listed in Table S4. The limits of detection (LoD) and quantification (LoQ) were calculated according to Currie [75]. Method repeatability (CV%r) was assessed by analyzing samples in triplicate within the same analytical session, whereas intermediate precision (CV%IP) was calculated using data obtained from different analytical sessions. Finally, trueness was evaluated by analyzing the certified milk CRM ERM-BD151 and by spiking tests. In the latter case, for each analyte, samples were spiked three times at increasing concentration levels. The trueness measured by analyzing the CRM (Table S5) was between 89 ± 5% (P) and 114 ± 5% (Na) for the macroelements (ICP-OES method), whereas that measured for the trace and toxic elements (ICP-MS method) ranged from 92 ± 5% (Cd) to 110 ± 5% (Se). Furthermore, the recoveries measured by spiking tests ranged from 86 ± 1% (Ag) to 149 ± 7% (As). The recovery results (Table S4) show that the determination of more than 80% of the elements was bias-free (criterion: t-test, p = 95%). Moderate underestimation was observed for Ag and Bi, and slight overestimation for Sr and V. However, the observed biases are acceptable according to the AOAC guidelines [76]. Finally, for Na and As, a meaningful overestimation bias was observed.
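Spike recovery and precision statistics of this kind reduce to short formulas; a generic helper (not the authors' validation scripts) is sketched below:

```python
import numpy as np

def recovery_percent(found_spiked, found_unspiked, added):
    """Spike recovery: (found in spiked sample - native content) / added * 100."""
    return 100.0 * (found_spiked - found_unspiked) / added

def cv_percent(replicates):
    """Coefficient of variation (CV%) of replicate measurements."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

print(recovery_percent(12.1, 2.0, 10.0))   # ~101% recovery
print(cv_percent([5.1, 5.3, 5.0]))         # repeatability CV%
```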
Quantification was performed by external calibration using single- and multi-element standard solutions in 2.5% HNO3. The calibration range was chosen according to the expected analyte concentrations. Measurements were performed in triplicate, and the data were blank-corrected. To account for signal fluctuations and matrix effects, Rh (50 µg dm−3) and Ir (1 µg dm−3) were used as internal standards. A 60-s wash with a 2% aqueous solution of HNO3 was introduced between consecutive samples to prevent memory effects.
Statistical Analysis
Data analysis was performed using R-Studio (v. 4.3.1) and the Chemometric Agile Tool (CAT) [77]. The Shapiro-Wilk test was used to confirm the normal distribution of the data. ANOVA and MANOVA tests were used to compare groups, with Tukey's HSD test as the post hoc test. The data set was cleaned by removing all elements that were not quantified in at least 90% of the samples or were not significant for the analysis. The elements retained for chemometric analysis were Mg, Ca, P, K, Zn, Cu, Fe, Mn, Al, Rb, Sr, S, Se, and V. Principal Component Analysis (PCA) was performed for data visualization. Linear Discriminant Analysis (LDA) was used to discriminate samples from different categories. Statistical significance was set at α = 0.05.
Conclusions
Pecorino Romano PDO and Pecorino Sardo PDO are two popular and highly regarded sheep's cheeses. This study aimed to address the lack of knowledge regarding their elemental composition and trace element content. To achieve this, 31 elements (macro, trace, and toxic) were analyzed in a comprehensive sample of cheeses using ICP-based methods. The results showed that these cheeses are rich in essential minerals such as calcium, phosphorus, zinc, and selenium, and could potentially be a source of copper and magnesium.
Additionally, the data allowed for an assessment of the impact of cheesemaking and seasonality on the elemental composition. Linear discriminant analysis showed that the elemental fingerprint can effectively distinguish dairy products based on their production method. This result could provide a useful methodology for detecting food fraud involving other, cheaper pecorinos.
Figure 1. Principal component analysis performed on data obtained from the elemental determination of 14 elements in 196 pecorino samples: (A) loading plot; (B) score plot; (C) 3D score plot. Objects colored according to cheese type.
Figure 2. Mineral daily intakes for adult males and females (17-70 years old) for the consumption of 100 g of Pecorino Romano PDO and Pecorino Sardo PDO.
Figure 3. Description of pecorino sampling in terms of dairies, samples, and period of production.
Table 2. Results of the LDA performed for discrimination based on cheese type. Confusion matrix and accuracy in cross-validation (training) and prediction (testing).
"year": 2024,
"sha1": "6d177f0c60e4fc5ba5cb54dceea3988dcdf1068f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c3ac2bde5d99001e591f3fad379e0affde4a5ab9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |