Fat embolism due to bilateral femoral fracture: a case report
Fat embolism syndrome is usually associated with surgery for large bone fractures. Symptoms usually occur within 36 hours of hospitalization after traumatic injury. We present a case of fat embolism syndrome due to femoral fracture. Prompt supportive treatment of the patient's respiratory system and additional pharmaceutical treatment provided a positive clinical outcome. There is no specific therapy for fat embolism syndrome; prevention, early diagnosis, and adequate symptomatic treatment are very important. Most studies in the last 20 years have shown that the incidence of fat embolism syndrome is reduced by early stabilization of the fractures, and that the risk is decreased even further with surgical correction rather than conservative management.
Introduction
A fat embolus is a fat particle that enters the circulatory system causing vascular occlusion. Fat emboli can cause a more serious condition called fat embolism syndrome (FES), in which circulating fat emboli or macroglobules result in multisystem dysfunction. 1 In different studies, the incidence of FES ranges from <1% to 29%, but the exact incidence has not yet been determined. 2,3 Approximately 90% of the cases are associated with trauma, especially fracture or surgery of a large bone, such as the femur. As a result of the disrupted bone, the bone marrow fat escapes into circulation. Although this may be a cause, fat embolism may also occur due to conditions such as extensive trauma or syndromes that modify lipid metabolism. 3 Symptoms of fat embolism usually occur 12-36 hours after a traumatic injury. 4 Pulmonary dysfunction (dyspnea, tachypnea, hypoxemia) is the primary manifestation, occurring in 75% of cases. 5 Up to 10% of cases may develop respiratory failure and 5%-8% of patients may progress to the severe acute respiratory distress syndrome (ARDS). 6 In cases with bilateral fractures, the incidence of ARDS has been reported to be higher than for single fractures, reaching almost 43%. 7 Half of FES patients develop severe hypoxemia and respiratory insufficiency requiring mechanical ventilation. 8 However, the role of fat emboli as a cause of ARDS after injury has not yet been defined. 9 Neurological features (agitation, delirium, seizures or coma) are seen in 86% of patients with FES. 10 Some other minor symptoms that may also be present are: anemia, low platelets, tachycardia, pyrexia, myocardial depression, and renal changes (eg, oliguria or hematuria). 5 Clinical findings are important in diagnosing FES, while biochemical changes may also be of value. The most common classification scheme for diagnosis is that of Gurd and Wilson, providing major and minor diagnostic criteria (Table 1), according to which diagnosis of FES requires the presence of at least one major and four minor criteria. 5 The reliability of these criteria has been questioned and other schemes have been proposed based on the involvement of the respiratory system alone (Table 1). 12 More recently, Schonfeld et al 13 proposed a semiquantitative measure to diagnose FES, as proposed by Lindeque et al, 12 in which a score of more than five is required for a positive diagnosis (Table 2).
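The two decision rules just described lend themselves to a short worked example. The following Python sketch encodes only what is stated above (Gurd and Wilson: at least one major and four minor criteria; Schonfeld: a cumulative score greater than five); the individual items and their weights belong to Tables 1 and 2, which are not reproduced here, so the inputs below are illustrative assumptions rather than the published scoring sheets.

def gurd_wilson_positive(n_major, n_minor):
    # Gurd and Wilson: FES requires at least one major and at least four minor criteria.
    return n_major >= 1 and n_minor >= 4

def schonfeld_positive(item_scores):
    # Schonfeld: a cumulative score of more than five points suggests FES.
    return sum(item_scores) > 5

# Hypothetical inputs resembling the case described below:
print(gurd_wilson_positive(n_major=2, n_minor=1))  # False - fewer than four minor criteria
print(schonfeld_positive([5, 4, 3, 1, 1]))         # True - 14 points exceed the threshold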
Case report
A 20-year-old male was transferred to the emergency room after a car accident. The patient suffered bilateral closed femoral fractures (stage II) and was hemodynamically stable, without any deterioration of consciousness. Chest radiography and arterial blood gas analysis revealed normal findings. Nine hours later, reamed femoral nailing was performed bilaterally. Twenty-four hours after admission, the patient manifested tachycardia (>110/minute), dyspnea, and hypoxemia (PO2 = 74 mmHg, PaCO2 = 35 mmHg, pH = 7.44). The blood test results were normal, but because the D-dimer levels were elevated, a spiral computed tomography (CT) scan was performed for greater visualization of pulmonary vessels.
The spiral chest CT scan revealed peripheral emboli and mild pleural effusion in both hemithoraxes (Figure 2). The echocardiography showed normal systolic function (ejection fraction = 60%-70%) and moderate right ventricular dilatation. These findings excluded the occurrence of cardiogenic pulmonary edema, but the suspicion of pulmonary embolism emerged. The patient was put on supplemental oxygen and heparin IV, but gradually his clinical status deteriorated. Forty-eight hours after admission, he developed widespread petechiae on the chest and respiratory failure (PO2 = 51 mmHg, PaCO2 = 33 mmHg, pH = 7.47, FIO2 = 21%). Chest X-ray radiography showed the appearance of ARDS, requiring mechanical ventilation of the patient (Figure 1).
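As a supplementary arithmetic note (not part of the original report), the blood-gas values above can be summarized with the PaO2/FiO2 ratio commonly used to grade hypoxemia in ARDS; the 300/200/100 mmHg cut-offs mentioned in the comments are the widely used Berlin-definition thresholds, assumed here for illustration only.

def pf_ratio(pao2_mmhg, fio2_fraction):
    # PaO2/FiO2 ("P/F") ratio in mmHg; FiO2 is given as a fraction (room air = 0.21).
    return pao2_mmhg / fio2_fraction

print(round(pf_ratio(74, 0.21)))  # ~352 mmHg at 24 hours - above the 300 mmHg ARDS cut-off
print(round(pf_ratio(51, 0.21)))  # ~243 mmHg at 48 hours - in the mild ARDS range (200-300)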
The combination of long-bone fracture, petechial rash, hypoxemia, tachycardia, and the rapid onset of ARDS within 24-48 hours after surgery prompted a diagnosis of FES. With supportive treatment in the intensive care unit (ICU), good hydration, and cortisone therapy (100 mg × 3) for 10 days, the patient subsequently improved. On the patient's second day in the ICU, cerebral and pelvic CT scans were also performed, showing normal findings. The patient remained in the ICU for a period of 10 days until extubation on day 12. On this same day he was transferred to the pulmonary department with good oxygenation and hemodynamic stability. He was discharged from the hospital 5 days later.
Discussion
Fat embolism is most commonly associated with skeletal injury and is most likely to occur in patients with multiple long-bone and pelvic fractures. Some other traumatic causes include: rib fractures, massive soft tissue injury, severe burns, bone marrow biopsy, and liposuction. More rarely, fat embolism is also associated with some nontraumatic disorders, such as pancreatitis, diabetes mellitus, and high-dose steroid therapy. 14 Additionally, a few related studies report that the factors that increase the risk of developing FES are: young age, closed fractures, multiple fractures, and conservative therapy for long-bone fractures. 13,15 The risk of fat embolism in bilateral femoral fractures is higher than in isolated long-bone fractures. Patients with bilateral femoral fractures have a higher mortality rate than those with single femoral fractures. 16 Overall mortality is estimated at 5%-15% and up to 36% in patients who require mechanical ventilation. 3,17 For the development of FES, a mechanical theory and a biochemical theory have been proposed. According to the mechanical theory, FES occurs when large fat globules enter the venous circulation, resulting in the obstruction of the pulmonary vascular system. However, this theory cannot explain the delay in the development of symptoms. 18 The biochemical theory suggests that hormonal changes after extensive trauma induce hydrolysis of triglycerides and release of free fatty acids, causing toxic endothelial damage in pulmonary capillary beds, as well as ARDS in animal models. 19 In this theory, the time required to produce these toxic intermediaries explains the delay in development of symptoms. Despite the large number of studies supporting the involvement of these mechanisms in the development of FES, the evidence is considered circumstantial. 13,15,17 Among the reasons for the difficulty in diagnosing FES is that it can complicate widely different clinical conditions and may vary in severity. Diagnosis is established on the basis of the patient's clinical condition and symptoms, using the process of exclusion of other possible causes. The most useful examinations in diagnosing FES include imaging studies such as chest radiography, CT scans, pulmonary ventilation/perfusion scans, and cerebral magnetic resonance imaging, as well as cardiac investigations to exclude cardiac causes. In our case, the diagnosis of FES was prompted on the basis of the rapid onset of ARDS (Figure 1) and petechial rash combined with the existing hypoxemia and tachycardia, with no evidence of sepsis, cardiogenic pulmonary edema, or other causes of ARDS. Even though these symptoms met only two major and one minor criteria of Gurd and Wilson's classification, they totaled 14 points in Schonfeld's classification. These findings confirm the results of previous studies suggesting that many of the major and minor criteria of FES may be outdated or nonspecific regarding ARDS. 20 The treatment of fat embolism is only supportive and includes maintenance of adequate oxygenation, stable hemodynamics, normal blood levels, hydration, prevention of deep venous thrombosis and gastrointestinal bleeding, and nutrition.
The purpose of medication is to reduce morbidity and prevent complications. High-dose corticosteroids have been effective in preventing the development of FES in several studies, but the use of corticosteroid prophylaxis remains controversial. 21 Albumin has also been recommended because it not only restores blood volume but also binds fatty acids, which may limit lung injury. 22 The timing and the type of surgery for fractures constitute modifiable factors for the development of FES. Previous studies have revealed that after a traumatic injury, early surgical fixation in patients with isolated femoral fractures could prevent the development of FES. Interestingly, in a study of 60 cases that underwent surgery within 10 hours of injury, none of the patients was diagnosed with FES. 23 However, in our case, even though early surgical stabilization was performed 10 hours after injury, the development of FES was not prevented. Consequently, early surgery in bilateral femoral fractures may not have as positive an influence as seen in previous cases of single femoral fractures. Therefore, it was considered that bilateral fractures have a greater magnitude of injury and may cause a more severe impact on the immune system.
On the other hand, a number of studies have pointed out that surgical orthopedic treatment, especially intramedullary nailing, is associated with a higher probability of fat embolism and pulmonary complications such as ARDS. Reamed intramedullary nailing, performed in our patient, is the preferred method for stabilization of femoral shaft fractures; 24 yet, it has been shown to cause systemic complications due to the release of fat emboli from the bone marrow of the medullary canal. 25 Moreover, the absence of other causative factors of pulmonary dysfunction (aspiration, previous pulmonary disease) led to the hypothesis that the procedure of intramedullary stabilization triggered the development of FES postoperatively.
In summary, there is no specific therapy for FES; prevention, early diagnosis, and adequate symptomatic treatment are very important. Most of the studies in the last 20 years have shown that the incidence of FES is reduced by early stabilization of the fractures and the risk is even further decreased with surgical correction rather than conservative management. 26 In the present study, the bilateral femoral fractures and intramedullary stabilization were assumed to be the most significant factors responsible for the development of FES postoperatively.
Conclusion
The diagnosis of FES may be complex because there are no pathognomonic signs (except for the petechiae). Early suspicion combined with chest radiography and cerebral CT is the key to diagnosis. Bilateral fractures should be approached according to the principles of damage control orthopedics in order to avoid potential complications. As the literature to date is limited, further research is needed to investigate the controversial relationship between bilateral femoral fractures and FES.
Consent
Written informed consent was obtained from the patient upon discharge for publication of this case report and all accompanying images.
Food/Feed and Environmental Risk Assessment of Insect-Resistant and Herbicide-Tolerant Genetically Modified Maize Bt11 x GA21 in the European Union under Regulation (EC) No 1829/2003 (EFSA/GMO/UK/2007/49)
In preparation for a legal implementation of EU-regulation 1829/2003, the Norwegian Environment Agency (former Norwegian Directorate for Nature Management) has requested the Norwegian Food Safety Authority (NFSA) to give final opinions on all genetically modified organisms (GMOs)
and products containing or consisting of GMOs that are authorized in the European Union under Directive 2001/18/EC or Regulation 1829/2003/EC within the Authority's sectoral responsibility. The Norwegian Food Safety Authority has therefore, by letter dated 13 February 2013 (ref. 2012/150202), requested the Norwegian Scientific Committee for Food Safety (VKM) to carry out scientific risk assessments of 39 GMOs and products containing or consisting of GMOs that are authorized in the European Union. The request covers scope(s) relevant to the Gene Technology Act. The request does not cover GMOs on which VKM has already conducted its final risk assessments. However, the Agency requests VKM to consider whether updates or other changes to earlier submitted assessments are necessary.
The insect-resistant and herbicide-tolerant genetically modified maize Bt11 x GA21 (Unique Identifier SYN-BTØ11-1 x MON-ØØØ21-9) from Syngenta Seeds is approved under Regulation (EC) No 1829/2003 for food and feed uses, import and processing since 28 July 2010 (Commission Decision 2010/4263/EC). Genetically modified maize Bt11 x GA21 has previously been risk assessed by the VKM Panel on Genetically Modified Organisms (GMO), commissioned by the Norwegian Food Safety Authority and the Norwegian Environment Agency, related to the EFSA public hearing of the application EFSA/GMO/UK/2007/49 in 2008 (VKM 2009a). In addition, Bt11 and GA21 have been evaluated by the VKM GMO Panel as single events and as components of several stacked GM maize events (VKM 2005a,b, 2007, 2008, 2009b,c,d, 2010, 2012a,b).
The food/feed and environmental risk assessment of the maize Bt11x GA21 is based on information provided by the applicant in the application EFSA/GMO/UK/2007/49, and scientific comments from EFSA and other member states made available on the EFSA website GMO Extranet. The risk assessment also considered other peer-reviewed scientific literature as relevant.
The VKM GMO Panel has evaluated Bt11 x GA21 with reference to its intended uses in the European Economic Area (EEA), and according to the principles described in the Norwegian Food Act, the Norwegian Gene Technology Act and regulations relating to impact assessment pursuant to the Gene Technology Act, Directive 2001/18/EC on the deliberate release into the environment of genetically modified organisms, and Regulation (EC) No 1829/2003 on genetically modified food and feed. The Norwegian Scientific Committee for Food Safety has also decided to take account of the appropriate principles described in the EFSA guidelines for the risk assessment of GM plants and derived food and feed (EFSA 2011a), the environmental risk assessment of GM plants (EFSA 2010), selection of comparators for the risk assessment of GM plants (EFSA 2011b) and for the post-market environmental monitoring of GM plants (EFSA 2011c).
The scientific risk assessment of maize Bt11 x GA21 includes molecular characterisation of the inserted DNA and expression of novel proteins, comparative assessment of agronomic and phenotypic characteristics, nutritional assessments, toxicology and allergenicity, unintended effects on plant fitness, potential for gene transfer, interactions between the GM plant and target and nontarget organisms, and effects on biogeochemical processes.
It is emphasized that the VKM mandate does not include assessments of contribution to sustainable development, societal utility and ethical considerations, according to the Norwegian Gene Technology Act and Regulations relating to impact assessment pursuant to the Gene Technology Act. These considerations are therefore not part of the risk assessment provided by the VKM Panel on Genetically Modified Organisms.
The genetically modified maize stack Bt11 x GA21 has been produced by conventional crossing between inbred lines of maize containing the single events Bt11 and GA21. The F1 hybrid was developed to provide protection against certain lepidopteran target pests, and to confer tolerance to glufosinate-ammonium and glyphosate-based herbicides.
Molecular Characterization:
Southern blot and PCR analyses have indicated that the recombinant inserts in the parental maize lines Bt11 and GA21 are retained in the stacked maize Bt11 x GA21. Genetic stability of the inserts has previously been demonstrated in the parental maize lines. Protein measurements show comparable levels of the Cry1Ab, PAT and mEPSPS proteins between the stacked and single maize lines. Phenotypic analyses also indicate stability of the insect resistance and herbicide tolerance traits in the stacked maize. The VKM Panel on GMO considers the molecular characterisation of maize Bt11 x GA21 and its parental events Bt11 and GA21 as adequate.
Comparative Assessment:
Comparative analyses of data from field trials located at representative sites and environments in North America during the 2005 growing season indicate that maize stack Bt11 x GA21 is compositionally, agronomically and phenotypically equivalent to its conventional counterpart, with the exception of the insect resistance and the herbicide tolerance, conferred by the expression of Cry1Ab, PAT and mEPSPS proteins.
Based on the assessment of available data, the VKM GMO Panel is of the opinion that conventional crossing of maize Bt11 and GA21 to produce the hybrid Bt11 x GA21 does not result in interactions between the newly expressed proteins affecting composition and agronomic characteristics.
Food and Feed Risk Assessment:
A whole food feeding study on broilers has not indicated any adverse health effects of maize Bt11 x GA21, and shows that maize Bt11 x GA21 is nutritionally equivalent to conventional maize. The Cry1Ab, PAT or mEPSPS proteins do not show sequence resemblance to other known toxins or IgE allergens, nor have they been reported to cause IgE mediated allergic reactions. Some studies have however indicated a potential role of Cry-proteins as adjuvants in allergic reactions.
Based on current knowledge, the VKM GMO Panel concludes that maize Bt11 x GA21 is nutritionally equivalent to conventional maize varieties. It is unlikely that the Cry1Ab, PAT or mEPSPS proteins will introduce a toxic or allergenic potential in food or feed based on maize Bt11 x GA21 compared to conventional maize.
Environmental Risk Assessment:
The scope of the application EFSA/GMO/UK/2007/49 includes import and processing of maize stack Bt11x GA21 for food and feed uses. Considering the intended uses of maize Bt11 x GA21, excluding cultivation, the environmental risk assessment is concerned with accidental release into the environment of viable grains during transportation and processing, and indirect exposure, mainly through manure and faeces from animals fed grains from maize Bt11 x GA21.
Maize Bt11 x GA21 has no altered survival, multiplication or dissemination characteristics, and there are no indications of an increased likelihood of spread and establishment of feral maize plants in the case of accidental release into the environment of seeds from maize Bt11 x GA21. Maize is the only representative of the genus Zea in Europe, and there are no cross-compatible wild or weedy relatives outside cultivation. The VKM GMO Panel considers the risk of gene flow from occasional feral GM maize plants to conventional maize varieties to be negligible in Norway. Considering the intended use as food and feed, interactions with the biotic and abiotic environment are not considered by the GMO Panel to be an issue.
Overall Conclusion:
Based on current knowledge, the VKM GMO Panel concludes that maize Bt11 x GA21 is nutritionally equivalent to conventional maize varieties. It is unlikely that the Cry1Ab, PAT or mEPSPS proteins will introduce a toxic or allergenic potential in food or feed based on maize Bt11 x GA21 compared to conventional maize. This work was carried out in collaboration between all authors. The opinion has been assessed and approved by the Panel on Genetically Modified Organisms of VKM. All authors read and approved the final manuscript.
Competence of VKM experts:
Persons working for VKM, either as appointed members of the Committee or as external experts, do this by virtue of their scientific expertise, not as representatives for their employers or third party interests. The Civil Services Act instructions on legal competence apply for all work prepared by VKM.
"year": 2020,
"sha1": "d32f078455e56e05b8593123de1655e98708b906",
"oa_license": null,
"oa_url": "http://www.journalejnfs.com/index.php/EJNFS/article/download/30201/56661",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "070c2a1d5f273abae8796658fb7f39c6971dc22a",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Lockdowns and the COVID-19 pandemic: What is the endgame?
An overall long-term strategy for managing the coronavirus disease 2019 (COVID-19) pandemic is presented. This strategy will need to be maintained until herd immunity is achieved, hopefully through vaccination rather than natural infection. We suggest that a pure test-trace-isolate strategy is likely not practicable in most countries, and a degree of social distancing, ranging up to full lockdown, is the main public-health tool to mitigate the COVID-19 pandemic. Guided by reliable surveillance data, distancing should be continuously optimised down to the lowest sustainable level that guarantees a low and stable infection rate in order to balance its wide-ranging negative effects on public health. The qualitative mixture of social-distancing measures also needs to be carefully optimised in order to minimise social costs.
Introduction
The rapid spread of the coronavirus disease 2019 (COVID-19) pandemic led to the widespread introduction of social distancing ranging up to full lockdown. As countries are considering scaling back distancing amidst considerable scientific uncertainty, a clear overall strategy for the management of the COVID-19 pandemic is often lacking. Here, we try to conceptualise such a strategy -one that is robust to the uncertainties and to the implicit assumptions behind the various public-health actions proposed. To do that, it is important first to clarify some fundamental facts of the case.
Herd immunity is an end state, not a strategy
Herd immunity has been widely bashed as the 'failed strategy' that the UK followed before changing tack and imposing a national lockdown. The ensuing controversy has all but poisoned this scientific term, which just refers to a state where the number of people immune in a population is so high that a pathogen cannot find enough susceptibles to infect and gradually dies out. In reality, herd immunity is the only possible endgame of the COVID-19 pandemic. Given the worldwide extent of viral spread and the large degree of asymptomatic or mildly symptomatic transmission [1], containing the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus in the same way that Ebola or SARS-CoV-1 were managed is beyond the realms of possibility. Therefore, the pandemic will only definitively end once herd immunity is reached, whether that be through vaccination, natural infection or a mixture of the two [2].
Furthermore, even a modest degree of population immunity, at levels below those required for herd immunity, still results in a proportional reduction in the transmissibility of the pathogen. Therefore, it does help to bring the effective reproduction (R) number <1 and reverse the course of the pandemic, alongside proper control measures. This suggests that countries that had more infections in the first pandemic wave may face fewer challenges in controlling a potential second wave, and vice versa.
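A minimal numerical sketch of the relation invoked here, assuming the standard result that the effective reproduction number scales with the susceptible fraction, R_eff = R0 x (1 - immune fraction); the R0 value used below is purely illustrative and is not an estimate taken from this article.

def effective_r(r0, immune_fraction):
    # Transmissibility falls in proportion to the immune fraction of the population.
    return r0 * (1.0 - immune_fraction)

def herd_immunity_threshold(r0):
    # Immune fraction at which R_eff drops to 1 without any other control measures.
    return 1.0 - 1.0 / r0

r0 = 2.5  # assumed basic reproduction number, for illustration only
print(effective_r(r0, 0.20))        # 2.0 - 20% immunity cuts transmission by 20%
print(herd_immunity_threshold(r0))  # 0.6 - about 60% would need to be immune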
An important caveat is that the duration of protective immunity after natural infection with SARS-CoV-2 is not currently known, and is one of the most urgent questions for research. Antibodies have been shown to last for at least a few months [3], and T-cell responses are likely to persist for several years more [4,5]. Nevertheless, if people can be reinfected with SARS-CoV-2 within a few years, the virus could become endemic like other seasonal respiratory viruses (e.g. influenza, respiratory syncytial virus, etc.). In such a case, herd immunity might only be achievable through vaccination, which might have to be repeated in order to sustain immunity levels.
Lockdowns are not free in terms of public health
The dilemma between maintaining or lifting lockdowns is often counterproductively framed as a contrast between population health and the economy. In the absence of effective therapeutics and vaccines, lockdowns are intended to prevent mass casualties from the rapid introduction of a virus in an immunologically naïve population, especially in countries with a poor health infrastructure, limited surge capacity and/or social inequalities. However, lockdowns themselves have a variety of negative effects on health, which must be balanced against their benefit for controlling the COVID-19 pandemic. These effects are difficult to quantify and often overlooked. Among others, they include difficulty in accessing health care for chronic and other diseases [6], mental and physical issues due to isolation and inactivity, and the long-term effects of children being out of school [7,8]. The economic damage from the lockdown also negatively impacts public health, especially through increased unemployment and inequality [9]. Therefore, restrictive measures should be used judiciously, with a clear rationale and a reasonable expectation of net benefit in terms of population health.
Maintaining a strong lockdown indefinitely implies another strong, and usually unstated, assumption: that there will be a safe and effective vaccine available at the end of the road, produced in sufficient quantities and with a substantial proportion of the population yet uninfected. Success is not guaranteed though [10]. Real concerns exist, for example regarding antibody-dependent enhancement, and any candidate vaccine will have to be thoroughly tested before being rolled out [11]. By the time we have a vaccine, it may already be too late for it to alter the course of the COVID-19 pandemic substantially.
One must focus on what is practicable
There is intense discussion about scaling up testing for COVID-19, in large part promoted by the World Health Organization and supported by successful examples such as in South Korea [12]. Large-scale testing is essential for strong epidemiological surveillance, which is a prerequisite for making informed public-health decisions. However, in terms of controlling the pandemic, testing can only be effective when combined with case isolation and exhaustive contact tracing. In turn, this requires immense resources, which are likely beyond reach for most countries. 'Digital contact tracing' using smartphone apps might be an alternative, but this creates new issues about individual privacy and human rights. For a test-trace-isolate strategy to be practicable, case numbers first need to be brought down substantially to manageable levels through social distancing. Even then, the extent of transmission by asymptomatic or otherwise unascertained COVID-19 cases is such that this strategy may not be effective in isolation [1]. It will need to be combined with some degree of social distancing, which will continue to be the main tool for controlling the pandemic and protecting public health.
An approach for the long term, and the endgame
Controversy persists about the infection fatality rate of COVID-19, and whether it is closer to 1% or to 0.1% -a difference in deaths of an order of magnitude [13,14]. That debate often transforms itself into respective calls for indefinite maintenance or early lifting of lockdowns. An undisputable principle, however, is that the benefit from any such measures must clearly outweigh their harms to public health. It is therefore reasonable to move away from full lockdowns and calibrate social distancing down to a sustainable optimal level -one that minimises both the morbidity and mortality of COVID-19 but also the negative effects of distancing. This balance point will be continuously revised as we accumulate more scientific knowledge about COVID-19, the effectiveness of control measures and their wider impact on population health. In any case, the rational goal is not to prevent each and every SARS-CoV-2 infection at any cost, but rather to protect and maximise public health for everybody.
Such an optimisation will have to be both qualitative and quantitative. On a quantitative level, the aggregate effect of all social-distancing measures should maintain the effective R number of COVID-19 at ⩽1. This is a hard limit to ensure a stable infection rate in the population, rather than an exponentially increasing one, which would risk depleting health-care capacity, at least in some locations. If COVID-19 cases cannot be eliminated given the extent of asymptomatic transmission and continuous introductions from abroad, then a low and stable rate is the next reasonable goal. Full lockdowns were fully justified in the initial phase of the pandemic out of an abundance of caution and to bring down COVID-19 cases rapidly. Once this had been sufficiently achieved, social distancing measures could be dialled down to the lowest level that maintains R at ⩽1.
For this strategy to work, COVID-19 surveillance is paramount and needs to be substantially upscaled, alongside laboratory capacity, to cover the entire population in all geographic areas. Importantly, surveillance will continuously guide and revise the appropriate level of social distancing. If, for example, SARS-CoV-2 transmissibility decreases in the summer and rises in the autumn, surveillance indicators will reflect this, and social distancing will be calibrated to maintain a stable infection rate. Similarly, if COVID-19 cases flare up in a defined geographic area, additional targeted measures may be taken to bring the pandemic back under control.
On a qualitative level, and in order to select an optimal combination, each measure will have to be individually evaluated for both its potential benefit and its social and public health cost [15]. In this evaluation, the strong age gradient in mortality from COVID-19 needs to be taken into account [16]. A case in point is school closures, whose impact on COVID-19 transmission is uncertain and whose social costs are very high [17]. Children are the age group least vulnerable to COVID-19, and might also be less likely to infect others [18,19]. Therefore, accepting some risk of infections among children may be a reasonable compromise for the wider societal benefit of keeping schools open, with the additional side effect of building up a degree of population immunity in the safest possible way. On the other hand, very stringent measures will need to be continuously maintained in health-care facilities and elderly care homes, which are both important drivers of infection and locations where the most vulnerable are exposed. Steering infection away from those most at risk is no less important than keeping a low infection rate in order to minimise morbidity and mortality from COVID-19.
In selecting the appropriate mix of social distancing, there is often a paucity of evidence about the effectiveness of individual measures. In such a context, choices about what socio-economic activities to allow inevitably become political, based primarily on assessments about the costs to society. At the same time though, plans should be made to collect the required evidence and formally evaluate the effectiveness of each measure, for example by comparing the effective R number of the pandemic before and after its introduction.
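One way to make the before/after comparison suggested here concrete is the renewal-equation estimate R_t = I_t / sum_s(w_s * I_(t-s)), where I is daily incidence and w is the serial-interval distribution. The sketch below is a simplified illustration of that idea; the serial-interval weights and case counts are assumed values, not data from this article, and real analyses would also account for reporting delays and uncertainty.

def estimate_rt(incidence, si_weights):
    # Effective R for each day with a full serial-interval window behind it.
    k = len(si_weights)
    estimates = []
    for t in range(k, len(incidence)):
        expected = sum(w * incidence[t - s] for s, w in enumerate(si_weights, start=1))
        estimates.append(incidence[t] / expected if expected > 0 else float("nan"))
    return estimates

si_weights = [0.1, 0.2, 0.3, 0.2, 0.1, 0.1]  # assumed serial-interval distribution (sums to 1)
daily_cases = [100, 120, 140, 160, 150, 140, 130, 120, 115, 110, 108, 105]
print([round(r, 2) for r in estimate_rt(daily_cases, si_weights)])  # values drifting below 1 as growth slows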
In conclusion, using epidemiological surveillance to calibrate social-distancing measures appropriately and to achieve a low and stable infection rate, thereby minimising overall morbidity and mortality, is a reliable long-term approach to follow and maintain until the COVID-19 pandemic reaches its herd immunity endgame, hopefully through the discovery and application of a safe and effective vaccine.
Declaration of conflicting interests
The authors declared the following potential conflicts of interest with respect to the research, authorship and/or publication of this article: S.T. is a spokesman for the Hellenic Ministry of Health; the views expressed here are his own.
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
"year": 2020,
"sha1": "b8cc8d035559fe3e0d62181f085d2e40e3e09573",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1403494820961293",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "694294b59befd16a57333fe85a12a37fb1e38ffe",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
The importance of microelements in forming duck liver morphology
The influence of the microelement composition of the diet on duck liver morphology is described. The microelement composition of the duck feed was studied in accordance with GOST (State Standard Specification); the selenium content amounted to 0.06 mg/kg for growing ducks and 0.14 mg/kg for adult ducks. The experiment on Pekin ducks aged from 1 to 120 days involved a control group that received the basic diet and an experimental group that received DAFS-25k (diacetophenonyl selenide) in accordance with the product instruction during the whole period of raising. The studies were undertaken at an interval of 15 days. The liver structure of one-day ducks had typical anatomy. Connective tissue was moderately marked, tubular structure was clearly marked, hepatocytes had polygonal shapes, and the nuclei were oval or round, located centrally, containing from one to four nucleoli. Vacuolated cytoplasm was observed in 15-day ducks from the control group. For the experimental group: hepatocyte nuclei had the same size, cytoplasm was homogeneously coloured, and sinusoidal capillaries with red blood cells were clearly marked. In the critical periods of raising, namely on the 30th day, when neoptile was replaced with primary plumage, and on the 75th day, during post-juvenile moult, the control group showed a granular cytoplasm structure, which is a characteristic feature of granular degeneration, whereas in the experimental ducks the liver had a definitive, morphofunctionally active structure. In 120-day ducks from the experimental group the liver retained its tubular structure and had singular fat build-ups, while the control group showed clear signs of hepatosteatosis. DAFS-25k did not have a negative influence on the morphofunctional activity of the organ; the selenium content in the liver of 120-day ducks from the experimental group amounted to 0.52 mcg/kg compared to 0.31 mcg/kg for the control group.
Introduction
Healthy growth of ducks and their further productivity are determined by a well-balanced diet, as it influences the functioning of organs and systems. The early period of post-embryonic development is characterized by the activation of adaptive processes in the setting of a rapidly changing environment [1]. The load received by the liver increases in such conditions. The liver is the largest organ in the abdomen of newly hatched individuals. Normally, it provides homeostasis of the organism [2,3]. The problem of liver formation at the early stage of development is associated with immaturity of liver enzymes and continuing morphological differentiation of structural elements [4], which influences metabolic processes and clearance of potentially toxic compounds, both endogenous and exogenous [5]. The liver cooperative cell system is comprised of hepatocyte - Kupffer's cell - endotheliocyte - lipocyte - Pit-cell. In this cell cooperation, Kupffer's cells represent the system of mononuclear phagocytes. They act as a blood-liver barrier; by interacting with the immune system, they establish the basis of maintaining consistent connective tissue of the liver with the help of monokines and collagenase. They influence the processes of hepatocyte regeneration. Transport and metabolic functions are fulfilled by endotheliocytes, Kupffer's cells and hepatocytes [6], which are directly connected to blood vessels and bile capillaries. This determines the histological liver structure as compared to any other gland [7]. As opposed to other species, bird liver is delicate and fragile, and it is easily damaged by applying pressure [8]. Adult birds have a dark brown liver; growing birds after hatching have a clayish-ochreish liver with a faint tinge of pink. Such liver colour in growing birds is the result of the following factors: high content of fat, which came from the yolk sac; destruction of fetal haemoglobin and red blood cells [9]; and a more developed vascular tree [10,11].
For the majority of species, regardless of their habitat, we can observe dynamic stability of the liver, i.e. a morphological and functional structure supported by the plastic dynamism of the organ [12]. Nevertheless, every species is characterized by a unique type of metabolism, which is determined by such factors as species, breed, age, sex, heredity, etc. [13]. Besides, the morphological structure of organs reflects deficiency of amino acids, vitamins and mineral substances [14][15][16][17][18].
The objective of the research
The objective is to study the morphological features of Pekin duck liver in relation to age when DAFS-25k is included in the diet, in a province characterized by selenium deficiency of the soil.
Materials and methods
The study was undertaken in 2016-2019. The subjects of the study were Pekin ducks aged from 1 to 120 days; the objects were duck liver and the microelement content of the duck feed. Using the analogue method, we formed two groups of one-day ducks. The groups were formed with consideration of live weight (weight variation within 0.5%), and each group consisted of 250 ducks. The first group was the control group, which received the basic diet; the second was the experimental group, which received the DAFS-25k feed additive in addition to the basic diet, in accordance with the product instruction, at a dose of 1.6 mg/kg of feed adjusted to duck weight. A preparation containing moderate doses of selenium is capable of preventing oxidative destruction of cell membranes and symptoms of hypoxia. It neutralizes the toxic action of heavy metals and stimulates the activity of enzymes and hormones; it increases the effectiveness of treatment of digestive and bronchopulmonary system diseases. It is capable of improving the functions of phagocytosis and modulation of apoptosis [19,20,21,22,23,24,25].
The iron, zinc, copper, manganese, nickel, cobalt, cadmium and lead concentrations in combined fodders for adult and growing ducks were estimated at the Federal State Budgetary Institution 'Ivanovskaya Station of Agrichemical Service' (SAS Ivanovskaya) by means of atomic absorption spectroscopy using a Kvant-2A spectrophotometer; sample ashing was performed in accordance with GOST 30178-96. The iodine and selenium content was analysed at the Federal Research Centre 'All-Russia Research and Development Technological Institute of Poultry Breeding' within the Russian Academy of Sciences, in accordance with GOST R 52471-2005.
Ducks at the age of 1, 15, 30, 45, 60, 75, 90, 105 and 120 days underwent morphological examination of the liver. Samples of the organ were fixed in a 10% preparation of neutral formalin. The material was processed with the application of a tissue processor TLP-720 (Russia, Mt Point TM) and embedded in paraffin using a tissue embedding station ESD-2800 (Russia, Mt Point TM). Sections with a thickness of 5-8 μm were prepared on a rotary semi-automatic microtome RMD-3000 (Russia, Mt Point TM) and stained with haematoxylin and eosin using an automatic linear stainer ALS-96 (Russia, Mt Point TM). The preparations were studied using a Micmed-6 microscope (Russia, LOMO). We used an E31S video camera (China) and TopView software at magnifications of x100 and x400 for measurements and photographic documentation. The measuring scale of the camera was calibrated using an OMP transmitted-light object micrometre (Russia, LOMO). The volumes of hepatocytes and hepatocyte nuclei were calculated using the formula V = π/6 × Dm² × Db, in which π = 3.14, Dm is the minor cell (nucleus) diameter and Db is the major cell (nucleus) diameter. Cytoplasm volume represents the difference between hepatocyte volume and nucleus volume.
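As a minimal worked example of the morphometric calculations just described, the sketch below assumes the prolate-ellipsoid form V = (π/6) x Dm² x Db given above and the conventional definition of the nuclear-cytoplasmic proportion as nucleus volume divided by cytoplasm volume; the diameters used are hypothetical values chosen only to illustrate the arithmetic.

import math

def ellipsoid_volume(d_minor_um, d_major_um):
    # Volume (um^3) of a cell or nucleus approximated as a prolate ellipsoid.
    return math.pi / 6.0 * d_minor_um ** 2 * d_major_um

cell_volume = ellipsoid_volume(9.5, 11.5)      # hypothetical hepatocyte diameters, um
nucleus_volume = ellipsoid_volume(4.0, 4.6)    # hypothetical nucleus diameters, um
cytoplasm_volume = cell_volume - nucleus_volume
nc_proportion = nucleus_volume / cytoplasm_volume

print(round(cell_volume, 1), round(nucleus_volume, 1), round(cytoplasm_volume, 1), round(nc_proportion, 3))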
The weight content of selenium in the liver of 120-day ducks was determined by means of atomic absorption spectroscopy with decomposition of samples in closed vessels (Federal Service for Veterinary and Phytosanitary Surveillance, 2001), in the modification of the Ivanovo State University of Chemistry and Technology, 2004.
We used Microsoft Excel-2010 for statistical data processing. We used Student's t-test to estimate the statistical significance of differences between the parameters (G. F. Lakin, 1980).
Results and discussion
Ducks are characterized by high sensitivity to excessive amounts or deficiency of mineral substances. Before introducing biologically active microelements into the food, we analysed the diet. As a result, we established the following content of substances for the adult and growing ducks: copper 3.40-3.68 mg/kg, zinc 34.32-36.70 mg/kg, iron 115-185 mg/kg, manganese 68-92 mg/kg, cobalt 0.27-0.36 mg/kg.
In the combined fodder for growing ducks the iodine content amounted to 0.69 mg/kg. Fodder for adult ducks contained submicrograms of the microelement. The concentration of such dangerous pollutants as lead and cadmium did not exceed the maximum permissible concentration and amounted to 2.12 mg/kg and 0.024 mg/kg for growing ducks; 2.31 mg/kg and 0.040 mg/kg for adult ducks respectively. The nickel content in fodder for growing and adult ducks amounted to 1.80-2.00 mg/kg, which exceeded the ecologically permissible concentration by 35-50% (p≤0.05). The concentration of selenium in combined fodder for growing and adult ducks amounted to 0.06 mg/kg and 0.14 mg/kg.
The liver of 1-day Pekin ducks had a typical structure; the stroma was represented by the connective tissue of the capsule and interlobular partitions. The connective tissue was moderately marked and could be observed only at the periphery of the organ, where it formed a thin capsule, and at the area of the triads; interlobular connective-tissue partitions were not observed. Tubular structure was clearly marked; hepatic tubules were positioned radially and had a branching, curvy and sometimes glomerular form. The thickness of the tubules amounted to 18.43±0.40 μm, and the sinusoidal lumen was 4.46±0.19 μm. Blood corpuscles could be observed in the lumen of the central veins and the branches of the portal vein. Branches of the portal vein with extended lumens could be encountered. The boundaries of hepatocytes were moderately marked; cells had a polygonal shape, and their volume amounted to 553.51±42.23 μm³. The nuclei were located centrally, in some places moved to the periphery; they were intensively coloured, of round-oval shape, and contained from one to four nucleoli. The nuclear volume amounted to 38.73±2.00 μm³. The cytoplasm was coloured inhomogeneously and had a granular structure; its volume amounted to 383.16±12.45 μm³. The nuclear-cytoplasmic proportion amounted to 0.12±0.01 (figure 1).
By the age of 15 days the volumes of hepatocytes had significantly increased for the control and the experimental groups, by 12.7% and 27.4% respectively. The increase in the volumes of hepatocytes was more prominent for the experimental group; the difference between the groups amounted to 14.7%. The volume of hepatocytes increased due to cytoplasm; the nuclear volume remained unchanged. For the control group, the cytoplasm was coloured inhomogeneously and had a foamy structure due to its vacuolization.
For the experimental group, hepatocyte nuclei were clearly marked, had the same size, cytoplasm was coloured homogeneously, sinusoidal capillaries with red blood cells were clearly marked. Red blood cells could be observed in the central vein as well. Cells of mononuclear phagocyte system were activated, which is an indicative feature of protection from the activity of toxic substances.
For 30-day ducks from the control and the experimental groups we could observe a decrease in the volume of hepatocytes by 7.4% and by 6.3% respectively. At the same time, in comparison with the previous age, the difference between the groups is significant (p≤0.05). The decrease in the volume of hepatocytes is associated with the critical period in the development of ducks, when neoptile is replaced with primary plumage. By the age of 45 days we could observe an increase in the volume of hepatocytes in comparison with the previous age. For the experimental group, the volume of hepatocytes was 17.3% higher (p≤0.05) than for the control group. For the control group, the cytoplasm was coloured inhomogeneously, with a clearly marked granular structure, which is a characteristic feature of granular degeneration (figure 2). For the experimental group, cell boundaries were well observed, the cytoplasm was homogeneous, the nuclei contained from 1 to 4 nucleoli, and sinusoidal capillaries contained red blood cells. For the experimental group, cytoplasm volume was 19.8% higher than for the control group (figure 3). For 60-day ducks from the control group, the cytoplasm dye was heterochromous and inhomogeneous; in some places the cytoplasm was vacuolated. The volume of hepatocytes increased significantly due to the increase of the volume of nuclei and cytoplasm by 9.2% and 20.6% respectively. The liver structure of the experimental group of ducks was characterized by a tubular structure. Boundaries between hepatocytes were observed; cytoplasm dye was homogeneous; nucleoli could be distinguished in the nuclei. The tendency for the increase in hepatocyte volume, due to the increase in nuclear volume, is observed.
For the 75-day ducks of both control and experimental groups, we could observe the tendency for decrease in the volume of hepatocytes by 8.0% and 2.6% respectively. It is explained by the second critical period in the development of ducks, namely with the post-juvenile moult. For the control group, the signs of hepatosteatosis remained in liver parenchyma. Despite the critical period, for the experimental group of birds we could observe that the volume of hepatocytes and cytoplasm was 8.4% and 14.2% higher respectively compared to the control group.
For the 90-day ducks of both the control and the experimental groups we did not observe any differences in the sizes of the described structures. However, the tendency for development of the features of hepatosteatosis still remained. This is a regular phenomenon for productive birds and is considered to be a positive factor in terms of nutritional quality; at the same time, it is definitely a negative factor in terms of health (figure 4). In comparison with the previous point of study, for the control group, the volume of hepatocytes increased by 7.9% due to the increase of the cytoplasm volume. For the experimental group, the liver had a definitive, morphofunctionally active structure; a tendency for an increasing height of the sinusoidal capillaries and trabeculae was observed. The tendency for the increase in hepatocyte volume, due to the increase in nuclear volume, still remained (figure 5).

For 105-day and 120-day ducks, the micrometrical parameters of liver structure did not change significantly as compared to the previous age. However, for the control group, the signs of hepatosteatosis were clearly marked in the liver parenchyma, which had a foamy structure due to micro- and macrocellular fat infiltration. The nuclei of the cells were displaced to the periphery (figure 6).
DAFS-25k prevented hepatosteatosis; the liver retained its tubular structure. Occasional build-ups of fat could be observed, and the cytoplasm was stained homogeneously (figure 7).
In accordance with the existing data, DAFS-25k stimulates productivity and accumulation of selenium in egg albumen and yolk, in thigh and chest muscles, in blood and liver of birds [26,27,28,29].
Conclusion
We established that the concentration of nickel in the duck fodder exceeded the ecologically permissible amount, reaching 1.80-2.00 mg/kg. The concentration of selenium, in contrast, was clearly insufficient: the combined fodder for growing ducks contained 0.06 mg/kg of selenium, and for adult duck fodder it amounted to 0.14 mg/kg.
For 30-day ducks of the experimental group, during the first critical period, the volume of hepatocytes and the size of the trabeculae exceeded the same parameters for the control group. For 75-day ducks of the experimental group, during the second critical period, the cytoplasm volume of hepatocytes and the sizes of the trabeculae and sinusoidal capillaries were higher than for the control group. For the experimental group, by the period of attaining physiological maturity, the volumes of the hepatocyte, nucleus and cytoplasm were higher than for the control group. The nuclear-cytoplasmic proportion amounted to 0.10.
For both groups of 120-day ducks we could observe an increase in the volume of hepatocytes. In comparison with the values established for 1-day ducks, for the control group the hepatocyte volume increased by 35.5%, the nuclear volume by 18.4%, and the cytoplasm volume by 37.7%. For the experimental group, the hepatocyte volume increased by 36.9%, the nuclear volume by 19.0%, and the cytoplasm volume by 38.9% (p≤0.05).
For the experimental group, the selenium concentration in the liver was 67.7% higher than for the control group and amounted to 0.52±0.04 mcg/kg.
The data analysis established that the introduction of organic selenium in the form of DAFS-25k into the diet of ducks, in the dose recommended by the manufacturer, does not lead to any pathological changes and stimulates morphofunctional activity and selenium content in the liver. | 2020-09-03T09:11:02.087Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "b36fef6e9a55c3c23f93dcb714a47fd9b921c1bc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/548/4/042015",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "79476589302e61bd3b7b0c6e56a759edd96143cc",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
} |
237478682 | pes2o/s2orc | v3-fos-license | Interaction of Human C5a with the Major Peptide Fragments of C5aR1: Direct Evidence in Support of “Two-Site” Binding Paradigm
The C5a receptor’s (C5aR1) physiological function in various tissues depends on its high-affinity binding to the cationic proinflammatory glycoprotein C5a, produced during the activation of the complement system. However, an overstimulated complement can quickly alter the C5a–C5aR1 function from physiological to pathological, as has been noted in the case of several chronic inflammation-induced diseases like asthma, lung injury, multiorgan failure, sepsis, and now COVID-19. In the absence of the structural data, the current study provides the confirmatory biophysical validation of the hypothesized “two-site” binding interactions of C5a, involving (i) the N-terminus (NT) peptide (“Site1”) and (ii) the extracellular loop 2 (ECL2) peptide of the extracellular surface (ECS) of the C5aR1 (“Site2”), as illustrated earlier in the reported model structural complex of C5a–C5aR1. The biophysical and computational data elaborated in the study provides an improved understanding of the C5a–C5aR1 interaction at an atomistic resolution, highlighting the energetic importance of the aspartic acids on the NT-peptide of C5aR1 toward binding of C5a. The current study can potentially advance the search and optimization of new-generation alternative “antibodies” as well as “neutraligands” targeting the C5a to modulate its interaction with C5aR1.
INTRODUCTION
The complement system and host defense are complexly intertwined in most of the vertebrates, as it acts as a feedback loop connecting the host's innate and adaptive immune response. On encountering a trigger, complement puts its proteolytic machinery into action, which liberates potent proinflammatory mediators like C3a, C4a, and C5a anaphylatoxins. 1 The termination of the proteolytic signal cascade manifests an inflammatory response 2 by directing the most potent anaphylatoxin C5a to recruit the C5aR1, a G-protein coupled receptor (GPCR) with high binding affinity required for initiating the desired cellular signaling. Under native-like conditions, C5a recruits C5aR1 with picomolar−nanomolar potency, which elicits inflammatory response 3 through the production of reactive oxygen species (ROS) in both myeloid and nonmyeloid tissues expressing the C5aR1, including the activation of platelets. 4 It is noteworthy that C5a−C5aR1 interactions in neutrophils are known to increase the cytosolic pH, affecting the basic functionality of the neutrophils. 5 In addition, overstimulation of the C5a−C5aR1 system triggers the surge of the proinflammatory cytokines, 6 including microvesicle shedding of neutrophils leading to the hyperinflammation-induced neutrophil dysfunction. 7 Given the inflammatory angle, it is logical to regulate the interaction of C5a with C5aR1 so that the risk of collateral damage to the neighboring tissues can be minimal. Dysregulated complement accentuates the C5a−C5aR1 interaction in several tissues leading to several chronic inflammation-induced diseases, 8 such as asthma, lung injury, kidney failure, rheumatoid arthritis, cardiovascular complications, multiorgan failure, and sepsis, including the most recent pandemic COVID-19. 9 Thus, complement and complement-regulated pathways have immense potential and pharmacological value for therapeutic intervention. 10 Therapeutic intervention of the C5a−C5aR1 system 11 usually involves the following strategies: (i) block the ECS of C5aR1, (ii) block the generation of C5a, and (iii) block or neutralize the C5a. 12−15 Large-scale mutagenesis and biomolecular signaling data evidence that the nanomolar to picomolar potency of C5a toward C5aR1 16 is due to the recruitment of two distal sites with large surface areas, respectively, engaging (i) the NT-peptide (Site1) and (ii) ECS of C5aR1 (Site2) with the core and the C-terminus (CT) peptide of C5a through a specific protein−protein interaction. In addition, extensive mutational studies 17−21 on both C5a and C5aR1 had provided the following two key observations: (i) truncation of the NT-peptide region significantly affects the affinity of C5aR1 toward C5a, and the NT-peptide of C5aR1 is very important for activation by C5a, 22,23 (ii) the NTtruncated C5aR1 could be effectively activated by short peptide analogues based on the CT-region of C5a albeit with relatively weak binding affinity. 20,24 Therefore, it was hypothesized that the NT-peptide contains the first binding site (Site1), and the poorly defined interhelical crevice in the transmembrane region contains the second binding site (Site2) on C5aR1 for recognizing the C5a to trigger the downstream biomolecular signaling. The high-affinity interaction at the "Site1" plays the anchorage function to arrest the C5a, and the relatively low-affinity interaction at the "Site2" helps in docking the CT-peptide of the C5a, triggering the activation and the cellular signaling of C5aR1. 
25 The "two-site" binding paradigm, a hallmark feature consistently illustrated for many peptide/protein binding GPCRs of the secretin family, 26 has also been convincingly illustrated over the years in several chemokines complexed to their cognate receptors belonging to the rhodopsin family. 27,28 The recently available structural data for the CCL20-CCR6 system 28 clearly illustrates the strong involvement of the ECS (consisting of the three ECLs) of CCR6 as a part of the "Site2" in binding to CCL20. Nevertheless, the full-length structures of the NT-peptides are not entirely resolved in several reported biomolecular complexes of the chemokine receptors, though the importance of the binding interaction involving the synthetic NT-peptides of CXCR4 with the CXCL12 chemokine has been structurally illustrated. 29 While both computational and experimental model structures of C5aR1 both in free and in complex with smallmolecule ligands are available in the literature, 30−33 the structural biology approach has not successfully illustrated the hypothesized biomolecular recognition of the C5a−C5aR1 system at an atomistic resolution so far. Interestingly, most of the structural studies reported so far for C5aR1 demonstrate a truncated structure of the NT-peptide. More importantly, the plausible conformational changes that are likely to be triggered in C5a by binding to the NT-peptide of C5aR1, a vital transition state, perhaps required for the sequential docking of the C5a on the ECS of C5aR1 is poorly understood. Thus, an effort toward the atomistic understanding of the high-affinity intermolecular interactions at "Site1" is essential, as it will pave the way for designing the new age alternative antibody-like molecules for effectively neutralizing the pathophysiological concentration of C5a under disease settings without downregulating the physiological function of the complement system or completely shutting down the C5a-induced lowgrade cellular response of C5aR1.
In this context, a highly refined "two-site" model structural complex describing a plausible activation mechanism of the C5a−C5aR1 system ( Figure 1) has been made available from our group earlier, 34 which requires further evaluation to understand the importance of the intermolecular interactions postulated in the model structural complex of C5a−C5aR1. In the current study, major synthetic peptide fragments of C5aR1, such as the NT-peptide and its mutants (codenamed as SR3, SR4, and SR5), including the ECL2 peptide (codenamed as SR1), are subjected to a battery of biophysical studies against the recombinant human C5a to understand their role in anchoring the C5a to C5aR1. The data obtained from the circular dichroism (CD) and fluorescence titration studies find support from the molecular dynamics (MD) and free energy calculation studies, which not only validates the highly refined "two-site" binding model structural complex of C5a−C5aR1 34 but also indicates that the electrostatic interactions between the amino acids with anionic side chains on the NT-peptide of C5aR1 and the amino acids with cationic side chains on the surface of C5a play a crucial role in C5a−C5aR1 interaction over and above the canonical intermolecular interactions involved in arresting the bulk of C5a.
RESULTS
2.1. Design Rationale behind the Synthetic Peptides. The "two-site" model structural complex of C5a−C5aR1 hypothesized in our earlier study 34 suggests that the free NTpeptide of the C5aR1 (Site1), which harbors several amino acids with anionic side-chain structure, has a strong preference to get wrapped around the cationic surface of C5a ( Figure 2). Point mutation of the highlighted cationic amino acids on C5a has been shown to affect both the binding and signaling activity of the C5a. 21 Previous mutagenesis studies on NTpeptide of C5aR1 have evidenced the involvement of Asp16, Asp18, Asp21, and Asp27 in the binding and signaling of C5a through C5aR1. 22 Further, in agreement with the mutagenesis studies, the model complex also hints at the involvement of aromatic amino acids like Tyr11 and Tyr14, 35 including the Tyr6 in binding to C5a. In addition to this, the model complex has hypothesized that binding of NT-peptide to the bulk of C5a can trigger conformational changes in C5a, which will lead to the docking of its conformationally altered CT-peptide to the ECS of C5aR1 (Site2), which is composed of ECL2, 36 one of the largest extracellular loop of C5aR1, in addition to the others. Thus, to evaluate the model complex of C5a−C5aR1 further, three NT-peptides (Figures S1−S3) and one ECL2 peptide ( Figure S4) of C5aR1 ( Figure 3) were synthetically prepared.
Out of the three NT-peptides, SR3 represents the native NT-sequence of C5aR1 (Met1-Lys28), whereas SR4 represents the Asp/Ala (Asp2/Ala, Asp10/Ala, Asp15/Ala, Asp16/ Ala, Asp18/Ala, and Asp27/Ala) and SR5 represents the Tyr/ Ala (Tyr6/Ala, Tyr11/Ala, and Tyr14/Ala) mutant NTpeptide sequences of C5aR1. The ECL2 is the major peptide fragment of the ECS of C5aR1, and its role in the binding and signaling of C5a is well known. The SR1 peptide represents the native ECL2 sequence of C5aR1 (Tyr174-Arg198), except that it is acylated and amidated, respectively, at the N-and Ctermini. Also, it carries the Cys188/Ser mutation to avoid undesired aggregation issues in the solution. The other mutants of the ECL2 peptide were not prepared to avoid disruption of the folded β-hairpin structure of the free ECL2 peptide in solution. In addition, given the short sequence length, the ECL1 and ECL3 peptides were not synthesized, as it is evidenced that both ECL1 and ECL3 may play a more sensitive role in the activation of C5aR1 than the binding of C5a. More importantly, mutations in the ECL1 of C5aR1 have also been shown to have no effect on the binding affinity of C5a. 37 2.2. Conformational Analysis of the Synthetic Peptides. The native and the mutant NT-peptides were completely soluble in 1× PBS (pH ∼ 7.4) and thus subjected to conformational analysis studies by recruiting the CD spectroscopy. As presented in Figure 4, the NT-peptides demonstrated progressive conformational ordering between 0.05 and 1 μM, demonstrating a signature signal broadly similar to the extended sheet structure of the polypeptides.
Interestingly, the formation of the ordered β-sheet structure was noted earlier over certain regions of the NT-peptide ( Figure 5) of C5aR1 complexed to C5a throughout 0.25 μs MD simulation in POPC bilayers. 34 Indeed, ordered β-sheet structure has also been noted over certain sections of the 38mer NT-peptide of CXCR4 complexed to CXCL12 in solution. 29 However, the comparison of the CD signal observed for the SR3 peptide (1 μM) with the mutant peptides (SR4 and SR5) indicates that alanine mutations can potentially affect the overall conformational ordering of the NT-peptide. Further, the characteristic CD signal of the peptides diminished to a greater extent beyond 1 μM, which can be attributed to the formation of soluble aggregates of the peptides under experimental conditions. 38 Thus, most of the further binding studies involving the NT-peptides were performed below 1 μM. On the other hand, up to 100 μM, the ECL2 peptide of C5aR1 (SR1) has been reported to demonstrate a CD signature commonly attributed to twisted short-stranded β-hairpin-like conformation in solution. It is noteworthy that the ECL2 peptide was also predicted to harbor a β-hairpin fold in our earlier studies, 31 which was subsequently confirmed from the observation made in the crystal structure of the thermostabilized C5aR (PDB ID: 5O9H) known as StaR. 33 Interestingly, StaR carried 11 strategic point mutations, truncated by 29 amino acids in the NT-peptide region and 17 amino acids in the CT-peptide region, similar to our truncated model structure of C5aR1 31 that lacked 26 amino acids on the NT-peptide region and 34 amino acids in the CT-peptide region. 2.3. Probing the Intermolecular Interaction between the NT-Peptides and C5a. The intermolecular interactions hypothesized between the NT-peptide of C5aR1 and C5a in the model complex were subjected to scrutiny, respectively, using CD and fluorescence spectroscopy. The concentrations of the native (SR3) and the mutant NT-peptides (SR4 and SR5) were varied between 0 and 1 μM for titration against 0.1 μM recombinant human C5a, and the corresponding conformational changes, as well as the change in fluorescence intensity observed for C5a was monitored for gauging the interaction of the NT-peptides with C5a ( Figure 6).
As noted in Figure 6a, the signal intensity of the signature CD spectra demonstrated by the C5a enhanced significantly with the increase in the concentration of the NT-peptides, indicating the strong association of the peptides with the C5a. Interestingly, the rise in the CD signal intensity was also accompanied by the change in the signature CD spectra of C5a, suggesting that binding of the NT-peptides triggers an intrinsically disordered conformational state in C5a. As presented in Figure 6a, the SR4 peptide appears to induce a robust conformational alteration in C5a compared to SR3 and SR5 peptides that demonstrate almost similar interaction patterns with the C5a. It is noteworthy that in comparison to the native NT-peptide SR3, SR5 harbors Tyr6/Ala, Tyr11/Ala, and Tyr14/Ala mutations in its sequence, whereas SR4 carries Asp2/Ala, Asp10/Ala, Asp15/Ala, Asp16/Ala, Asp18/Ala, and Asp27/Ala mutations in its sequence. Further, to maintain some degree of native affinity toward C5a, the Asp21 was not mutated to Ala in the SR4 peptide. In the absence of the anchorage naturally imparted by the transmembrane domain of C5aR1, these synthetic NT-peptides are relatively more labile, which empowers them to establish biologically nonspecific interaction with C5a with the slightest change in their conformational ordering. The SR4 peptide harbors a significantly mutated sequence, which also appears to affect its conformational ordering ( Figure 4). Thus, it is likely that the SR4 peptide will have a strong potential to drive a biologically nonspecific interaction with C5a compared to the SR5 peptide.
The observation made in the CD titration studies was further subjected to scrutiny by probing the intrinsic tyrosine fluorescence of recombinant C5a both in the presence and absence of increasing concentration of NT-peptides ( Figure 6b). Among the three NT-peptides, SR5 does not fluoresce at all due to the lack of aromatic amino acids, whereas SR3 and SR4 have very negligible intrinsic tyrosine fluorescence ( Figure S5) at the working concentrations, which does not overlap with the emission maximum of C5a. It is evident from Figure 6b that the intrinsic fluorescence of C5a substantially increases with an increase in the concentration of NT-peptides and eventually gets saturated in the presence of 500 nM peptides, indicating the strong intermolecular interaction between the C5a and the NT-peptides, as observed in the CD studies. Fitting the normalized CD and fluorescence titration data suggests that while SR3 binds to C5a with an estimated K d ∼ 126−193 nM, SR5 binds to C5a with an estimated K d ∼ 105− 123 nM. Interestingly, the binding affinity of the SR4 peptide toward C5a could not be estimated from either CD or fluorescence titration data, as the normalized response was too scattered in response to an increase in the concentration of the peptide.
The observed binding data appears to be in sync with the reported biomolecular signaling studies in cultured cells, where it has been shown that Asp15/Ala, Asp16/Ala, Asp18/Ala, and Asp21/Ala mutations in the NT-region of C5aR1 collectively reduce the binding affinity by ∼42-fold, whereas further addition of the Asp10/Ala mutation to the NT-region reduces its binding affinity by ∼140-fold toward C5a compared to the native C5aR1. 22 It is also reported that Asp15/Ala and Asp18/Ala mutations in the NT-peptide trigger a tremendous loss in C5aR1 signaling. 39 It is worth mentioning that SR4 carries six Asp/Ala mutations in its sequence compared to SR5, which harbors three Tyr/Ala mutations compared to the native SR3 peptide. The K d values of SR3 and SR5 estimated from the CD data are almost identical. However, the SR5 peptide with three Tyr/Ala mutations demonstrated relatively tighter binding to C5a compared to the SR3 peptide in the fluorescence titration studies, though the statistical significance of the same remains to be pursued. Thus, broadly, it can be concluded that both SR3 and SR5 peptides appear to have a comparable binding affinity toward C5a. On the other hand, the SR4 peptide that contained all of the tyrosines but lacked the aspartic acids except the Asp21 did not demonstrate a quantifiable binding affinity based on the CD and fluorescence titration studies. This is in contrast to the earlier observations that suggest that tyrosine sulfation on the NT-region of C5aR1 is an important post-translational modification required for the efficient binding of C5a. 35 Overall, the current data indicate that, collectively, the aspartic acids on the synthetic NT-peptides may be more crucial than the tyrosine residues for specific binding of the C5a to C5aR1. However, in the context of the model C5a−C5aR1 complex reported in our earlier studies, the specific contribution of tyrosines toward overall binding affinity cannot be completely ruled out.
2.5. Comparison of the Biomolecular Complexes Formed between the NT-Peptides and C5a. The CD and fluorescence titration studies suggested that mutation of aspartic acids on the NT-peptide region affects the binding affinity toward C5a. However, concerning the mode of interactions of the NT-peptides with the C5a, the data is virtually blind, as the inference is derived from the limited number of variants of NT-peptide, which cannot delineate the specific contribution of each amino acid toward the estimated binding affinity. More importantly, the synthetic NT-peptides are free at both ends, compared to the native conditions, where the C-terminal end will be connected to the transmembrane helix number 1 (TM1) of C5aR1. Thus, the biomolecular complexes involving the C5a and SR4/SR5 peptides were modeled based on the C5a−SR3 complex extracted from the reported C5a−C5aR1 complex. Subsequently, the C5a−SR3, C5a−SR4, and C5a−SR5 complexes were subjected to comparative MD studies over 100 ns each at 300 K.
The data presented in Figure 8 suggests that free NT-peptides of C5aR1 could remain bound to the C5a over the duration of the MD trajectory, irrespective of the number or type of mutations in the peptides, supporting the physical viability of the strong intermolecular interactions between the C5a and the NT-peptides, as noted in the CD and fluorescence titration studies. Interestingly, both the native and mutant NT-peptides were able to experience ∼17 intermolecular hydrogen bonds consistently over the duration of the MD, in addition to the other types of interactions, which could be the reason behind the observed stability of the biomolecular complexes involving the mutant NT-peptides that demonstrated altered interactions with the C5a compared to the native SR3 peptide (Figure 9). Though the NT-peptides were found to be orientationally drifted to a certain extent compared to the C5a−C5aR1 complex, none of the peptides were dislodged entirely from the surface of C5a over the duration of MD (Figure 10). It is noteworthy that in the case of both SR3 and SR4 peptides, some tyrosine amino acids on the NT-peptide demonstrated strong interaction with several amino acids on C5a over the duration of the MD.

Figure 7. Estimation of the binding affinity from the normalized CD and fluorescence titration data points of C5a observed against the variable concentrations of the NT-peptides. Binding affinity could not be obtained reliably for the SR4 peptide due to the incomplete data fitting.

2.6. Comparison of the Binding Free Energy of the NT-peptide−C5a Complexes. To further understand the residue-specific contributions toward the overall binding affinity observed for the NT-peptides in the experimental studies, the respective MD trajectories of C5a−SR3, C5a−SR4, and C5a−SR5 complexes were subjected to molecular mechanics Poisson−Boltzmann surface area (MM-PBSA) calculation to estimate the binding free energy of the biomolecular complexes. The estimation of the binding free energies involved 500 conformers from each trajectory, randomly selected from the most populated cluster (Figure S6), evolved over the duration of 100 ns MD. In strong agreement with the experimental data, the binding free energy (Figure S7) estimated for the C5a−SR3 complex (−553.48 ± 8.50 kcal/mol) was found to be similar to that for the C5a−SR5 complex (−568.11 ± 10.57 kcal/mol), which was significantly more favorable compared to the free energy of binding estimated for the C5a−SR4 complex (−31.99 ± 10.87 kcal/mol), indicating a tighter binding of the SR3/SR5 peptides to the C5a than the SR4 peptide.
Further, the decomposition of the MM Energy in the context of the amino acids of the NT-peptides clearly evidences ( Figure 11) that the aspartates contribute significantly over other amino acids on the NT-region of C5aR1 toward the overall binding free energy. The data indicates that while Asp27 makes the highest contribution, Asp2 makes the lowest contribution. During the MD, it was observed that the Asp2 makes contact with the Arg74 of the C5a in few structures, which could be due to the inherent conformational flexibility of the CT-region of C5a. In addition, Asp10, Asp15/ Asp16/Asp18, and Asp21 also make solid contributions toward the binding free energy.
As presented in Figure 11, the significant energetic contribution made by the aspartates in the case of SR3 and SR5 peptides is expectedly absent in the case of the SR4 peptide, which harbors Asp/Ala mutations. This could be the reason behind the weakest binding affinity demonstrated by the SR4 peptide toward C5a both in MM-PBSA calculation as well as in the CD and fluorescence titration studies. Interestingly, Asp21, which was not mutated to Ala in SR4 peptide, demonstrated substantial energetic contribution toward binding the C5a. On the other hand, despite harboring the Tyr/Ala mutations, SR5 peptide illustrated comparable binding affinity toward C5a like the SR3 peptide in both MM-PBSA calculations and CD titration studies. Further analysis indicates that though Tyr6/Tyr11/Tyr14 appreciably contributed toward the binding free energy, it was significantly lower than the aspartic acids on the NT-peptides. Nevertheless, the observation is in sync with the earlier studies, which reported that post-translational sulfation of Tyr11/ Tyr14 is essential for the efficient binding of C5a. 35 Overall, the data indicate that selective mutation of any single aspartic acid on the NT of C5aR1 may be able to influence C5a− C5aR1 binding and signaling to an appreciable extent.
2.7. Probing the "Two-Site" Binding Interaction of C5a−C5aR1. Titrations of the NT-peptides against C5a broadly confirmed the hypothesized interactions at "Site1" as illustrated in Figure 1. The model structural complex of C5a− C5aR1 also suggests that in addition to binding to the NTpeptide, C5a also binds to several amino acids on the ECS of C5aR1 collectively defined as "Site2". The ECL2 is the most prominent polypeptide in the ECS of C5aR1, which has been described to play a significant role in the C5a-induced activation of C5aR1. 40 However, a direct interaction of the C5a with the ECL2 peptide of C5aR1 is not clearly described in the literature. Thus, it was necessary to probe whether activation of C5aR1 is also linked to the interaction of C5a with the ECL2 peptide at "Site2" of C5aR1. To probe the interaction at "Site2", C5a preincubated with saturating concentration of the NT-peptides was subjected to varying concentrations of ECL2 peptide, and the observed response was recorded by, respectively, CD and fluorescence spectroscopy.
The data presented in Figure 12 indicates that binding of NT-peptides does not occlude the interactions of C5a further with the ECL2 peptide, suggesting that C5a perhaps engaged the free NT-peptides at "Site1" in a near-native manner. The addition of 0.5 μM ECL2 peptide (SR1) to C5a preincubated with the native SR3 peptide substantially enhanced the CD signal, which did not change further by the addition of 1 μM SR1 peptide. This suggests that the ECL2 peptide perhaps acted as a part of the distal "Site2", which was able to saturate the secondary binding site of C5a. In agreement, fluorescence data involving the SR1 peptide for the C5a−SR3 system was also found to be consistent with the observation made in the CD. A similar trend was also noticed in both CD and fluorescence for the C5a−SR5 system, in the presence of 0.5− 1 μM SR1 peptide. Further, compared to the free C5a, the C5a preincubated with the native NT-peptide (SR3) of C5aR1 demonstrated a much better response toward the ECL2 peptide, suggesting that binding of the NT-peptide of C5aR1 to C5a may be the first important step necessary for triggering the activation pathway of C5aR1.
The modest change in fluorescence intensity observed for both the systems suggested that the SR1 peptide perhaps does not come in direct contact with the bulk of C5a, further affirming the existence of "Site2". However, compared to the C5a−SR3/C5a−SR5 system, the addition of SR1 peptide demonstrated a relatively strong conformational response in CD for the C5a−SR4 system. On the other hand, the fluorescence signal of the C5a−SR4 system was quenched appreciably on the addition of 0.5−1 μM SR1 peptide. The comparative observations clearly evidence that mutations of important amino acids can trigger an improper mode of interaction between the NT-peptide and the C5a at "Site1", which may subsequently alter the further interaction of C5a with the other peptide fragments on the ECS of C5aR1. Further, as evidenced, the NT-peptides do not appear to interact with the ECL2 peptide of C5aR1 in a substantial manner (Figure S8). Thus, it is reasonably clear that C5a interacts with C5aR1 by recruiting "two sites", as illustrated in the model complex of C5a−C5aR1, presented in Figure 1.

Figure 11. Comparative summary of the energetic contribution made by each amino acid of the NT-peptides of C5aR1 toward the average free energy of binding, respectively, estimated for the C5a−SR3, C5a−SR4, and C5a−SR5 biomolecular complexes. The specific amino acids that have been subjected to alanine mutation either in SR4 or SR5 peptides are also highlighted within the graph.
DISCUSSION
C5aR1 is among the ∼120 GPCRs known in the human genome that recognizes endogenous peptides 41 or proteins as ligands for cellular signaling and physiology. The "two-site" binding paradigm involving the C5a−C5aR1 system is an old concept that was hypothesized three decades ago. 25 The concept had been strongly supported by the deletion and single-point mutation-based biomolecular signaling data obtained from the C5a−C5aR1 system. However, in the absence of any such structural data related to the C5a−C5aR1 system, a highly refined full-length model structure of C5aR1 complexed to C5a 34 was generated in the recent past, affirming the existence of "two-site" contact-based molecular recognition in the C5a−C5aR1 system. The model illustrated that the high-affinity binding at "Site1" is driven by several salt bridge/ hydrogen-bond interactions involving the aspartic acids, as well as the tyrosine amino acids of the NT-peptide of C5aR1. Similarly, the interactions at "Site2" involved several amino acids of the ECL2 and ECL3 peptides of C5aR1. However, the physical viability of the illustrated mode of interaction was untested, which was subject to test in the current study by recruiting the synthetic variants of the NT-peptides and the ECL2 peptide of C5aR1. The titration data of the NT-peptides as well as of the ECL2 peptide presented in Figures 6, 7, 11, and 12 indicates strong agreement with the intermolecular interaction observed in the model C5a−C5aR1 complex. Moreover, the data also suggests that the aspartic acids on the NT-region of C5aR1 are crucial for the high-affinity binding of C5a, which is in substantial agreement with the independently reported biomolecular signaling studies that indicate both Asp/ Ala or Asp/Asn mutation abrogates binding and signaling of C5aR1. Similarly, mutation of several amino acids with cationic side chains on the bulk of C5a, like Arg37, Arg62, and Arg40, have also been shown to abrogate the binding of C5a to C5aR1, in addition to the mutation of His67, Lys68, and Arg74 in the CT-region of C5a. 21 Further, the binding and signaling activity of C5a toward C5aR1 can also be dampened by altering the biologically active conformer allosterically 42 induced by the mutation of specific amino acids on C5a, which are not necessarily involved in interaction with C5aR1 at "Site1" or "Site2". Thus, the effect of C5a mutation on the binding and signaling of C5aR1 should also be evaluated from the dynamic structural perspective of C5a. Moreover, the binding of C5a to C5aR1 is not purely complementarity of cationic−anionic electrostatic interactions of side chains, as other amino acids on the NT-region of C5aR1 also contribute toward sustained hydrogen bonding and hydrophobic interactions with the C5a. Nevertheless, the current study validates the reported C5a−C5aR1 model complex and provides direct evidence of the involvement of two-step interactions of C5a at two discrete binding sites located on the C5aR1.
A better understanding of the "two-site" binding in the C5a−C5aR1 system can be advantageous from the therapeutic intervention standpoint. Traditionally, C5aR1 has been the preferred target for competitive inhibition of C5a, and currently, several small molecules and peptides are known in the literature that can competitively inhibit the binding of C5a to C5aR1 by targeting the orthosteric/allosteric "Site2" of C5aR1. 30 On the other hand, direct targeting of C5a for competitively inhibiting the "Site1" interactions of C5aR1 and the "two-site" binding of C5a to C5aR1 is comparatively less exploited. Nevertheless, designer complementary peptides targeting the antisense homology box (AHB) of C5a have shown some exciting results in the earlier cell culture and animal model studies. 43,44 In addition, recent studies 12 from our group also indicate that prednisone (PDN), a known corticosteroid, can bind to C5a with K d ∼ 0.38 μM, which can potentially modulate the interaction of the C5aR1 with C5a. Preliminary studies, presented in Figure 13, suggest that C5a preincubated with a near-saturating concentration of PDN (∼0.5 μM) demonstrates a comparatively weaker response toward the near-saturating concentration (∼0.35 μM) of the native NT-peptide (SR3: Met1-Lys28) of C5aR1 than the free C5a, which could be most likely due to the competitive inhibition of the NT-peptide by the binding of PDN to C5a. In this context, it is reasonable to believe that designer peptides targeting C5a will be able to competitively inhibit the binding of the NT-peptide (Site1) and the ECL2 peptide (Site2) of C5aR1 under native conditions. However, subsequent future studies will be necessary for solid validation of the above hypothesis.
CONCLUSIONS
The recruitment of C5aR1 by C5a, one of the most proinflammatory anaphylatoxins of the complement system, triggers a plethora of leukocyte responses, and indeed advanced studies over the years suggest that a derailed complement can eventually set the stage for several fatal immunological and inflammatory diseases. The newest addition to the list of fatal diseases is the COVID-19 pandemic, where a direct correlation has been noted between the plasma level of C5a and the severity of COVID-19. 9 Given the complexity of the immune response, an elevated level of C5a can also influence the secretion of other proinflammatory cytokines, leading to a cytokine surge. The favorable outcome observed with anti-C5 (Eculizumab) 45 and anti-C5a (Vilobelimab) 46 antibodies in the case of COVID-19 treatment further highlights the crucial role of C5a in the severe inflammation-induced 47 pathology of lung injury, followed by the significant damage to the other organ systems. The pleiotropic nature of C5a coupled with its involvement in multiple intertwined signaling pathways labels C5a as a potential target for exploring prospective therapeutics. The anti-C5a antibodies 46−48 neutralize the major functional epitopes on the bulk of C5a, which serve as the binding site for interacting with the NT-peptide of C5aR1. In this context, the current study provides an overtly simplified biophysical overview of the synergistic intermolecular interactions of C5a at the "two sites" of C5aR1, as hypothesized earlier in the model complex of C5a−C5aR1, which can be further exploited to engineer high-affinity designer bidentate peptides for targeted inhibition of C5a, as a secondary alternative to the heavyweight antibodies.
5. MATERIAL AND METHODS
5.1. Synthesis of the Major Peptide Fragments of C5aR1. A total of four peptides belonging to the extracellular surface of C5aR1, involving the two major peptide fragments (i) the N-terminus and (ii) the extracellular loop-2 (ECL2), were synthetically prepared using the standard Fmoc chemistry over solid phase by recruiting the services of GenScript (NJ, USA). Out of the four, three NT-peptides (SR3, SR4, and SR5) contained 28 amino acids and one ECL2 peptide (SR1) contained 25 amino acids. The SR1 peptide (Tyr174-Arg198) was acylated and amidated, respectively, at the N- and C-termini and also carried a C188/S mutation for the ease of synthesis and to avoid unwanted aggregation in solution. The SR3 peptide has the native NT-sequence of C5aR1 (Met1-Lys28), whereas the SR4 and SR5 peptides have mutations at strategic positions based on the model complex of C5a−C5aR1 reported earlier. All of the peptides have ≥ 95% purity as judged from the analytical HPLC profile recorded by recruiting a C18 (4.6 × 250 mm) column at 220 nm, using an acetonitrile−water gradient in the presence of 0.05−0.065% trifluoroacetic acid (TFA). The ESI-MS confirmed the integrity of all of the peptides.
5.2. Circular Dichroism (CD) Studies. The CD studies were carried out on a Chirascan CD spectrometer system in the far-UV region at 25°C using thoroughly filtered and degassed 1× PBS (pH ∼ 7.4). Each sample was subjected to a minimum of three scans with a time constant of 1 s and a step size of 1 nm. The peptides and the recombinant human C5a (R&D Systems) were appropriately solubilized only in 1× PBS (pH ∼ 7.4). In all of the titration studies, the concentration of C5a was maintained at 0.1 μM, and the concentration of the NT-peptides (SR3, SR4, and SR5) was varied between 0 and 1 μM. All of the samples were incubated for a minimum of 1 h at 4°C prior to the CD studies. The molar ellipticity was converted to mean residue ellipticity [θ MRE ], and the data were normalized and subjected to nonlinear regression in GraphPad Prism for estimating the binding affinity of the NT-peptides toward C5a. In addition, for probing the conformational changes due to the "two-site" binding effect, 0−1 μM ECL2 peptide (SR1) was titrated against 0.1 μM C5a preincubated with 0.5 μM NT-peptides. Each sample of C5a corresponding to the given concentration of the peptides was individually prepared and read separately in the instrument. A similar procedure was followed for the C5a and prednisone system. The CD signals in millidegrees arising from the corresponding concentrations of the free peptides in the buffer were subtracted from the CD signals observed for the C5a incubated in the presence of 0−1 μM peptides prior to the final processing of the data.
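As an illustration of the ellipticity conversion mentioned above, a minimal Python sketch is given below; the formula is the standard conversion from an observed signal in millidegrees to mean residue ellipticity, while the example numbers (cell path length, observed signal) are placeholder assumptions rather than values taken from the study.

def mean_residue_ellipticity(theta_mdeg, conc_molar, path_cm, n_residues):
    # Standard conversion: [theta]_MRE (deg cm^2 dmol^-1 residue^-1)
    # = theta(mdeg) / (10 * path(cm) * concentration(mol/L) * residue count).
    return theta_mdeg / (10.0 * path_cm * conc_molar * n_residues)

# Example: 0.1 uM C5a (74 residues) in an assumed 1 cm cell, observed signal of -2.5 mdeg.
print(mean_residue_ellipticity(-2.5, 0.1e-6, 1.0, 74))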
5.3. Fluorescence Studies. The fluorescence studies were performed in pure 1× PBS (pH ∼ 7.4) at 25°C, using a Cary Eclipse fluorescence spectrophotometer (Agilent Technologies) equipped with the PCB 1500 Water Peltier System. The excitation and emission slit widths were set to 5 nm, with an excitation wavelength range of 278−280 nm and an emission range between 290 and 450 nm. To maintain uniformity with the CD studies, the fluorescence titration studies were also performed in the presence of 0.1 μM C5a by varying the concentration of the NT-peptides between 0 and 1 μM. All of the spectra were recorded with an average of three scans, and the background spectra of the peptides in the corresponding buffer were appropriately subtracted. The fluorescence intensity of the C5a in the presence of the varying concentration of the NT-peptides was normalized at 350 nm, and the data were subjected to nonlinear regression in GraphPad Prism for estimating the binding affinity of the NT-peptides toward C5a. Similar to the CD studies, 0−1 μM ECL2 peptide (SR1) was also titrated against 0.1 μM C5a preincubated with 0.5 μM NT-peptides for probing the conformational changes in C5a due to the "two-site" binding effect. The fluorescence signals arising from the corresponding concentrations of the free peptides in the buffer were subtracted from the signals observed for the C5a incubated in the presence of 0−1 μM peptides prior to the final processing of the data.
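A minimal sketch of this kind of nonlinear regression is shown below in Python (numpy/scipy), assuming a 1:1 binding model with ligand depletion (a quadratic form is used because the fixed C5a concentration of 0.1 uM is comparable to the estimated K d values); the exact equation used in GraphPad Prism for the reported fits is not specified in the text, so this functional form, as well as the illustrative data points, are assumptions.

import numpy as np
from scipy.optimize import curve_fit

R_T = 0.1e-6  # fixed total C5a concentration (M), as used in the titrations

def bound_fraction(L_T, Kd):
    # Fraction of C5a bound for a 1:1 interaction, allowing for ligand depletion
    # (quadratic solution of the equilibrium binding equation).
    b = R_T + L_T + Kd
    return (b - np.sqrt(b**2 - 4.0 * R_T * L_T)) / (2.0 * R_T)

def signal(L_T, Kd, S0, Smax):
    # Normalized spectroscopic signal assumed to vary linearly with the bound fraction.
    return S0 + (Smax - S0) * bound_fraction(L_T, Kd)

# Illustrative (made-up) normalized titration data; real fits would use the
# normalized CD or fluorescence responses described above.
peptide_conc = np.array([0.0, 0.05, 0.1, 0.2, 0.35, 0.5, 0.75, 1.0]) * 1e-6
norm_signal = np.array([0.00, 0.18, 0.33, 0.55, 0.75, 0.88, 0.96, 1.00])

popt, pcov = curve_fit(signal, peptide_conc, norm_signal, p0=[100e-9, 0.0, 1.0])
print("Kd estimate (nM):", popt[0] * 1e9)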
5.4. Molecular Dynamics (MD) Studies. The C5a (PDB ID: 1KJS) complexed to the native (SR3) and mutant (SR4 and SR5) NT-peptides was subjected to MD simulations for 100 ns each at 300 K in the presence of simple point charge (SPC) water molecules, with solvent density set to the value corresponding to 1 atm at 300 K, by recruiting the GROMACS package 49 as described earlier. 34,42,50 The C5a−SR3 model complex reported earlier served as the reference for, respectively, generating the C5a−SR4 and C5a−SR5 mutant complexes. All of the systems were neutralized by randomly placing the appropriate number of chloride ions and were equilibrated twice, first under NVT (0.5 ns), followed by NPT (0.5 ns) conditions before the production MD run. Conformational clustering was performed at an interval of 50 ps with a backbone RMSD cutoff ≤ 1.5 Å by recruiting the gromos fitting method, as defined in GROMACS. PyMOL (The PyMOL Molecular Graphics System, Version 1.1r1, Schrodinger, LLC) and Discovery studio (Accelrys) software were utilized for initial processing, visualization, analysis, and presentation of the protein structures. The utility programs available in GROMACS were implemented for the detailed analysis of all of the MD trajectories.
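As an aside, the gromos clustering criterion referred to above can be sketched in a few lines of Python; this is a simplified re-implementation of the published greedy algorithm operating on a precomputed pairwise RMSD matrix, not the GROMACS code itself, and the random matrix in the usage example merely stands in for real backbone RMSDs.

import numpy as np

def gromos_clusters(rmsd_matrix, cutoff=0.15):
    # Greedy clustering: repeatedly take the frame with the most neighbors within
    # the RMSD cutoff (in nm; 0.15 nm = 1.5 A) as a cluster center, assign the
    # center and its neighbors to that cluster, remove them, and repeat.
    unassigned = set(range(rmsd_matrix.shape[0]))
    clusters = []
    while unassigned:
        idx = sorted(unassigned)
        counts = {i: sum(rmsd_matrix[i, j] <= cutoff for j in idx) for i in idx}
        center = max(counts, key=counts.get)
        members = [j for j in idx if rmsd_matrix[center, j] <= cutoff]
        clusters.append(members)
        unassigned -= set(members)
    return clusters  # clusters[0] is the most populated cluster

# Usage with a random symmetric matrix standing in for backbone RMSDs (nm).
rng = np.random.default_rng(0)
d = rng.uniform(0.05, 0.4, size=(20, 20))
d = (d + d.T) / 2.0
np.fill_diagonal(d, 0.0)
print([len(c) for c in gromos_clusters(d)])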
5.5. Binding Energy Calculation Studies. The 100 ns MD trajectories of the C5a complexed to the wild type (SR3) and the mutant (SR4 and SR5) NT-peptides were, respectively, used for calculating the relative binding free energies of the biomolecular complexes by recruiting the molecular mechanics Poisson−Boltzmann surface area (MM-PBSA) method, as described elsewhere. 51 The dielectric constant of the solute and solvent were, respectively, fixed at 20 and 80 for the calculation of polar solvation energy. Variation of solute dielectric in the 2−20 range altered the total binding energy to some extent, but the overall trend remained the same for the peptides. Finally, 500 conformers derived from the first major cluster populated over 100 ns of the MD trajectory for each of the complexes were, respectively, used for calculating the average binding free energy.
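The final averaging step is simple enough to sketch directly; the Python fragment below assumes per-conformer free energies (each already combining the molecular mechanics, polar PB, and nonpolar solvation terms) are available as arrays, and the placeholder numbers are illustrative rather than taken from the reported calculations.

import numpy as np

def binding_free_energy(g_complex, g_receptor, g_ligand):
    # Per-frame binding free energy with the entropic term neglected, as is common
    # in single-trajectory MM-PBSA: dG = G(complex) - G(receptor) - G(ligand).
    return g_complex - g_receptor - g_ligand

# Placeholder per-conformer energies (kcal/mol) for 500 frames drawn from the
# most populated cluster, standing in for the actual MM-PBSA output.
rng = np.random.default_rng(1)
g_complex = rng.normal(-12000.0, 15.0, 500)
g_receptor = rng.normal(-9000.0, 12.0, 500)
g_ligand = rng.normal(-2500.0, 10.0, 500)

dg = binding_free_energy(g_complex, g_receptor, g_ligand)
print(f"dG_bind = {dg.mean():.2f} +/- {dg.std(ddof=1):.2f} kcal/mol")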
Supporting Information. HPLC and ESI-MS profiles of the synthetic peptides; fluorescence signals of the peptides compared to C5a; cluster analysis of the MD trajectories; molecular mechanics energy plot of the C5a complexed to different NT-peptides; and absence of intermolecular cross-reactivity between the peptide fragments as judged from CD and fluorescence (PDF) | 2021-09-12T05:22:21.073Z | 2021-08-26T00:00:00.000 | {
"year": 2021,
"sha1": "7815a814caa3c8dabee912bb649f0c29dd7f8451",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7815a814caa3c8dabee912bb649f0c29dd7f8451",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3233965 | pes2o/s2orc | v3-fos-license | Chunking in working memory via content-free labels
A recent study found that visual working memory performance was enhanced when pairs of colors were predictably paired, and it was interpreted as a form of “memory compression” which implies that more colors could be stored online in a more efficient format. Here we propose an alternative hypothesis that does not entail any increase in the number of individuated representations stored online. Instead, familiar ensembles of items may be attached to a content-free label (e.g., remembering red-white-blue as “American flag”) that can be used to retrieve the constituents of a chunk when they are needed to guide a response. If accessing “compressed” memories requires an additional retrieval process, then access to compressed items should be slower than for uncompressed items. Indeed, Experiments 1 (visual) and 2 (verbal) showed that response times were substantially longer in patterned (i.e., compressed) than in control conditions. In Experiments 3 and 4, regularity-based advantages were eliminated with brief (1000 or 875 ms) response deadlines, in line with our hypothesis that accessing compressed memories requires a slow retrieval process. In sum, while statistical regularities can enable access to larger amounts of information, this information may not be available “online” in the same way as singleton items.
necessary, these details can be retrieved from the relevant documents. Chunking has been an important topic of cognitive psychology since the early pioneers [10][11][12] . In the classic work of Miller 12 , he proposed that the human cognitive capacity is limited to several chunks. Critically, even if the number of chunks remains constant, it is possible for observers to increase the amount of information referenced by a chunk via associative learning. For example, classic studies of chess expertise 10,11,13 showed that players at and above master level remember the positions in real games much better than novices. However, their memory of random positions is basically the same as that of the novices. Thus, the chess masters do not have a greater number of chunks than novices in their memory. Instead, they know how to represent multiple pieces within common game patterns as chunks, whereas the novices do not. In the latter, content-free labels of chunks replace the raw color information, and the "decoding rules" of labels have to be stored as offline representations.
Content-Free Label in Chunking
There are several reasons why we believe that chunking may be based on content-free labels. First, from a computational point of view, the use of content-free label seems like an obvious strategy. In a computer system, a handle is an abstract reference to a resource, and the handle is content-free. A computational system that is not designed in this way would be inefficient when the label alone is sufficient to retrieve each element when necessary. For example, a file name like "cities.txt" could be used for a file that includes the names of five cities (Rome, Paris, Boston, Tokyo, Beijing) instead of a file name like "RomeParisBostonTokyoBeijing.txt". Clearly, the former strategy is more efficient when the individual city names are not yet needed. Similarly, content-free labels could provide an efficient way to handle familiar ensembles of information in a WM task, especially when there is motivation to simultaneously store information about other stimuli.
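To make the computational analogy concrete, a minimal Python sketch of the proposed scheme is given below: working memory holds only labels (plus any unchunked singletons), and the constituents of a chunk live in a separate long-term store that must be consulted by an explicit retrieval step. The store contents and function names are purely illustrative.

# Long-term knowledge: content-free labels mapped to the constituents of each chunk.
LONG_TERM_STORE = {
    "american_flag": ("red", "white", "blue"),
    "FBI": ("F", "B", "I"),
}

# Working memory holds only the labels and any unchunked singleton items.
working_memory = ["american_flag", "green"]

def report_item(wm, probe_index):
    # Unpack every label into its constituents before answering the probe.
    # On the present hypothesis, this unpacking is the slow retrieval step
    # that lengthens response times for chunked ("compressed") material.
    unpacked = []
    for entry in wm:
        unpacked.extend(LONG_TERM_STORE.get(entry, (entry,)))
    return unpacked[probe_index]

print(report_item(working_memory, 1))  # -> "white", recovered via the label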
Second, the use of content-free labels in chunking seems obvious in some cases. For example, someone remembering the string "internationalizationcongratulationmisinterpretation" would probably agree that they are not holding online individuated representations for each letter in this string, even if every letter could be recalled perfectly given sufficient time. Similarly, a Chinese reader remembering the visual pattern would probably agree that they do not maintain online individuated representations for all the visual details (e.g., orientations and sizes of each part of a stroke), even if those details can be recalled perfectly given enough time. A natural explanation for this introspection is that people hold content-free labels in mind rather than the individual items that comprise each ensemble.
Third, the use of content-free labels seems to be naturally implied by some other important concepts such as type/token distinction 14 or object files 15 . In the type/token distinction, if a type repeats frequently as many tokens, then it seems plausible that each of the token will only represent a "content-free label" and ignore the visual details (i.e., leave them in LTM as we assumed). For example, when there are many identical cars in a visual scene, it seems plausible that the cognitive system will represent each token of car as a "content-free label". This type/token distinction has been applied to explain various phenomena and/or mechanisms such as attentional blink 16 and visual WM 17 . In all these cases, it seems natural that the tokens function as content-free labels.
Cost of Decoding
Although the introspection is obvious in the cases described above, it is not so obvious in borderline cases. For example, it is not obvious whether the memory of a "Belgium flag" contains the active representation of the three colors or not, or the memory of a word "dog" contains active representations of the three letters or not. Nevertheless, our hypothesis is that even in such borderline cases, chunks are maintained via content-free labels that do not give access to the individual elements in the chunk. Instead, an individual element within a chunk can only be used after a decoding process that enables the elements of a chunk to be actively represented as individuated items in WM. We reasoned that this retrieval process should take time, thereby delaying responses that are guided by those individuated representations. Therefore, we predict that access to information, and consequently response times in a visual WM task, will be slower when subjects are exploiting statistical regularities than when they are not. The present experiments tested this prediction.
Chunking in Verbal WM
Although memory compression is a relatively new notion in visual WM, the role of chunking has been well-established in verbal WM 18 . For example, one can more easily remember the string "fbicbsibmirs" than a randomly-generated string of the same length. We propose that the "content-free labels which refers to offline representations" are general to all types of chunking and are therefore also responsible for chunking in verbal WM. In the "fbicbsibmirs" example, one will only remember four acronyms, FBI, CBS, IBM, and IRS, as content-free labels, but the composition of each acronym is not represented on-line and is retrieved from an offline state when it is needed. Our hypothesis falls in line with the interpretation offered by Chen and Cowan 8 when they trained observers to remember pairs of familiar words, and found that subjects could maintain an equivalent number of these pairs as they could word singletons. While their study was focused on a "core capacity" that could be defined by the number of chunks stored, our study focuses on the predicted cost of unpacking those chunks when a decision requires access to the constituents of a chunk. Thus, we predict that in a task which requires information about an individual letter (e.g., cuing a location in a sequence and report the letter), responses will be slower for a "chunked string" than a string consisting of random letters.
Experiment 1-2: Slower Response for Patterned Blocks
Experiments 1-2 tested whether access time for items in patterned blocks (i.e., blocks with statistical regularities) would be slower than for items in control blocks (i.e., blocks without statistical regularities), in line with the above-mentioned hypothesis that improved memory performance in the patterned blocks was accompanied by a cost of decoding. Experiment 1 used colors as stimuli to test visual WM, whereas Experiment 2 used letters as stimuli to test verbal WM.
Method.
In all experiments, the stimuli were presented on a 1,024 × 768 pixels CRT color monitor. The observers viewed the display from a distance of about 60 cm and entered responses using a keyboard. The program was written in Microsoft Visual Basic 6.0 and was run on Microsoft Windows XP using timing routines tested with the Blackbox Toolkit (Blackbox Toolkit Ltd., York, England).
Participants. Students at the Chinese University of Hong Kong completed Experiments 1 and 2 for a compensation of HK$50. There were 32 participants in each of Experiments 1 and 2. All had normal or corrected-to-normal vision. All experiments of the present study were carried out in accordance with approved guidelines. The consent form and experimental procedures received prior ethical approval from research ethics committee of the Chinese University of Hong Kong. Informed consent was obtained from each participant.
Stimuli. Sample stimuli displays of Experiments 1 and 2 are respectively shown in Fig. 2a and c. In Experiment 1, eight colors were presented, two on each of the 4 corners (i.e., left-top, left-bottom, right-top, right-bottom), against a gray background (Fig. 2a). The two colors of each corner occupy the top and bottom half of a 1.04 cm × 1.04 cm square, which is 1.47 cm away from the center of the display. For each observer, the 8 colors (red, green, yellow, blue, cyan, purple, black, and white) were divided into 4 pairs. This division was randomized across different observers (i.e., partial counterbalancing). In each trial of patterned blocks, the 4 pairs were randomly arranged in the 4 corners. In other words, the color values in each pair (e.g., red-top-black-bottom) were constant, but that the position of each pair was randomized. In each trial of control blocks, the 8 colors were randomly arranged in the 8 possible positions.
In Experiment 2, the stimuli consisted of 8 black letters (Fig. 2c). The two letters of each corner occupied its left and right halves. The 8 letters were divided into 4 pairs (GO, HI, BY, ME), which were all very common words. In each trial of the patterned blocks, the 4 words were randomly arranged in the 4 corners. In each trial of the control blocks, the 8 letters were randomly arranged in the 8 possible positions.
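A minimal Python sketch of the two trial types in Experiment 1 is given below; the particular color pairing and corner labels are illustrative stand-ins for the counterbalanced assignments described above.

import random

COLORS = ["red", "green", "yellow", "blue", "cyan", "purple", "black", "white"]
CORNERS = ["top-left", "bottom-left", "top-right", "bottom-right"]

# One hypothetical observer's fixed pairing; in the study the pairing was
# randomized across observers.
PAIRS = [("red", "black"), ("green", "white"), ("yellow", "purple"), ("blue", "cyan")]

def patterned_trial():
    # Pair identities stay constant; only the corner occupied by each pair varies.
    pairs = PAIRS[:]
    random.shuffle(pairs)
    return {corner: {"top": top, "bottom": bottom}
            for corner, (top, bottom) in zip(CORNERS, pairs)}

def control_trial():
    # All eight colors are shuffled independently across the eight positions.
    colors = random.sample(COLORS, len(COLORS))
    return {corner: {"top": colors[2 * i], "bottom": colors[2 * i + 1]}
            for i, corner in enumerate(CORNERS)}

print(patterned_trial())
print(control_trial())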
Procedure. The sequence of presentations is shown in Fig. 2b. At the start of each trial, a small white fixation cross was presented in the center of the display for 400 ms, and then the stimuli were presented for 400 ms and then disappeared. One second following the disappearance of the stimuli, the test display was presented marking 1 of the 8 positions and showing two choices (e.g., 2 colors in Experiments 1 and 2 letters in Experiment 2), one of which had appeared in this marked position and the other (i.e., the incorrect choice) was randomly selected from the other seven possible items. This test display remained on the display until the observers responded. The observers had to decide which of two items in the test display had been presented on the marked position of the stimuli display and then press one of two adjacent keys ("j" for the left item and "k" for the right item) to indicate their response. They were asked to give first priority to the accuracy of their responses and also try to respond as quickly as possible. After responding, the observers heard either a pleasant or an unpleasant tone to indicate whether their response was correct, and the next trial began 400 ms later. Each observer completed 10 blocks (60 trials per block). For half of the participants, blocks 1-5 were patterned blocks whereas blocks 6-10 were control blocks, whereas for the other half, this order was reversed. The first patterned block and the first control block (i.e., blocks 1 & 6) were regarded as learning of the presence/absence of color patterns and excluded from the analysis. Fig. 2d, the memory capacity, as estimated with Cowan's k 18,19 , signif- In Experiments 1-2, response times were calculated over correct trials only. The RT outliers of each participant were excluded by first removing all response time values greater than 10,000 ms, and then removing all values beyond 3SDs. As shown in Fig. 2e, the response times were substantially slower in the patterned than the control blocks (Experiments 1: 1291 ms vs. 1108 ms, t (31) = 3.61, Cohen's d = 0.64, p < 0.0005; Experiments 2: 1474 ms vs. 1300 ms, t (31) = 3.67, Cohen's d = 0.65, p < 0.0005), confirming the cost of decoding in both visual and verbal WM.
Results and discussion. As shown in Fig. 2d, the memory capacity, as estimated with Cowan's k 18,19 , was significantly higher in the patterned than in the control blocks, replicating the regularity-based advantage in both the visual and the verbal task. In Experiments 1-2, response times were calculated over correct trials only. The RT outliers of each participant were excluded by first removing all response time values greater than 10,000 ms, and then removing all values beyond 3 SDs. As shown in Fig. 2e, the response times were substantially slower in the patterned than the control blocks (Experiment 1: 1291 ms vs. 1108 ms, t(31) = 3.61, Cohen's d = 0.64, p < 0.0005; Experiment 2: 1474 ms vs. 1300 ms, t(31) = 3.67, Cohen's d = 0.65, p < 0.0005), confirming the cost of decoding in both visual and verbal WM.
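To make these two analysis steps concrete, the minimal Python sketch below computes a guessing-corrected capacity estimate and applies the RT exclusion rule described above. The specific two-alternative correction k = N(2p − 1) is an assumption about which variant of Cowan's k was used; the set size of 8 matches the displays described earlier.

```python
import numpy as np

def cowans_k(prop_correct, set_size=8):
    """Guessing-corrected capacity for a two-alternative probe:
    solve p = k/N + (1 - k/N) * 0.5 for k, giving k = N * (2p - 1).
    Whether this exact variant was used here is an assumption."""
    return set_size * (2.0 * np.asarray(prop_correct) - 1.0)

def trim_rts(rts_ms, cutoff_ms=10_000, n_sd=3.0):
    """RT exclusion as described above: drop values above 10,000 ms,
    then drop values more than 3 SDs from the mean of the remainder."""
    rts = np.asarray(rts_ms, dtype=float)
    rts = rts[rts <= cutoff_ms]
    return rts[np.abs(rts - rts.mean()) <= n_sd * rts.std()]
```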
A further analysis revealed an interesting sequence-order effect on the response times: for each pair in the patterned blocks, the response to the top color was significantly faster than that to the bottom color in Experiment 1 (1256 ms vs. 1335 ms, t(31) = 2.76, p < 0.005), and the response to the left-side letter was significantly faster than that to the right-side letter in Experiment 2 (1425 ms vs. 1524 ms, t(31) = 3.92, p < 0.0005). As shown in Fig. 2f, this sequence-order effect was significantly smaller in the control blocks than in the patterned blocks (Experiment 1: 9 ms vs. 79 ms, t(31) = 2.51, p < 0.01; Experiment 2: 41 ms vs. 99 ms, t(31) = 2.29, p < 0.02). These results suggest that when a pair needs to be decoded, the individual items are retrieved in a stereotyped order, from top color to bottom color, or from left-side letter to right-side letter.
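The sequence-order comparison can be computed per participant as a paired test on mean correct RTs for the first- versus second-retrieved member of each pair. The sketch below assumes hypothetical per-participant arrays rt_first and rt_second; the variable names are illustrative.

```python
import numpy as np
from scipy import stats

def sequence_order_effect(rt_first, rt_second):
    """Paired t-test of per-participant mean RTs for the item retrieved
    first (top colour / left letter) vs. second (bottom colour / right
    letter). Returns the mean RT difference (second minus first) and the
    paired-test statistics."""
    rt_first, rt_second = np.asarray(rt_first), np.asarray(rt_second)
    t, p = stats.ttest_rel(rt_second, rt_first)
    return (rt_second - rt_first).mean(), t, p
```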
The time course of these effects is illustrated in Fig. 2g-j. The block number refers to the order of a block within its own type. For example, block 2 of the patterned blocks is the second block overall for an observer run in the patterned-control order, but the seventh block overall for an observer run in the control-patterned order. Consistent with Brady et al. 1 , for both accuracy (i.e., memory capacity) and RTs, the differences between the patterned and control blocks were relatively small in Block 1, and these differences gradually increased with more learning.
Experiments 3-4: A Response Deadline Method
Experiments 1-2 suggest that access times for compressed items may be slower than for uncompressed items, in line with a chunking account in which the details about items within an ensemble must be retrieved before they can be used to guide behavior. Experiments 3-4 provided converging evidence for this hypothesis using a response-deadline approach in which observers were forced to respond within a specified amount of time. If the additional information available in the patterned condition has to be retrieved from an offline state, then the regularity-based advantage should be abolished when brief response deadlines preclude successful retrieval. By contrast, if memory compression enables a larger number of items to be stored online in WM, then a regularity-based advantage should be evident even for brief response deadlines. Thus, Experiments 3-4 enabled a clearer measure of the temporal dynamics of information access in the patterned and control blocks.
Method. The method of Experiments 3-4 was identical to that of Experiments 1-2 with the following exceptions. There were 40 observers in each experiment. The two items (i.e., two choices) in the test display shrank from view at different rates, and the observers were instructed to respond before the disappearance of these items. If the observers failed to respond before the items disappeared, the trial was marked as an incorrect response and followed by the unpleasant feedback sound. The duration of the test displays was either brief (1000 ms in Experiment 3; 875 ms in Experiment 4) or prolonged (2500 ms), and the two levels of duration (prolonged vs. brief) were randomly intermixed within each block.
Results and Discussion
The results of Experiments 3 & 4 are shown in Fig. 3. There were clear interactions between the effects of regularities and the time available for responding (Fig. 3a and b). At the prolonged response deadline, a substantial advantage was observed in the patterned condition compared to the control condition (Experiment 3: 4.10 vs. …), whereas this advantage was greatly reduced at the brief deadline. To describe these interactions from another perspective, there were modest improvements of performance over time in the control blocks. This is natural because the RT-accuracy trade-off is a ubiquitous phenomenon and various factors impair performance when time is limited. More critically, the improvements of performance over time were much greater in the patterned than in the control blocks (1.69 vs. 0.48 in Experiment 3; 2.79 vs. 1.29 in Experiment 4). This implies that time is much more critical in the patterned than in the control blocks, presumably because exploiting the regularities in the patterned blocks requires time for the retrieval of offline representations.
We did not attempt to exclude RT outliers in Experiments 3-4 because the RTs were already limited by the response deadlines. The response time data of the correct trials (Fig. 3c,d) confirmed the finding of Experiment 1. The response times were significantly slower in the patterned than the control blocks in all levels of response deadlines (p < 0.001 for all levels).
We also analyzed the sequence-order effects in the same way as in Experiments 1-2. As shown in Fig. 3e,f, the sequence-order effects were generally smaller in Experiments 3-4 than in Experiments 1-2, probably because they were weakened by the time pressure of the response deadline. Importantly, the pattern of results was fairly consistent with our proposal for content-free labels. There were significant sequence-order effects in the patterned condition at the prolonged response deadline in both Experiment 3 (F(1, 39) = 5.81, ηp² = 0.13, p < 0.025) and Experiment 4 (F(1, 39) = 33.81, ηp² = 0.46, p < 0.0001). These sequence-order effects were reduced at the brief response deadlines in both Experiment 3 (F(1, 39) = 2.90, ηp² = 0.07, p < 0.1) and Experiment 4 (F(1, 39) = 16.11, ηp² = 0.29, p < 0.0005). Moreover, the interaction between response deadline (prolonged vs. brief) and condition (patterned vs. control) was nearly significant in Experiment 3 (F(1, 39) = 3.36, ηp² = 0.08, p < 0.08) and significant in Experiment 4 (F(1, 39) = 8.20, ηp² = 0.17, p < 0.01). To sum up, the results of Experiments 3-4 again clearly show that although additional information is available in the patterned trials, accessing that additional information takes a substantial amount of time. At brief response deadlines, there was little trace of an advantage in the patterned condition, which is inconsistent with the claim that subjects had a larger number of items represented online in the patterned condition. Instead, this empirical pattern is consistent with our hypothesis that memory compression is based on a slow process for retrieving information about the constituents of a chunk.
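The within-subject interaction tests reported above can be run with a repeated-measures ANOVA. The sketch below uses statsmodels and assumes a hypothetical long-format table df with columns subject, deadline, condition and score (one aggregated score per subject and cell); the column names are illustrative.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def deadline_by_condition_anova(df: pd.DataFrame):
    """2 x 2 within-subject ANOVA (deadline: prolonged vs. brief;
    condition: patterned vs. control) on the chosen dependent measure,
    assuming one aggregated observation per subject and cell."""
    return AnovaRM(data=df, depvar="score", subject="subject",
                   within=["deadline", "condition"]).fit()
```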
General Discussion
To summarize our findings, Experiments 1 and 2 replicated previous demonstrations that statistical regularities enable access to a larger number of feature values in a visual WM task 1 and in a verbal WM task 18 . This advantage, however, was accompanied by a marked slowing in the speed with which those compressed representations could be used to guide behavior. Experiments 3-4 provided converging evidence for this conclusion using a response deadline procedure. With brief response deadlines, performance was equivalent in the patterned and control conditions, in line with our hypothesis that accessing "compressed" information from regular pairs requires a relatively slow retrieval process that could not be completed in the brief deadline conditions. Thus, we propose that regularity-based advantages in both visual WM (i.e., memory compression effects) and verbal WM (i.e., the chunking advantage) may reflect a dynamic collaboration between online and offline representations, such that content-free labels of chunks are stored online while the values of the associated items are retrieved from an offline state when they are needed to guide behavior. Critically, this suggests that regularity-based advantages may not change the number of individuated representations that can be stored in WM, in line with Chen and Cowan's observation that subjects could store the same number of well-learned word pairs as they did word singletons. The present work extends this finding by demonstrating the temporal cost of decoding the contents of a chunk.
Fig. 2 caption (fragment): for both accuracy (i.e., memory capacity) and RTs, the differences between the patterned and control blocks were relatively small in Block 1 and gradually increased with learning. Error bars show within-subject 95% confidence intervals 32 .
Relation to Models of WM. In the traditional framework of multi-store models 20,21 , which holds that the underlying mechanisms of WM and long-term memory (LTM) are separate from each other, the notion of content-free labels implies that only those labels are represented online in WM, while the actual content of the chunks is stored offline in LTM. Thus, information about individual elements within a chunk is not available until it is retrieved into WM from LTM. That said, the notion of content-free labels is equally compatible with unitary-store models 18,22-25 that posit a common representational space for WM and LTM, such that WM is an activated subset of LTM. Here, the notion of a content-free label implies that the labels - which are devoid of individual item information - are held in an activated state, while the details about the items associated with those labels must first be moved into an active state if they are to guide behavior. Thus, activation of a content-free label does not give access to an individual element within the associated chunk until time is taken to shift that information into an active state. In both cases, the critical point of the notion of a content-free label is that an individual element within a chunk can only be used after a decoding process.
Fig. 3 caption: there is a clear interaction as predicted by the chunking account: a substantial advantage in the patterned condition compared to the control condition at the prolonged response deadline (2500 ms), which almost disappeared at the brief response deadlines (1000 ms and 875 ms in Experiments 3 and 4). The response time data (panels b, d) confirm the finding of Experiments 1-2: response times were significantly slower in the patterned than the control blocks at all deadline levels. Sequence-order effects (panels e, f) were significant in the patterned condition at the prolonged deadline and reduced at the brief deadlines. Error bars show within-subject 95% confidence intervals 32 .
Storage of the Content of Chunks. We assume that the contents of chunks are stored in LTM because it seems both burdensome and unnecessary to keep them in WM. Nevertheless, the prediction about the cost of decoding does not depend on this assumption about the locus of storage of the content of chunks. The critical prediction of the present study, namely the cost of decoding, relies on the assumption that retrieving the content of a chunk is time-consuming. The rationale is simply that this retrieval is an extra step that is required when there are statistical regularities but not when there are none. We do not argue that retrieval from an offline state must always be slower than accessing information in WM, a generalization that fails in many different scenarios (e.g., reporting the capitals of known countries vs. capitals of hypothetical countries that were just learned). Nevertheless, our data are well explained by the hypothesis that subjects took longer to retrieve the content of a chunk because it required them to access associative memories.
Memory Scanning. One may potentially suggest that, although the slowed response times in Experiments 1-4 are consistent with the hypothesis that accessing a chunk requires retrieval from offline representations, these findings could also reflect a simple increase in the time required for "memory scanning" over larger numbers of representations in WM 26 . There are a few reasons to question the validity of this alternative account.
First, our response deadline studies showed that there was little benefit in the patterned condition over control condition in brief response deadlines, suggesting that the additional regularity-based information was unavailable. One may suggest that this lack of an advantage in the patterned condition (with brief response deadlines) is due to the cost of scanning a larger set of memorized items. However, this scanning cost is based on the assumption of a "random-order scanning" process in which the spatial cue provided is ignored, and observers scan randomly until they happen to encounter a representation that matches the position of the spatial cue. This kind of random scanning seems unlikely, particularly in light of the fact that spatial position is an essential and salient part of these memory representations. By contrast, if subjects first consult the item in memory in the cued position, it is clear that a larger number of items in WM would yield better performance at the earliest response deadlines.
Second, the memory scanning account presumes that this scanning process takes hundreds of milliseconds per item, far longer than the typical scanning costs estimated from past memory scanning paradigms 26 .
Third, the conceptual necessity of the "memory scanning" hypothesis can be questioned. Clearly, in laborious cases such as remembering the string "internationalizationcongratulationmisinterpretation", it seems absurd to assume that all 51 letters are individually held in an online system that takes minutes to scan, and one would have to accept the use of content-free labels. Therefore, the content-free label is a necessary notion, and it is theoretically more parsimonious to assume that it also accounts for the more moderate cases in the present experiments. Of course, it is possible that the RT costs in the laborious cases and those in the moderate cases are fundamentally different. However, it seems to us that the only empirical difference is that the RT costs can be easily verified by introspection in the former case but have to be experimentally measured in the latter case, and this is not a good conceptual reason to assume a theoretical dichotomy.
Fourth, and most decisively, the sequence-order effect in Experiments 1-4 cannot be explained by increased scanning time in the patterned condition because the amount of stored information was identical regardless of whether the top or the bottom color was tested, or whether the left-side or the right-side letter was tested. In other words, if both the top color and the bottom color of a pair are individually encoded in visual WM and the longer response time was simply due to the "scanning of more items", then there is no reason to believe that the response should be slower for the bottom color than for the top color. On the other hand, this sequence-order effect can be naturally explained by assuming that the content of a chunk is retrieved in a stereotyped order.
Chunking vs. Partial Memory. Brady et al. 1 considered and ruled out an alternative account in which the observers have remembered one color from each pair and use that to infer the other color. It should be clarified that the chunking account of the present study is fundamentally different from this "partial memory" account. Basically, in chunking, the observers remember a content-free label and use that label to "infer" the content of the chunk. For example, in the chunking account, the string "dogduckdolphin" is remembered as 3 animal concepts, but in the "partial memory" account, this string needs to be remembered as "ddd", which causes confusion in this case because the "d" could be followed by different letters.
Brady et al. 1 provided a few findings against the "partial memory" account. First, memory for low-probability pairs was also better when they were stored with high-probability pairs, while a strategy of guessing based on the value of one item in a pair predicts worse performance for low-probability pairs. This finding is also consistent with the chunking account because, as Brady et al. 1 assumed, the chunking of high-probability pairs could have saved representational resources for low-probability pairs even if the latter are not remembered as chunks. Second, Brady et al. 1 found that when observers made guesses, they were not more likely to guess the high-probability partner of an item when a low-probability pair was tested. For example, if red is usually paired with green but is paired with yellow in a trial, then if observers fail to report the color "yellow", they are no more likely to report "green" than another irrelevant color such as "blue". This finding is also consistent with the chunking account. Although observers will use a label to represent the regular pair "red-green", they will not use this label in the case of "red-yellow", and there is no reason to believe that they would be especially likely to guess "green" in that case.
Labeling of Perceptual Information. This chunking process may also be important for perceptual input itself, in addition to memorized information. For example, following the Boolean map theory 27 , it was proposed that the features in a familiar pattern (e.g., the colors of the Stars and Stripes flag) may be consciously accessed as a whole label, and that the colors themselves are not directly represented but are inversely inferred from this "flag" label 28 . It has been predicted, and confirmed, that there is no familiarity-based benefit for aspects of the features that are orthogonal to the familiarity (e.g., dark red vs. light red in the Stars and Stripes flag) 28 .
Subjective Experience of the Offline Representations. At first sight, the reliance on offline representations may lead one to a rather strange picture of how they would be subjectively experienced. An observer is not faced with colored objects, but rather only "empty" regions with abstract labels attached to them. One may therefore find this notion strange because it is inconsistent with the intuition that real colors, not abstract labels, are kept in memory.
However, this interpretation may be misleading because the vivid subjective experience of memorizing real colors could well be built upon the mechanism of offline representations. As Dennett 29 pointed out, human observers may often misinterpret the information that can be readily fetched as what is represented at the moment, and regard that information as part of their subjective experience. Dennett 29 illustrated the case of peripheral vision as such an example. Human observers do not have "clear images" of peripheral objects represented in their visual systems, yet they subjectively perceive these peripheral objects as clear rather than blurry, perhaps because the observers can obtain "clear images" of these objects by fixating on them whenever they want. Similarly, even if the online representations directly represent only the labels, the subjective impression of "remembering the colors" may still readily emerge because the colors indicated by these labels can be retrieved from offline representations in an efficient manner.
One line of evidence consistent with this view is from the studies of change blindness 30,31 which showed that human observers are surprisingly poor at detecting changes in their environments when they have a strong impression of seeing the whole scene at once. So, perhaps our subjectively rich and detailed visual experiences belie a persistent role for offline representations in our mental representations of the world. | 2018-04-03T02:10:39.983Z | 2018-01-08T00:00:00.000 | {
"year": 2018,
"sha1": "538a95ca0e076d55dfcb1547130c3460c203cc3a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-18157-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f074ed21b556c70996f31fdb2549c57c795b0a84",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
88522518 | pes2o/s2orc | v3-fos-license | Bayesian method for inferring the impact of geographical distance on intensity of communication
Spatially-embedded networks represent a large class of real-world networks of great scientific and societal interest. For example, transportation networks (such as railways), communication networks (such as Internet routers), and biological networks (such as fungal foraging networks) are all spatially embedded. Both the density of interactions (presence of edges) and intensity of interactions (edge weights) are typically found to decrease as a function of spatial separation of nodes in these networks. Communication and mobility of groups of individuals have also been shown to decline with their spatial separation, and the so-called gravity model postulates that this decline takes the form of a power-law holding at all distances. There is however some evidence that the rate of decline might change as the distance increases beyond a certain value, called a change point, but there have been few statistically principled methods for determining the existence and location of change points or assessing the change in intensity of interactions associated with them. We introduce such a method within the Bayesian paradigm and apply it to anonymized mobile call detail records (CDRs). Our results are potentially useful in settings where understanding social and spatial mixing of people is important, such as in the design of cluster randomized trials for studying interventions for infectious diseases, but we also anticipate the method to be useful for investigating more generally how distance may affect tie strengths in general in spatially embedded networks.
Spatially embedded networks are networks in which each node has been assigned a fixed location in some underlying Euclidean space. Although this description could include embedding of nodes in a covariate space (e.g., representing fitness of nodes), here we focus on geographically embedded networks, i.e., networks that have been embedded in a two-dimensional Euclidean space where the positions of the nodes can be interpreted as geographical locations. Although this interpretation is not necessary for the formulation or use of the method, it applies to our specific application.
With the rise of communication and social network technologies, the role of spatial distance in establishing and maintaining social ties is constantly changing [1][2][3] . Knowing that two individuals communicate with one another using a specific channel or mode of communication makes them more likely to use another as well [4][5][6] . For example, people who speak on the phone frequently also interact in person 7 . For researchers studying infectious diseases, such as HIV/AIDS or malaria, the structure of social interactions in a population can provide valuable insights into how pathogens are transmitted among members of that population [8][9][10] . Another context in which the interplay between social ties and geography is important is the delivery of healthcare. Patterns of care delivery can be naturally represented as networks, wherein two physicians are connected to one another if they share one or more patients 11 . The clusters of physicians in these networks often do not coincide with institutional boundaries but instead extend across them 12 . The literature on geographic variations in healthcare costs and outcomes was launched by Wennberg and Gittelsohn 13 , and has since become the central empirical argument for the inefficiency of the health care system in the United States. Because geography places constraints on the patient-sharing relationships of physicians, a principled way to assess the impact of distance on the intensity of connections in these networks might lead to a more complete examination of the sources of variability in the provision of healthcare. Although we do not pursue this application here, the methods we introduce could also be used to address the role of geography in healthcare delivery.
Because traditional surveys are resource intensive and scale poorly, mobile phone data, or more specifically call detail records (CDRs), have emerged as an alternative for inferring the structure of underlying interpersonal networks.
Results
Data. We aggregated the dataset in two ways. First, we aggregated the daily call counts over the 3-month period, resulting in a single call count for each distinct pair of users. We distinguish between the caller and the receiver; hence, the count for each pair is directed. Second, we aggregated the data from the level of individuals to the level of counties; the resulting dataset describes communication intensity for calls among the counties. There were records for a total of 2,511,035 users; 359,759 of them resided in the largest county and 136 in the smallest. The number of calls from one county to another ranged from 0 to 266,199, with 21,016,548 calls in total. There were 2,646 distinct zip codes nested within 427 counties. The geographical location of each county was calculated by first identifying the latitude and longitude of each zip code centroid and then taking the mean of these coordinates over all zip codes nested within a given county. For each county we thus obtained the number of resident users, and for each pair of counties we obtained the spatial distance between them and the number of calls made and received by users in those counties over the 3-month period. As discussed in the section Computational complexity, we reduce the computational burden by selecting a subset of data that arose from the 65 counties with the greatest numbers of users; in this subset, the number of users per county ranged from 7,879 to 359,759. The corresponding call counts between pairs of counties ranged from 2 to 266,226. Multiple calls between any pair of users were included as one number in the call count. Figure 1 demonstrates the decay in intensity with distance as well as the distribution of the number of calls; the log-transformed call numbers appear to be roughly normal in distribution.
The distance is calculated at a coarser level (county) rather than at the zipcode level to protect user privacy; call counts between zipcodes might reveal user identity, especially between those for which the number of users and calls is small. We also note that although our analysis is of the locations of calls (not residences of callers), using a larger geographical unit will make these more likely to be the same, and perhaps thereby add to the interpretability of the analyses. We comment on this issue in the discussion.
Gravity model and our extension. Analyses of the data described above are based on the gravity model.
Adapting the notation from 26 , the gravity model specifies the communication intensity G ij from source location i to destination location j as proportional to the product of the populations of the two locations and decaying with their separation as a power of distance; here K is the constant of proportionality, m i is the population of the source location i, n j is the population of the destination location j, and d ij is the distance between source i and destination j.
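On the log scale this power-law form becomes a linear regression of log intensity on log populations and log distance. The sketch below fits that form by ordinary least squares; the exponent names (alpha, beta, gamma) and the estimation method are illustrative assumptions rather than choices taken from the paper.

```python
import numpy as np

def fit_gravity(calls, m_src, n_dst, dist):
    """Fit log G_ij = log K + alpha*log m_i + beta*log n_j - gamma*log d_ij
    by least squares over all ordered pairs with a positive call count."""
    calls, m_src, n_dst, dist = map(np.asarray, (calls, m_src, n_dst, dist))
    keep = calls > 0                      # log is undefined for zero counts
    y = np.log(calls[keep])
    X = np.column_stack([np.ones(keep.sum()), np.log(m_src[keep]),
                         np.log(n_dst[keep]), np.log(dist[keep])])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    log_k, alpha, beta, neg_gamma = coef
    return log_k, alpha, beta, -neg_gamma
```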
A related article 25 provided an extension to this model in which the transformed intensity is regressed on log population sizes and a piecewise-linear function of log distance. In this extension, n i and n j are the numbers of users in counties i and j; d ij is the distance between the two in kilometers; Y ij = g(G ij ), where g(·) is a transformation function (in the gravity model, g(·) = log(·)); µ is the intercept; θ i represents the location of the change point, measured on the logarithmic scale, for communication initiated from location i; β 3,i represents the distance effect before the change point θ i ; β 4,i specifies the difference in the distance effect before and after the change point; and S is the number of locations under consideration. When β 4,i = 0, the difference is 0, i.e., the rate of decay does not change over the observed range. We denote the size of the population at location i as n i and refer to the model with β 4,i as the full model and the model that sets β 4,i to 0 as the reduced model. The change-point term is multiplied by an indicator function: it takes the value 0 before the change point θ i and the value d ij − θ i after the change point. We assume that ε ij are i.i.d. N(0, σ²). This formulation provides a straightforward way to compare the two nested models with regard to the distance effect; the reduced model has the constraint β 4,i = 0. In this formulation, model selection only involves variable selection; we perform the latter using LASSO 31 . We also estimate θ i and quantify its uncertainty as described in Methods below. We note that the above formulation assumes that the full and nested models share the same intercept and population size effects, an assumption that might not hold in practice. To address this concern, we consider two distinct settings: case I, which refers to the setting where the assumption holds, and case II, where it does not. For the latter, we extend the model by allowing different intercepts and population size effects for models with and without change points. In Methods, we describe how inference on this model is achieved.
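A minimal sketch of the change-point covariates for one (i, j) pair is shown below; the exact layout of the design matrix used in the paper is not reproduced in the text, so the column ordering here is an assumption made for illustration.

```python
import numpy as np

def change_point_row(log_ni, log_nj, log_dij, theta_i):
    """Covariates for one (i, j) pair: log population sizes, log distance,
    and the hinge term (log d_ij - theta_i) * I(log d_ij > theta_i), whose
    coefficient is beta_{4,i}; setting that coefficient to zero recovers
    the reduced model with a single distance slope."""
    hinge = max(log_dij - theta_i, 0.0)
    return np.array([log_ni, log_nj, log_dij, hinge])
```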
Analysis of call records data
As illustrated by the scatter plot in Fig. 1, the relationship between the natural log of call counts and the natural log of geographical distances appears to be linear both before and after the break point. We also note that Fig. 1 is consistent with our assumptions of continuous calling intensity and normality of the natural log of the number of calls. We used the preliminary binary assignments of change points, based on BIC in a simple linear regression, to assess whether there is variability across counties in intercepts and population size effects. Both models with only main effects (indicator variable of group assignments, log population sizes, log distance before/after the change point) and those with main effects and interaction terms showed evidence (p value < 0.05) of such variability. Hence we applied the method described below (in the Simulation study section) for the analysis of the cell phone data. The variability in intercepts and population size effects holds both for the general population from all 427 counties and for the user subpopulation we described above.
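The preliminary BIC-based assignment can be sketched as a grid search over candidate change points for each source location; the grid and the Gaussian BIC formula used below are illustrative assumptions, not the exact procedure from the paper.

```python
import numpy as np

def gaussian_bic(y, X):
    """BIC of a Gaussian linear model fit by least squares."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

def bic_change_point(y, logd, X_base, grid):
    """Compare the reduced model (columns of X_base only) with the best
    full model over a grid of candidate change points; returns a flag for
    the presence of a change point and the best-fitting theta."""
    bic_reduced = gaussian_bic(y, X_base)
    scored = [(gaussian_bic(y, np.column_stack([X_base,
                                                np.maximum(logd - th, 0.0)])), th)
              for th in grid]
    bic_full, theta_hat = min(scored)
    return bic_full < bic_reduced, theta_hat
```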
In the analysis of the call records (Figs. 2 and 3), we note that the slopes for source locations in the northeast appear to be less steep, and that slopes near the capital city, where the population is dense, are more likely to have change points. No such patterns were observed for the slopes of other locations, either before or after the change points. Model estimates revealed that locations with no change point tended to be in the north, while those with change points were concentrated in the south around the capital area. Regarding convergence diagnostics, Fig. 4 shows PSRF 2 approaching 1 very quickly and PSRF 1 fluctuating below 1.5, which is acceptable.
Discussion
To analyze the decline in communication intensity with geographical distance, we extended the gravity model by allowing for change points in this relationship. We addressed the issue of the existence of change points for each source location and quantified associated uncertainty using a Bayesian model. We also provided estimates of the slopes before and after each change point. We investigated the geographical pattern of the existence of change points and noted differences in these patterns between rural and urban areas.
We apply our method to an anonymized dataset of call detail records, using the number of mobile phone calls as the measure of communication intensity between a pair of counties. The outcomes are log-transformed counts; the regression model we specify treats the transformed outcomes as continuous, a choice that is most appropriate when the number of calls between two locations is large (Fig. 1). In settings with 0 or very small counts, one could consider alternative models (e.g., negative binomial) or the addition of an arbitrary small positive number to 0, although the latter approach can add bias 32,33 . In such a setting, a negative binomial model might be a better fit, though the interpretation of the parameters is less straightforward. Using Bayesian methods when the data are assumed to be negative binomial distributed requires non-standard approaches even without the inclusion of change points in the models. Some research has provided useful tools for sequentially updating the parameters using a Gibbs sampler by augmenting the posterior distribution with auxiliary parameters [34][35][36] . When the number of counts is large, the negative binomial approach may not be computationally feasible; fitting negative binomial outcomes with the Bayesian LASSO needs further investigation. One possible direction is to extend the methods based on the conditional normal distribution 36 by transforming the variance matrix so that a normal-distribution-based LASSO method can be employed.
Another extension of our method would allow for aggregation of results across different subsamples; currently the number of locations we can analyze is limited by computational capacity. Developing a method to obtain consistent results from different overlapping sets of nodes, perhaps in a meta-analysis framework, would alleviate the computational concerns, but is challenging. Some potentially useful approaches have been proposed [37][38][39][40] . In particular, stability selection 41 may be used to assess the properties of the meta-analytic results. An example of the use of LASSO in analyses that combine across subsamples arose from analyses intended to discover adverse drug reactions 42 . Another potentially useful approach is the use of a path of partial posteriors 43 . In this approach, the resampling procedure resembles the bootstrap, but with smaller resampling sizes. Because standard bootstrapping of the LASSO estimator of the regression parameter for variance inference is known to yield inconsistent estimates 44,45 , modified bootstrapping must be used 46 . Nonetheless, Bayesian LASSO procedures provide straightforward and valid estimates of standard errors.
The findings from our analysis of mobile phone communication intensity illustrate how such information might be used. For example, should such communication networks prove to be accurate proxies for contact networks, such analyses might help guide the design of cluster randomized trials for infectious disease. Randomized trials ideally enroll participants in a way that minimizes the extent to which the treatment assignment of one subject affects the outcome of another. For interventions in which such interference occurs at the individual but not the cluster level (e.g., through contacts among randomized subjects), cluster randomization can be useful 47 . Clusters may be comprised of participants in the same geographical location, institution (e.g., school) or administrative unit (village). Cell phone data could potentially aid in the identification of appropriate clusters by providing information about the probability of interference. When mixing across clusters cannot be eliminated, identification of treatment effects requires modeling of the mixing process 48 . The impact of interference across randomized units on the power of a clinical trial to detect effects of an intervention in preventing the spread of infectious disease has been investigated 49,50 . As geographical distance is likely to affect contact networks, knowing the relationship between communication and distance may be useful not only for the identification of clusters, but also for aiding in the development of appropriate mixing models.
Methods
To estimate the parameter of interest, θ i , and quantify its uncertainty, we employ a Metropolis-Hastings algorithm within a Bayesian framework. We consider a Metropolis sampling block for θ i and a Bayesian LASSO block dealing with β 4,i . To allow different intercepts and population size effects for models with and without change points, we employ a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. To implement it, we choose the RJMCMC option in the blasso function in the R package monomvn. We use the default non-informative priors for the unknown parameters in both the simulation and the data analysis. This approach allows for statistical inference using the Bayesian LASSO. RJMCMC is a general version of the Metropolis-Hastings algorithm 51 , which allows transitions between models of different dimensions.
Metropolis block and Bayesian LASSO.
Case I: Assuming the same intercept and population size effects across all source locations. With the Bayesian LASSO, the model can be written as Y = µ1 + Xβ + ε in matrix notation. µ is not included in the Bayesian LASSO penalty term 52 ; 1 is the vector of 1s; X is the model matrix consisting of logarithmic population sizes and distances, and β is the vector of βs.
In general, LASSO 31 solves an optimization problem subject to a given bound on the L 1 norm of the parameter vector, which is equivalent to minimizing ||Ỹ − Xβ||² + λ Σ j |β j |, where Ỹ = Y − µ1 is the centered outcome vector and p is the number of parameters after excluding the intercept. In the Bayesian setting, the solution to Eq. (7) provides the posterior mode estimates when the β j have i.i.d. double exponential priors. Conditional double exponential priors are used in the formulation to avoid multiple modes 52 . They can be expressed hierarchically as Y | µ, X, β, σ² ~ N(µ1 + Xβ, σ²I), with β j | σ², τ j ² ~ N(0, σ² τ j ²) and τ j ² ~ Exp(λ²/2) independently for j = 1, …, p. The entire sampling procedure is available using the function blasso in the R package monomvn with the option for RJMCMC specified as False. To incorporate a Metropolis block for change point estimation, we alternate between the Metropolis and Bayesian LASSO blocks. The validity of this approach is established by regarding it as two components of a Gibbs sampling algorithm 53 . In summary, conditional on the change points, our inferential problem becomes one of variable selection; conditional on the other parameters, change point sampling is a straightforward application of a Metropolis algorithm.
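The conditional double-exponential prior can be simulated through its normal scale-mixture representation. The sketch below follows the standard hierarchy (tau_j^2 exponential, beta_j conditionally normal); any correspondence with the exact hyperparameter defaults of blasso is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_lasso_prior(lam, sigma2, p, n_draws=100_000):
    """Draw beta_j from tau_j^2 ~ Exp(rate = lam^2 / 2) and
    beta_j | tau_j^2 ~ N(0, sigma2 * tau_j^2); marginally each beta_j then
    follows the (conditional) double-exponential prior described above."""
    tau2 = rng.exponential(scale=2.0 / lam**2, size=(n_draws, p))
    return rng.normal(loc=0.0, scale=np.sqrt(sigma2 * tau2))
```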
Thus, after obtaining the initial values µ (0) , β (0) , θ (0) and σ 2(0) , we proceed as follows (a sketch of the change-point update in step 1 is given after this list):
1. At iteration t, for each source location i, update the change point θ i (t+1) using a Metropolis algorithm with a normal proposal N(θ i (t) , σ θ ²). The range of θ i is determined empirically from the data, i.e., the posterior likelihood of θ i includes an indicator-function term that is 0 if the proposed θ i (t+1) lies outside the observed empirical log-distance range, thereby assuring that any out-of-range proposal will be rejected.
2. For each location i, if fewer than 5% of the data points in the subset Y i fall on either side of θ i (t+1) , we consider the change point to be on the boundary, set β 4,i (t+1) = 0, and remove it from the model in the next estimation step. We denote the number of locations belonging to the boundary set as b (t+1) .
3. Create the corresponding S(S − 1) × (2 + 2S − b (t+1) ) covariate matrix (the intercept column is not included) based on θ (t+1) . Together with the data, β (t) (after the entries with β 4,i (t+1) = 0 are removed), σ 2(t) and the remaining hyperparameters from iteration t, input the covariate matrix into the blasso function for h iterations (2 or more). The output intercept is µ (t+1) . From the output we also obtain β (t+1) (with the entries β 4,i (t+1) = 0 put back), σ 2(t+1) and the updated hyperparameters.
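The change-point update in step 1 can be sketched as a single Metropolis step per source location. The Gaussian working likelihood below absorbs the intercept and population-size terms into a fixed offset for brevity, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lik(theta, y, logd, offset, beta3, beta4, sigma2):
    """Gaussian log-likelihood of the piecewise-linear distance model for
    one source location; 'offset' stands in for the intercept and
    population-size terms."""
    hinge = np.where(logd > theta, logd - theta, 0.0)
    resid = y - (offset + beta3 * logd + beta4 * hinge)
    return -0.5 * np.sum(resid ** 2) / sigma2

def update_theta(theta, y, logd, offset, beta3, beta4, sigma2, prop_sd):
    """One Metropolis update with a normal proposal; proposals outside the
    observed log-distance range are rejected outright, as described above."""
    prop = rng.normal(theta, prop_sd)
    if not (logd.min() < prop < logd.max()):
        return theta
    log_ratio = (log_lik(prop, y, logd, offset, beta3, beta4, sigma2)
                 - log_lik(theta, y, logd, offset, beta3, beta4, sigma2))
    return prop if np.log(rng.uniform()) < log_ratio else theta
```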
Case II: Allowing different intercepts and population size effects for models with and without change points.
When there is evidence of the presence of change points, we estimate these parameters separately in two different models. In this case, estimates of intercepts and population size effects depend on the set of source locations whose data contribute to the estimation in any given iteration. We denote the mean model as η (t) for iteration t to maintain consistency with the notation we introduced earlier.
As mentioned above, estimation makes use of the Reversible Jump MCMC option in the blasso function. In our setting, different models imply different specifications of the zeros in β 4 (t) and are characterized by η (t) . RJMCMC is a general version of the Metropolis-Hastings algorithm 51 , which allows transitions between different states or models of different dimensions. A thorough review of RJMCMC, with more recent comments, can be found in a review article 54 .
Use of RJMCMC yields the following sampling scheme for the model Y | µ, X, β, σ² ~ N(µ1 + Xβ, σ²I):
1. At iteration t, update the change point θ i (t+1) for each source location i as in case I; if θ i (t+1) lies on the boundary (fewer than 5% of the data points on either side), set β 4,i (t+1) = 0 and remove it from the model in the next estimation step.
2. Conditional on θ (t+1) , create the S(S − 1) × (5 + 2S − b (t+1) ) covariate matrix (the intercept column is not included). Data from each source location contribute to their own group's estimation of the intercept and population size effects, which depends on η i (t) . All data and parameter values from the previous iteration t (including σ 2(t) and the remaining hyperparameters) are used in the blasso function with RJMCMC for 3 iterations; 3 is the minimum number of iterations needed to avoid the situation in which zeros from the previous iteration are carried forward.
3. From Step 2 we obtain the updated β (t+1) , σ 2(t+1) , µ (t+1) and the updated hyperparameters, and then update η (t+1) .
Diagnostics for assessment of convergence. The usual diagnostic framework for Bayesian LASSO [55][56][57] includes trace plots for different chains and calculation of the Potential Scale Reduction Factor (PSRF). Diagnostics for RJMCMC can be developed by extending that framework to include within-model and between-model variations in the parameters.
We make use of Castelloe and Zimmerman 58 , which defines two PSRFs for this assessment. For a chosen parameter, PSRF 1 is the ratio between the total variation V and the variation within chains W c ; PSRF 2 is the ratio between the variation within models W m and the variation within models and chains W m W c . These quantities are defined in terms of θ r cm , the rth draw of θ in chain c and model m; the mean of θ across all models and chains; the mean of θ within chain c across all models in that chain; the mean of θ within model m across all chains; and the mean of θ within chain c and model m. R cm is the number of draws of θ in chain c and model m, and C and M are the numbers of chains and distinct models, respectively. We follow the strategy provided by Castelloe and Zimmerman 58 to assess convergence and, for simplicity, illustrate this approach by considering a scalar. We choose σ², the variance of the error terms, for this illustration, as its interpretation remains the same across the models. Each chain is divided into batches of equal length. A sequence of PSRF 1 and PSRF 2 values is calculated for each batch. A desirable result is that the two quantities move toward 1 as the iterations proceed. In the simulation study below, we illustrate the use of diagnostic graphs for evaluating convergence; further details on this subject can be found in Brooks and Giudici 59 .
Interpretation. Under the assumption that the intercept and population size effects are identical across source locations, we obtain a sample of β 4,i as well as its 95% credible interval rather than an estimate of the probability that each source location has a change point. Intervals that do not cover 0 imply the presence of a change point by providing evidence against the null hypothesis that the difference of the two slopes is zero. Approaches that allow variability in intercepts and population size effects yield a sample of models and their corresponding parameter estimates. For prediction, we make use of the models that RJMCMC has sampled in the estimation process; the estimated mean for predicted outcomes is a weighted average of the predicted outcomes of all models.
Computational complexity. Because of the computational burden of these methods, we consider an analysis of a subset of the data. Simulation studies (Fig. 6 in the Appendix) show that computation time for the Bayesian LASSO function blasso increases sharply as the number of locations increases. We note that the size of the covariate matrix increases as O(S³), where S specifies the number of locations. It has been shown that for the least angle regression formulation of the problem, the computational complexity is O(m³ + m²n) 60 , where m is the number of features and n is the number of outcomes. In our setting, the situation is even more challenging in that the number of outcomes grows quadratically with S, which renders the overall computational complexity O(S⁴).
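Returning to the convergence diagnostics described above, the sketch below computes batch-wise PSRF 1 and PSRF 2 for a scalar parameter such as σ². The exact variance-component formulas of Castelloe and Zimmerman are not reproduced in the text, so the simple pooled empirical variances used here are an assumption made purely for illustration.

```python
import numpy as np

def psrf_pair(draws, chain_id, model_id):
    """Approximate PSRF_1 = V / W_c and PSRF_2 = W_m / W_mW_c, with the four
    components taken as empirical variances pooled over the indicated
    groupings (an assumed reading of the definitions referenced above)."""
    draws = np.asarray(draws, dtype=float)
    chain_id, model_id = np.asarray(chain_id), np.asarray(model_id)
    V = draws.var()
    W_c = np.mean([draws[chain_id == c].var() for c in np.unique(chain_id)])
    W_m = np.mean([draws[model_id == m].var() for m in np.unique(model_id)])
    cells = [draws[(chain_id == c) & (model_id == m)]
             for c in np.unique(chain_id) for m in np.unique(model_id)]
    W_mc = np.mean([cell.var() for cell in cells if cell.size > 1])
    return V / W_c, W_m / W_mc
```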
Simulation study. We conducted the following simulations to assess the performance of our models compared with naïve approaches, as well as to check the effect of the tuning parameter σ θ ². The values of the parameters in the data generation process were selected to be the estimates from the preliminary data analysis using σ θ ² = 0.03. The observed geographical distances between counties were used. We assessed the performance of the gravity model, the naïve fit based on BIC and grid search, and the Bayesian LASSO model on scenarios with low, medium and high error variances. The diagnostic graphs in the Appendix show that convergence was generally achieved. We assessed the model fit and the effect of the tuning parameter based on the prediction error (PE), defined as the average squared difference between observed and fitted values, PE(L) = (1/M) Σ (y new − ŷ new )², where L is the model, M is the number of data points, y new is the observed outcome in the test dataset, and ŷ new is the fitted value from the model estimated on the old dataset.
One hundred new datasets were generated using the same covariates and parameters for each variance category. The findings are shown in Table 1.
As expected, estimates based on both BIC and the Bayesian LASSO performed better than those of the gravity model with respect to prediction error at low, medium, and high error variances. The choice of tuning parameter had little effect; the use of 0.2 in the data analysis appears reasonable, as this choice leads to a mean acceptance rate for the Metropolis algorithm on change points in the range of 20-25% 57 , as shown in Table 2. The 95% credible interval coverages for change points, as shown in Fig. 5 and Table 3, also reached high values at tuning parameter 0.2. The crude estimates based on BIC and the Bayesian LASSO estimates are comparable, as demonstrated in Fig. 5, which shows the two sets of estimates to be similar. An advantage of the latter, however, is its ability to provide interval estimates for the change points and its smaller number of required parameters; Fig. 5 provides the 95% credible intervals. These results imply that predictive power was not reduced by the estimation of the locations of change points. The Bayesian LASSO does require greater computation time: computation for 15,000 iterations takes around 9-10 h, whereas the BIC approach requires only a few minutes. For further information about runtime from the simulation studies, see Fig. 6.
Data availability
The data that support the findings of this study are available from Telenor and were obtained for this research by Dr. Onnela. Restrictions apply to the availability of these data, which are therefore not publicly available.
Appendix: Discussion of model choices
In addition to the gravity model, other models for studying the impact of spatial distance on communication intensity have been proposed, such as the radiation model 25 , which predicts commuting flux between locations, and the rank-based friendship model 30 , which ranks potential friendships based on the geographical distance between individuals. Both models reduce to Eq. (4) with certain constraints on their parameters or under certain assumptions. The radiation model 25 specifies the average commuting or mobility flux from location i to j as T ij = T i m i n j / [(m i + s ij )(m i + n j + s ij )] (for simplicity, we denote the average flux as T ij to keep the notation consistent), where T i = Σ j≠i T ij is the total number of commuters from i, and s ij is the population living in the circle centered at the source with a radius of r ij (not including m i ). Adopting this notation and taking the logarithm of this expression yields Eq. (13); we note that Eq. (13) reduces to Eq. (4) with α + β = 1 and γ = 4 when the population is uniformly distributed such that m = n and s ij ≈ m i r ij ². The model is mechanistic and has no parameters to fit. The rank-based friendship model 30 is formulated as follows. Let u and v be two individuals, and define rank u (v) = |{w : d(u, w) < d(u, v)}|, where d(u, w) is the distance between individual u and individual w. The probability of u and v being friends is modeled as proportional to 1/rank u (v). As rank u (v) ≈ d(u, v)² when the population is uniformly distributed, Eq. (14) reduces to Eq. (4) with m = n = 1 and γ = 2.
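For reference, the two alternative models can be evaluated directly from the quantities defined above; the sketch below uses the standard radiation-model flux and a friendship weight proportional to inverse rank, which is how these models are usually stated rather than a reproduction of the paper's own equations.

```python
import numpy as np

def radiation_flux(T_i, m_i, n_j, s_ij):
    """Radiation model: T_ij = T_i * m_i * n_j /
    ((m_i + s_ij) * (m_i + n_j + s_ij)), with no free parameters to fit."""
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

def rank_based_weight(dists_from_u, d_uv):
    """Rank-based friendship model: weight proportional to 1 / rank_u(v),
    where rank_u(v) counts the individuals strictly closer to u than v is."""
    rank = int(np.sum(np.asarray(dists_from_u) < d_uv))
    return 1.0 / max(rank, 1)
```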
Both the gravity and radiation models are based on strict assumptions about the underlying mechanism, which are hard to validate. The gravity model, which uses the same parameters for each pair of locations, implicitly assumes a homogeneous effect of distance on the intensity function. The radiation model addresses this issue by modeling the intrinsic heterogeneity of the geographical distribution of the population by incorporating s ij in the model. However, owing to its strict assumptions and 'parameter-free' property, it allows little room for other factors. The rank-based model deals with the heterogeneity by substituting rank for distance, and the rank appears to play a role similar to that of s ij in the radiation model. Thus the rank function in Eq. (14) can be regarded as an implicit function of distance and population distribution. We can make Eq. (14) parametric by incorporating a parameter for the power of the rank. If the population is uniformly distributed across the area, this is equivalent to the gravity model with parameter γ for the distance r ij .
We note here that even though the rank-based approach sheds some light on the question of interest, moving from the individual level to the zip code or county level requires a completely different set of assumptions. Therefore, a rank-based gravity model cannot be seen as a simple extension of the rank-based friendship model.
"year": 2020,
"sha1": "f40ce3bcbda8909b1ade38483de9b800c1d5fa47",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-68583-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "73530ec98b8d2b6bb0f0d695284a623281a56a8c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine",
"Computer Science"
]
} |
8669929 | pes2o/s2orc | v3-fos-license | Meal Pattern of Male Rats Maintained on Amino Acid Supplemented Diets: The Effect of Tryptophan, Lysine, Arginine, Proline and Threonine
The macronutrient composition of the diet has been shown to affect food intake, with proteins having distinct effects. The present study investigated the effect of diet supplementation with individual amino acids (tryptophan, lysine, arginine, proline and threonine) on meal pattern among male rats. Meal pattern and body weight were monitored for two weeks. Proline and threonine had minimal effects on meal pattern, while the most pronounced changes were observed in the tryptophan group. Both tryptophan and lysine decreased overall food intake, which was translated into a reduction in body weight. The reduced food intake of the tryptophan group was associated with an increase in meal size, intermeal intervals (IMI) and meal time and a decrease in meal number. The decrease in the food intake of the lysine group was associated with a reduction in both IMI and meal number, and this was accompanied by an increase in meal time. Arginine increased meal number, while decreasing IMI. Proline and threonine had a minimal effect on meal pattern. Lysine seems to increase satiety, and arginine seems to decrease it, while tryptophan seems to increase satiety and decrease satiation. Accordingly, changes in meal patterns are associated with the type of amino acid added to the diet.
Introduction
Obesity is considered a public health epidemic and is associated with co-morbidities, including cardiovascular disease, diabetes and some forms of cancer [1,2]. Obesity results mainly from an inadequate balance between food consumption, which is regulated by an interaction between physiological and environmental factors, and energy expenditure [3][4][5]. Moreover, daily food intake is the outcome of eating behaviors (meal patterns) governed by several states that determine the meal size, number, time and intermeal interval: "hunger" (the physiological signal prompting the brain to initiate food seeking), "satiation" (the processes leading to the interruption of an eating episode) and "satiety" (the non-hunger state between two meals). These states are influenced by physiological and non-physiological factors [6,7]. In addition, understanding changes in meal pattern can be a useful tool for clinicians or nutritionists seeking to modify the diet of individuals or a population [7].
It has been reported that the macronutrient composition of the diet can significantly alter the regulation of food intake, with protein being the most satiating [8][9][10][11][12]. According to Poppitt et al. [13] and Stubbs et al. [14], protein has both short-term and long-term satiating effects in humans. Bensaïd et al. [15] reported that in rats, an intra-oral protein load administered at different concentrations produced a greater inhibition of food intake than an isovolumetric and isocaloric carbohydrate load. This could be explained by several mechanisms proposed to operate at the peripheral and central levels, including alterations in gut hormone release through the suppression of ghrelin and the elevation of PYY, CCK and GLP-1, causing a reduction in appetite and food intake [12,15,16]. In addition, protein has been hypothesized to induce satiety through its capacity to stimulate diet-induced thermogenesis, which is associated with an increase in body temperature, metabolic rate and hepatic ATP production [17]. The satiating effect of proteins has been reported to differ according to protein source [18][19][20][21][22][23]. In lean men, the satiating effect of fish protein was reported to be higher than that of beef or chicken [18], and the satiating power of gelatin (an incomplete protein) was higher than that of casein [20]. However, the difference in satiating power between various protein sources may not be translated into a variation in body weight. This is indicated by the similarity in the body weight of rats maintained on high-protein whey and soy diets, where the satiety of the whey protein-based diet was higher than that of soy [23]. Although both high-protein diets restricted weight gain and reduced fat accumulation, each had its distinct mechanism: while the high whey protein group showed a decrease in food intake, the soy protein subjects exhibited an increase in fat oxidation.
Although the effects of protein sources and of individual and/or combined amino acid supplements on the profile of ingested and plasma amino acids have been widely studied, their impact on meal pattern is not yet fully understood. We have previously investigated the impact of certain amino acids [24,25], and the present work focuses on lysine, tryptophan, arginine, proline and threonine. Lysine ingestion was reported to increase postprandial glucose clearance [26] and to stimulate the secretion of the gut hormones CCK and GLP-1 [27]. Tryptophan is needed for the synthesis of serotonin, a neurotransmitter known to be involved in appetite regulation. Arginine is a precursor of nitric oxide and of proline, and an inducer of growth hormone release [28,29]. Threonine was reported to improve food intake and weight gain [30]. Moreover, it has been hypothesized that the central nervous system controls food intake by detecting dietary protein content and quality through the sensing of specific circulating amino acids, such as lysine [31]. The present study aims at investigating the influence of individual amino acid-supplemented diets on the meal pattern of male rats.
Animal Housing
Adult male Sprague-Dawley rats (Animal House, American University of Beirut, Lebanon), which are known to have a good consistency in meal pattern [32], were housed initially in individual wire-bottomed cages in a room with controlled temperature (22 ± 1 °C) and under 12:12 h light-dark cycles with lights on at 7:00 a.m. The rats were moved to feed recording equipment (Model 80350 series, Campden Instruments limited, Lafayette, IN, USA), each residing in a separate chamber. Rats were allowed a four-day adaptation period while being fed a semi-synthetic control diet ad libitum [33] (Table 1) with a gross energy of 18.2 kJ/g distributed as 56%, 21% and 23% from carbohydrate, protein and fat, respectively. The amino acid composition of casein (g/100 g of protein) is as follows: alanine (2.6), arginine (3.6), aspartic acid (6.
Experimental Protocol
The study was divided into two experiments; each experiment included a control group in which the rats were maintained on the control diet. In the experimental groups, rats were maintained on the same control diet supplemented with 5% of the specific amino acid. This translates to about 1 g per day, or 3.0 g/kg body weight per day, assuming an average daily dietary intake of about 20 g and a body weight of 330 g. Body weights and meal patterns were monitored over two weeks. Experiment 1: The effect of diet supplementation with 5% tryptophan or lysine on meal pattern was investigated. Thirty rats were divided into 3 equal groups (n = 10): control, tryptophan and lysine. Experiment 2: The effect of diet supplementation with 5% arginine, proline or threonine on meal pattern was investigated. Thirty rats were divided into 4 groups: control (n = 6), arginine (n = 8), proline (n = 8) and threonine (n = 8).
Feeding Pattern
The feed recording machine is a microstructural feeding analysis system designed for rats (Model 80350 series, Campden Instruments Ltd., Lafayette, IN, USA) equipped with a computer-based data acquisition system capable of monitoring feeding behavior in rodents with high sensitivity (0.1-g resolution). The system consists of 16 individual chambers with dimensions of 285 mm × 210 mm × 200 mm (L × W × H). The cage bottom is made of 2-mm rods separated by a 7-mm gap. The chambers are well ventilated to allow for air circulation. A hopper attached to the back of each cage holds the food. The hopper is supported by a weighing balance, which measures food weight changes, and an infrared beam that detects the animal while feeding. Time and hopper weight are logged into the computer every time the animal begins and ends feeding. Meal patterns were recorded, and the results were collected as meal number, meal size (g), meal time (s), intermeal interval (s) and feeding rate (mg/s). A meal was defined as the ingestion of food for at least 13 s with a quantity of at least 0.3 g [34]. Meals were considered distinct if the intermeal interval was greater than 10 min [35]. Food intake was defined as the difference in food weight over 24 h and included any intake outside the defined meals.
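The following is a minimal Python sketch of how feeding bouts logged by such a hopper system could be grouped into meals using the criteria stated above (at least 13 s, at least 0.3 g, bouts separated by more than 10 min counted as separate meals); the event structure and field names are illustrative assumptions, not the instrument's actual output format.

from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Bout:
    start_s: float   # time the animal began feeding (s)
    end_s: float     # time the animal stopped feeding (s)
    grams: float     # hopper weight change during the bout (g)

MIN_MEAL_DURATION_S = 13.0     # a meal lasts at least 13 s
MIN_MEAL_SIZE_G = 0.3          # and weighs at least 0.3 g
MAX_INTRAMEAL_GAP_S = 600.0    # bouts more than 10 min apart belong to different meals

def bouts_to_meals(bouts: List[Bout]) -> List[Dict[str, float]]:
    # Group successive bouts into candidate meals, splitting whenever the gap exceeds 10 min.
    groups, current = [], []
    for bout in sorted(bouts, key=lambda b: b.start_s):
        if current and bout.start_s - current[-1].end_s > MAX_INTRAMEAL_GAP_S:
            groups.append(current)
            current = []
        current.append(bout)
    if current:
        groups.append(current)

    # Keep only groups that satisfy the minimum duration and size criteria.
    meals = []
    for group in groups:
        size = sum(b.grams for b in group)
        duration = group[-1].end_s - group[0].start_s
        if duration >= MIN_MEAL_DURATION_S and size >= MIN_MEAL_SIZE_G:
            meals.append({"size_g": size, "time_s": duration,
                          "rate_mg_per_s": 1000.0 * size / duration})
    return meals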
Statistics
Data are expressed as means ± SEM of all values. Data were analyzed using the Statistical Package for the Social Sciences (SPSS, version 16, IBM, NY, USA) by one-way analysis of variance (ANOVA), and specific comparisons were made using Tukey's post hoc test. A probability of p < 0.05 was considered statistically significant.
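For illustration, a minimal Python sketch of the same analysis pipeline (one-way ANOVA followed by Tukey's post hoc test) applied to invented example data; the original analysis was run in SPSS, so this is only an analogue.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(120, 10, 10)      # hypothetical weight gain (g) per rat in each group
lysine = rng.normal(40, 10, 10)
tryptophan = rng.normal(40, 10, 10)

f_stat, p_value = stats.f_oneway(control, lysine, tryptophan)   # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, lysine, tryptophan])
groups = ["control"] * 10 + ["lysine"] * 10 + ["tryptophan"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))             # Tukey's post hoc comparisons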
Experiment 1
Body weight and food efficiency (Table 2): The mean initial body weight was similar among the groups. The maintenance of rats on the 5% lysine or tryptophan diets for fourteen days significantly reduced their final body weight as compared to the control group. The weight gain of the control group was about 3 times higher than those of the lysine- and tryptophan-supplemented diet groups. Similarly, the food efficiency of the control group was significantly higher than those of the lysine and tryptophan groups. The final body weight, weight gain and food efficiencies were similar between the lysine and tryptophan groups.
Food intake and feeding rate (Table 3): The total food intake of the tryptophan group was significantly lower than those of both the control and lysine groups. The food intake of the lysine group was also significantly lower than that of the control. The diurnal (%) intake of the lysine group was significantly lower than those of the other groups, while nocturnal (%) food intake was highest in the lysine group, followed by the control and tryptophan groups. In addition, the feeding rates (total, diurnal and nocturnal) of the lysine and tryptophan groups were similar, and these were significantly lower than those of the control group. Meal pattern (Figure 1 and Table 4): The meal size (total, diurnal and nocturnal) of the tryptophan group was significantly higher than those of the other groups. The meal number (total and nocturnal) of the tryptophan group was lower than those of the other groups, and that of the lysine group was lower than that of the control group. The diurnal meal numbers of the tryptophan and lysine groups were similar, and these were lower than that of the control group. The meal time (total, diurnal, nocturnal) of the tryptophan group was lower than those of the lysine and control groups, and that of the lysine group was lower than that of the control group. The total intermeal interval of the tryptophan group was higher than those of the other groups, while that of the lysine group was lower than that of the control group. The diurnal intermeal interval of the lysine group was lower than those of the other groups, while the nocturnal intermeal interval of the tryptophan group was higher than those of the other groups.
Experiment 2
Body weight and food efficiency (Table 5): The mean initial body weight was similar among the groups; the final body weight was also similar among the groups. The weight gain of the threonine group was significantly lower than those of the control and arginine groups, while the food efficiency was similar among the different groups.
Food intake and feeding rate (Table 6): The total food intake of the arginine group was significantly higher than those of the other groups. The diurnal (%) intake was similar among the different groups, while the nocturnal (%) food intake of the threonine group was lower than that of the control and arginine groups. The feeding rate (total, diurnal and nocturnal) was similar among the different groups.
Meal pattern (Table 7 and Figure 2): The meal size (total, diurnal and nocturnal) was similar among the different groups. The total meal number of the arginine group was higher than those of the other groups, and that of the proline group was higher than that of the control group. The diurnal meal number was similar among the different groups. The control and proline groups had similar nocturnal meal numbers, and these were lower than that of the arginine group but higher than that of the threonine group. Meal time (total, diurnal, nocturnal) was similar among the different groups. The total intermeal interval of the arginine group was lower than those of the control and proline groups. The diurnal intermeal interval of the control group was higher than those of the other groups, while the nocturnal intermeal interval was similar among the different groups.
Discussion
This research attempts to shed light on the physiological processes controlling feeding activity, namely satiation and satiety. Satiation is signposted by the meal duration and/or meal size; i.e., an increase in satiation is reflected in a decrease in meal time and/or meal size. On the other hand, satiety is defined by the time between meals and the number of meals; i.e., a decrease in satiety is indicated by an increase in the number of meals and a reduction in intermeal intervals. As such, studying meal patterns (meal size, meal number, intermeal interval, meal time) provides valuable information on the mechanism by which nutrients may influence feeding activity (satiation or satiety) [8]. In the present study, total food intake refers to all food consumed within 24 h and includes any intake outside of the defined meal. The amount of food consumed outside of the meal was very small, and this is not expected to impact the results.
Large variations (up to 50%) in individual amino acids are present between different proteins, and thus, a 5% addition of amino acid was chosen to mimic the dietary variation of proteins among individuals. Such an amount is not expected to cause adverse effects, since healthy animals receiving adequate quantities of all essential nutrients tolerate a considerable dietary disproportion of amino acids without exhibiting adverse effects [36]. Amino acids were reported to affect taste [37], and both taste and flavor aversions are known to decrease the eating rate. The consistency in the feeding rate among the different groups in Experiment 2 indicates that arginine, proline and threonine supplementation did not result in a significant effect on food palatability. However, lysine and tryptophan seem to have impacted the palatability of the diet.
Diet supplementation with lysine (5%) was associated with a reduction in body weight or weight gain, due to a reduction in food intake and efficiency, which is likely to be the result of an increase in diet-induced thermogenesis. Lysine was reported to be a potent anorectic amino acid in rats, and its anorectic activity may relate to its activity in delaying gastric emptying and inducing neuronal activity at the vagal afferent [31]. In contrast, lysine-deficient diets have been reported to decrease food intake, and this has been shown to be reversed by the addition of lysine [38,39]. Thus, both under and over consumption of lysine seem to reduce food intake. While lysine supplementation of subjects at risk of lysine deficiency was found not to affect body weight, it exerted other beneficial effects [40,41]. In humans, lysine ingestion with glucose has been reported to increase postprandial glucose clearance, while insulin was not altered [26]. This may have been behind the reduction in diurnal meal size, which is known to be reduced by insulin [42]. However, the increased satiety in the lysine group may have been related to the excitatory effect of lysine on the secretion of the gut hormones, CCK and GLP-1 [27], which are known to decrease appetite, mainly through a reduction in meal number. In addition, the sustenance of the meal size may have been the consequence of the stimulation of a compensatory mechanism to maintain food intake [43].
Tryptophan supplementation (5%) caused a reduction in body weight or weight gain through a decrease in both food efficiency and intake, which was the result of an increase in satiety and a decrease in satiation. Similar to lysine, tryptophan was found to induce an anorectic effect in rats, and this is believed to be attributed to its activity in delaying gastric emptying and inducing neuronal activity at the area postrema [31]. This mechanism is distinct from the effect of increased brain serotonin (a metabolite of tryptophan) [44,45], which is known to have a negative effect on appetite [45] by increasing satiation or decreasing meal size [46]. Serotonin was reported to block the effects of the appetite-enhancing neurotransmitter NPY in the paraventricular nucleus (PVN) and to have a direct effect on serotonin receptors in the brain, causing a decrease in food intake and an increase in after-meal satiety (a decrease in meal number) [47]; the latter is in line with our findings.
Food intake and meal pattern are partially related to the interactions between serotonin and dopamine in the brain, and the status of these neurotransmitters depends on their brain uptake and the intake of their precursors (tryptophan, phenylalanine-tyrosine), which are known to compete for uptake by the brain. The interaction at the lateral hypothalamus (LH) has been reported to influence meal size, while the interaction at the ventromedial hypothalamus (VMN) affects meal number [46]. Thus, in our study, an interaction at both LH and VMN may have been present as indicated by the observed alteration in both meal size and number. Increased tryptophan intake is likely to reduce phenylalanine-tyrosine brain uptake and, thus, brain dopamine concentration, and this, in turn, would be expected to decrease the meal number. Since reduced brain dopamine is known to be associated with the inability to initiate feeding, this causes a reduction in meal numbers, leading to an increase in intermeal intervals [48]. However, tyrosine supplementation, a precursor of dopamine, has been shown not to affect meal numbers, and this may have been related to its capacity to induce insulin release [24], which is known to increase meal numbers [42]. In addition, the observed reduction in meal numbers is in line with the excitatory effect of tryptophan on the secretion of the gut hormones, CCK and GLP-1, which are known to reduce meal numbers [27]. However, the observed increase in meal size and time is in contrast to the known effect of the peripheral or central injection of serotonin on meal size [49]. Thus, tryptophan involvement in meal pattern seems to operate beyond its role as a precursor of serotonin. This may partially explain the failure of tryptophan supplementation (of a diet containing 2.5 g tryptophan/kg dry matter) with up to a 1-g tryptophan/kg diet to affect the food intake and growth rate of young pigs [50].
Diet supplementation with arginine (5%) caused a slight, but not statistically significant, increase in weight gain and a significant increase in food intake, mainly due to a decrease in satiety, as indicated by increased meal numbers and decreased intermeal intervals. Arginine is a precursor of nitric oxide and an inducer of growth hormone release [28,29], which increases weight gain [51] and adiposity [52,53] in humans. Moderate intakes of arginine have been reported to have anti-obesity effects in rats with diet-induced obesity maintained on moderate doses (0.2% to 1.5% in drinking water) and in humans receiving about 8.3 g/day (~80 mg/kg body weight per day) [54]. On the other hand, dietary arginine supplementation (0.2% and 0.4%) of milk-fed young pigs was reported to increase body weight and weight gain, while food intake was not affected [29]. Moreover, arginine supplementation (1%) of growing-finishing pigs increased body weight gain, and this was associated with an increase in skeletal muscle content and a decrease in carcass fat content [55], while our higher dose increased food intake. Thus, the relation between arginine supplementation, body weight and food intake does not appear to follow a simple linear pattern: low intakes produce an anti-obesity effect, while high intakes stimulate body weight gain and food intake.
Reduced weight gain in the threonine group in the face of normal food efficiency may be explained by the slight reduction in food intake, which reached significance in the nocturnal period. Threonine content ranging from 5.5 to 7.8 g/kg of dietary intake was reported to improve the food intake and weight gain of broiler chickens [30], and optimal growth requires a specific lysine to threonine ratio in pigs [56]. However, both of the above experiments utilized quantities lower than that of our experimental 50 g/kg diet. On the other hand, proline supplementation (5%) had a minimal effect on growth and meal pattern. In line with this, a 90-day maintenance diet supplemented with proline at doses ranging between 0.625% and 5% was reported not to affect the food intake and body weight of rats [57].
Thus, it can be postulated that increased consumption of cereals, which are known to have low lysine content, would favor increased energy intake usually associated with the development of obesity. This seems to be in line with the observed association between increased refined carbohydrate (mainly cereals) consumption and obesity [58]. On the other hand, the consumption of dairy products, a good source of tryptophan, favors a lower energy intake, leading to a decrease in body weight. This postulation is supported by several research findings [59]. Thus, an increased intake of dairy products in combination with a decrease in the consumption of cereals would be expected to have the potential of reducing energy intake.
Conclusions
In conclusion, dietary supplementation (5%) of proline and threonine was associated with a minimal alteration in meal pattern. Lysine reduced food intake mainly due to an increase in satiety; in contrast, arginine supplementation increased food intake due to a decrease in satiety. Tryptophan reduced food intake drastically due to an increase in satiety and a decrease in satiation. | 2016-03-22T00:56:01.885Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "43e882fcaf91be3004d8223c5814087c89d13bf9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/6/7/2509/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43e882fcaf91be3004d8223c5814087c89d13bf9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
263311583 | pes2o/s2orc | v3-fos-license | Online information acquisition affects food risk prevention behaviours: the roles of topic concern, information credibility and risk perception
Background The COVID-19 pandemic has not only brought great challenges to the global health system but also bred numerous rumours about food safety. Food safety issues have once again attracted public attention. Methods The data were drawn from the fifth wave of the first Taiwan Communication Survey database. The respondents were selected via multistage stratified random sampling. The sampling units were townships/districts, villages/neighbourhoods and households. The sample consisted of 2098 respondents. This study first used propensity score matching to analyse the direct impact of online food safety information acquisition on preventive behaviours and then examined the heterogeneous impact of differences in the degree of topic attention through group-wise propensity score matching. Hayes' PROCESS macro (Model 6) was applied to test the mediating effects and the serial mediating effect. Results The results show that an increase in the frequency of the acquisition of online food safety information significantly increases individuals' food risk prevention behaviour. However, only users with high concern about the issue are affected. The food risk prevention behaviour of users with low concern about this issue is not affected by the acquisition of online food safety information. Further analysis shows that risk perception and information credibility both play mediating roles in the impact of online food safety information acquisition on food risk prevention behaviour. Moreover, information credibility and risk perception together form a serial mediating chain. Conclusions Food risk prevention behaviours are an important topic for personal health as well as government management. Our study's findings can provide empirical evidence for risk managers and decision-makers to reevaluate the role of the internet in food risk management.
Introduction
In 1986, the German sociologist Ulrich Beck first identified the "risk society", which describes people's social insecurity and anxiety in industrial civilization. As an old saying in China goes, "Food is the most essential thing for common people, and food safety is the priority". Food safety affects the national economy and people's livelihood. In "China's Comprehensive Well-off Index", food safety was at the top for five consecutive years (2012-2017) among the ten issues of greatest public concern, even higher than topics such as housing prices, medical reform [1], and inflation. In 2018-2020, the topic of food safety remained in the top five [2]. In 2020, COVID-19 brought enormous challenges to the global health system and fuelled countless rumours about food safety [3]. Food safety issues once again attracted great public attention and returned to the top ten issues of greatest public concern [2]. In traditional agricultural society, people were mostly concerned with food and clothing. With the development of new agricultural technology and biotechnology, people's food supply was greatly enriched. However, in the process of social development, risks also followed. Risks can be found in the processes of food production, packaging, preservation, and transportation, such as food packaging bags and food additives. In current society, in which food safety receives unprecedented public attention, individuals should be encouraged to take practical actions to prevent or reduce the health hazards that may be caused by food safety issues.
If individual preventive behaviour is an important measure to address food risks, what factors influence individuals to take relevant actions? Previous health theory models, such as protection motivation theory, subjective expected utility theory, and the health belief model, have focused on perceived threat or fear assessment. These theories hold that the possibility of being affected by a risk or the severity of the impact caused by the risk are important psychological motivations for individuals to adopt health protective behaviours. However, these theoretical models generally overlook the role of communication media in risk events and present only fragments of individuals' responses to risk events. When food risks occur, food risk information often cannot reach the public directly. Instead, intermediaries such as the media are needed for the public to obtain the latest news on the event and understand its progress to understand how to deal with food risks. Therefore, the media play a crucial role in the process of risk information diffusion and the formation of public risk perception [4,5].
In the 1990s, the internet became widely available. Over the following 10 years, smartphones, social media, and streaming media extended and amplified the presence and aggregate functionality of the internet so that it reached the astounding level it has now [6]. According to authoritative survey data, the internet has become the main channel for the Chinese public to obtain information, with 783 million Chinese people mainly consuming news on internet platforms [7], accounting for more than half of the total population in China. More importantly, the changes triggered by internet technology in the field of information production and information dissemination are rapidly reshaping public opinion in today's society. Online public opinion has become the main information platform that affects the public's cognition and attitude towards social phenomena [8]. Thus, it is of great practical and theoretical significance to explore the relationship between the acquisition of online food safety information and food risk preventive behaviour. In previous media effect studies, the selection bias of research samples was a problem that was long ignored. Similarly, little is known about the heterogeneous impact of different degrees of attention to this topic.
To compensate for the shortcomings of previous studies, this study attempts to predict food risk preventive behaviour with a more complete theoretical framework and to examine how individuals cope with food risks. Based on the protective action decision model, this study systematically explores the structural relationships among online food safety information acquisition, risk perception, information credibility, and food risk preventive behaviour. This study utilizes data from the fifth wave of the first Taiwan Communication Survey database of 2098 Taiwanese people to explore the relationship between online food safety information acquisition and food risk preventive behaviour and construct a relatively complete research model with information credibility and risk perception as mediating variables.
Overall, this study has three contributions.First, this study focuses on the predictive effect of online food safety information acquisition on food risk preventive behaviour and uses a propensity score matching method to eliminate systematic bias between the control group and the experimental group in the matched confounding variables [9].This addresses the issue of selectivity, solving the bias in the research results and the "net" effect of independent variables on dependent variables, which makes the research results more credible.Second, compared with the previous literature's neglect of the impact of topic concerns on the behaviour of information acquirers, this study makes a more detailed distinction between individuals' online food safety information concerns to provide empirical evidence of the impact of online food safety information acquisition on food risk preventive behaviour.Third, this study regards information credibility and risk perception as important intermediary variables for transforming online food safety information acquisition into actual risk preventive behaviours.More importantly, this study innovatively proposes that information credibility and risk perception should be regarded as a continuous reaction process because the stimulation effect of online food safety information exposure is not only realized through information credibility or risk perception; it is likely to affect participation behaviour through the transmission and joint effect of the two.Ultimately, the theoretical chain of "information acquisition -information credibility -risk perception -preventive behaviour" is formed, which enriches and expands the protective action decision model.This chain logic relationship has decision-making reference value for emergency management departments to formulate accurate food risk communication strategies and to prevent, control, and eliminate food risks.
Protective action decision model
Various theoretical models of psychological motivation and behaviour decision-making provide useful explanations for how risk communication affects disaster response and individual behaviour. For example, psychodynamic theory provides a psychological research perspective and general direction for individual social behaviour decision-making and behaviour formation. The protection motivation theory (PMT) proposed by Rogers et al. also illustrates the role of psychological regulation in behavioural performance. However, their research is more focused on risk research situations. They treat information about individual characteristics and the external environment as triggers of protective motivation, and risk appraisal and coping appraisal as the intermediate mechanisms through which protective motivation operates. Finally, individuals produce self-protective thoughts and behaviours [10]. Based on previous research models, Lindell and Perry [11] proposed the Protective Action Decision Model (PADM) and integrated the information processing process in 2012 to modify and improve the original model. Hence, the PADM is recognized as a multistage behavioural decision-making theoretical framework.
This theory states that individuals with different characteristics (such as skill use, cognitive ability, and economic resource ability) receive risk information about environmental and social factors from various information channels, which promote public attention and understanding of risk information. This triggers perceptions of risk, stakeholders and protective behaviour. On this basis, behavioural decisions are made, and corresponding protective actions are taken to reduce risks [12]. Risk perception is the perception of the possible occurrence of risk and its consequences. Protective behaviour perception is the perception of the effectiveness and cost of behaviour when taking protective behaviour. Stakeholder perception is the perception of the professionalism, reliability, and responsibility of the information source [13]. Therefore, risk perception and information credibility are important psychological motivations for individuals to receive risk information and influence their risk response behaviour.
The PADM provides the basis for explaining individuals' behavioural decision-making processes in risky situations.It is widely used in research on protective behaviour in natural risk situations such as earthquakes [14], hurricanes [15], volcanoes [16], and floods [17].Individuals' preventive behaviour decision-making processes for food safety risk event situations are highly similar to that in disaster situations.When individuals with subjectivity and relative rationality are exposed to food safety risk information, they actively utilize a variety of information channels and knowledge to assess the credibility of the risk information and form risk awareness and perception to adjust their own food risk prevention behaviour.Therefore, the prevention behaviour model provides a powerful reference for the theoretical framework of this study.However, in today's new media environment with diversified media channels and abundant information content, the public's trust in information source channels has changed.The protective behaviour decision-making model fails to fully consider the impact of information credibility on risk perception when individuals obtain information.What is the role of information credibility in public information acquisition, risk perception and preventive behaviour?Does it affect the causal model of information acquisition, risk perception, and preventive behaviour in crisis and risk communication?
Previous studies have shown that public risk perception is a process of collecting, selecting, understanding and responding to crisis information [18]. Reliable information sources help the public form a correct perception of risk. Public risk perception is an important factor that affects public decision-making for protective behaviour [12]. Information credibility and risk perception not only play an intermediary role between information acquisition and preventive behaviour but also show a chain logic relationship. Therefore, this study attempts to embed the variable of information credibility into the causal model of "information acquisition - risk perception - preventive behaviour" to form the new theoretical chain of "information acquisition - information credibility - risk perception - preventive behaviour". Whether the PADM can address food risk scenarios also needs to be further verified. In addition, considering that each person's attention to food safety issues is different, there may be a heterogeneous impact on the relationship between online food safety information acquisition and food risk preventive behaviour. Therefore, this study bridges the issue of concern and the protective behaviour decision-making model to comprehensively explore the relationship between online food safety information acquisition and food risk preventive behaviour, which further expands and improves upon the PADM.
The expanded PADM helps to select appropriate social cues and resources to enhance risk perception and support protective action decisions for different individuals. It also offers emergency management departments theoretical guidance on how to choose effective information dissemination methods to enhance food safety knowledge and preventive awareness in different populations. Once individuals obtain relevant food safety knowledge, they can take reasonable actions to enhance their own safety when food risks occur. This awareness can also improve the risk response effectiveness of emergency management departments, since individual and institutional responses proceed in parallel. Therefore, the expanded PADM can better guide emergency management departments in developing precise food risk communication strategies for prevention and control, thereby resolving food risks.
Online food safety information acquisition and food risk prevention behaviour
Human behaviour and thoughts are affected by the quantity and quality of available information. Channels of risk information communication play a crucial role in the generation of risk perception and behavioural intention. In fact, most people are not witnesses of risk events. When personal experience is scarce, individuals often obtain risk information through interpersonal communication and media channels. However, the social network of each individual greatly constrains the breadth and depth of interpersonal communication, which leads to limited sources and content of information acquisition. The media has broken through this limitation by spreading diverse information and knowledge to the public on a large scale. The storage, retrieval and reuse functions of the network provide more opportunities for food risk communicators. Therefore, online news media are increasingly playing the role of food safety governance actors as sources of information [19]. In particular, the rapid rise of social media has completely changed the way we communicate, share and obtain information online. Social media has become the main channel for individuals to obtain food safety information [20].
In the past 28 years, the public has witnessed the vigorous development of the internet.Since the commercialization of the internet in 1994, this new technology has expanded rapidly around the world.It transformed the monopoly of traditional media's domination of the release and dissemination of risk information.The internet constructed an unofficial field of risk communication that gives ordinary people and other social institutions the right to speak.Furthermore, it greatly expanded the scope and speed of the dissemination of food safety issues.According to framing theory, the framework of news reports directly affects the public's attitude.When the public is exposed to a specific information framework, their comprehension and cognition of certain phenomena will gradually tend towards the direction of the framework [21], thus changing their actual behaviour.Therefore, if an individual is exposed to relevant food safety information amid food safety events, the individual's preventive behaviour may be triggered.Studies have shown that the public can quickly obtain food safety information through social media, strengthen their risk perception of food safety, and take preventive actions to reduce the risk of food poisoning.[22] When the media reported the occurrence of African swine fever in other countries, it caused consumers in other disease-free regions or countries to worry and reduce their purchase of pork [23].Therefore, in risk studies of food safety issues, it is necessary to pay attention to online food safety information acquisition and explore its role and effect.Hence, we propose the following hypothesis: H1: Online food safety information acquisition has a significant positive impact on food risk preventive behaviours.
Differences in the degree of concern and preventive behaviours for food risk
The modernization of China's society is described as a "compressed modernization" that accelerates the production and reproduction of risks.It also leaves no time for the management of risks [24].Consequently, many social problems occur in the modernization process, which can be described as "risk symbiosis" in the period of social transformation.Among various social risks, food safety issues are closely related to public life, health, and wellbeing and are the risk events that receive the highest degree of public concern.For the general public, attention is a scarce resource.People use their energy to focus on the information they read, which in turn promotes the formation of the public's cognitive structure for food safety issues and guides the construction of public preventive behaviour.Previous studies on individual eye tracking have found that the reading of internet information is more selective than the reading of offline news information.It is easy for individuals to focus on specific news and improve their cognition and comprehension in specific fields [25].Thus, individuals can access and read online food safety information selectively, understand the latest progress of the event, and know how to deal with food risks.However, in cases where insufficient attention is given to food safety information, individuals may skip food safety information and avoid the opportunity to develop preventive behaviours.
Current research shows that the way individuals perceive risk and their level of concern about risk can influence their behaviour [26]. In terms of health behaviour, the degree of public attention to relevant information on social media during the COVID-19 pandemic was positively correlated with preventive behaviour [27]. Furthermore, attention to information on the internet can positively predict individual environmental behaviour [28]. Hence, the degree of attention given to issues affects the decision-making process and adjustment of individuals' actual behaviour. In terms of food risk preventive behaviour, the impact of online food safety information acquisition on users with high attention to issues and users with low attention to issues may be significantly different. Based on the statements above, we propose the following hypothesis: H2: The impact of online food safety information acquisition on food risk preventive behaviours is different with regard to the degree of attention given to the issue.
Mediating effect: information credibility and risk perception
Human behaviour is the result of cognition and motivation [29].According to the PADM, information credibility and risk perception are two important motivations for online food safety information acquisition to affect food risk preventive behaviour.Risk perception refers to an individual's intuitive judgement and subjective feeling about the impact and severity of external objective risks under the situation of limited and uncertain information reserves.The view of the "mediatization of risk" holds that the media play an important role in the process of risk perception [30].On the one hand, the media provide crucial information channels for the public to recognize risks [31], especially when people cannot personally experience risk events and can only understand relevant risk information through the media; the role of media information is self-evident.Mobile apps are a health intervention method, and updated information can greatly improve users' knowledge of diseases and preventive behaviours [32,33].On the other hand, media can influence people's perception of risk because individuals can collect and process relevant data [34].
As an important medium for the public to access risk information, the internet has promoted the redistribution of risk discourse. The vast amount of food risk information on the internet has strengthened the public's "symbolic reality" experience of risk [35]. A survey of 688 South Koreans found that personal exposure to cancer information in social media was significantly positively correlated with the respondents' cancer risk perception [36]. Other studies have found that through social media access to information related to MERS, individuals' risk perception level was significantly improved [37,38]. In the context of food safety, consumers' perceptions of food safety risks determine their intentions and behaviours in purchasing these foods [39]. An empirical study of the salmon incident in Beijing's Xinfadi in 2020 confirmed this point: the stronger their risk perception, the more consumers avoided purchasing salmon-related food [2]. Accordingly, this study proposes the following hypothesis: H3: Risk perception mediates the relationship between online food safety information acquisition and food risk prevention behaviour.
Information credibility is crucial for effective risk communication, and the strength of information credibility directly affects the public's willingness to engage in preventive behaviours. People who believe that information on social media is trustworthy tend to handle risk information more seriously [40]. Information credibility is affected by factors such as the subject of information sources and channels of information dissemination [41].
Studies have shown that hard news in traditional media is more credible than that in new media, and there is no significant difference in the credibility of soft news between new media and traditional media [42]. Furthermore, from the perspective of the persuasion effect, a large number of studies have highlighted the direct and positive effects of information credibility on individual behaviour [43][44][45][46]. For example, Hong et al. (2019) [47] found that in the face of earthquake threats, information credibility affects the public's assessment of disaster severity and evacuation decisions. Yueh et al. (2022) [48] suggested that information with high credibility can reduce the uncertainty of information seekers, making them more willing to take related actions to overcome risks. Dong et al. (2018) [49] noted that high-credibility information is better able to elicit public perceptions of climate change risks in people's personal lives and is more likely to trigger climate-related action. Accordingly, this study proposes the following hypothesis: H4: Information credibility mediates the relationship between online food safety information acquisition and food risk prevention behaviour.
According to the PADM, the channels and frequency of obtaining information in risk events affect the information credibility and risk perception of the public, which in turn affect people's decision-making behaviour. Is there a potential impact mechanism between the two major psychological perceptions of information credibility and risk perception? Some studies have shown that information credibility is an important predictor of risk perception [50,51]. Studies have also shown that media messages shape people's perception of risk and subsequently influence their mental health and behaviours [52]. Based on the above discussion, this study argues that the acquisition of online food safety information affects the credibility people assign to that information, which further affects their subjective perception and judgement of risk and thus changes their intention to engage in preventive behaviour. Therefore, we hypothesize the following: H5: Information credibility and risk perception play a chain intermediary role between online food safety information acquisition and food risk prevention behaviour.
Sample and data source
This study uses survey data from the fifth wave of the first Taiwan Communication Survey database. This wave of the survey took "risk and disaster communication" as the research topic and Taiwanese people over 18 years old as the interviewees. The study used a stratified three-stage PPS sampling method to sample towns and cities, villages, house numbers, and family members and obtained a total of 2098 valid samples.
Food risk preventive behaviour
Ten behaviours were measured that people adopt to protect themselves against common food risk events and serious food incidents in Taiwan. These strategies are promoted by national and local health and food safety authorities (e.g., the Ministry of Health and Welfare, Food and Drug Administration, and Office of Food Safety of the Executive Yuan), whose official websites contain relevant policies and news related to food safety. They were also promoted by consumer groups and nonprofit organizations focused on health promotion [53,54]. Examples of items are "avoiding drinking beverages in plastic cups" and "avoiding using plastic bags and plastic containers to hold cooked food or for microwave heating". The respondents' answer options were "0 = No, 1 = Yes". Cronbach's alpha for the 10 items was 0.76, indicating that the measurement was reliable. The food risk prevention behaviour variable was measured as the sum of the item scores. The total score ranged from 0 to 10 (M = 6.165, SD = 2.617).
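A minimal Python sketch of how such a composite score and its Cronbach's alpha could be computed from the 0/1 item responses; the item matrix below is randomly generated for illustration only and does not reproduce the survey data.

import numpy as np

rng = np.random.default_rng(1)
items = rng.integers(0, 2, size=(2098, 10))    # invented 0/1 responses to the 10 items

prevention_score = items.sum(axis=1)           # composite score, range 0-10

def cronbach_alpha(item_matrix):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = item_matrix.shape[1]
    item_var = item_matrix.var(axis=0, ddof=1)
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

print(prevention_score.mean(), cronbach_alpha(items))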
Online food safety information acquisition
This issue was assessed with the following question: "How often do you usually obtain food safety-related information (e.g., plasticizers, cooking oil safety, contaminated food, pesticide residue) from the internet?" This self-rated measurement of information acquisition has been commonly applied in previous studies [55][56][57][58].
The respondents' answer options were on a four-point scale (1 = never, 2 = rarely, 3 = sometimes, 4 = often). The higher the value, the higher the frequency of contact with food safety information on the internet. In the propensity score matching analysis, "never" and "rarely" were recoded as 0 and "sometimes" and "often" were recoded as 1, forming the control group and the experimental group, respectively (M = 2.547, SD = 1.156).
Degrees of topic attention
This issue was assessed by the following question: "Do you care about food safety?"The respondents' answer options were on a four-point scale (1 = very indifferent, 4 = very concerned).In this study, "very indifferent" and "not very concerned" were recoded as 0, representing low issue concern, while "a little concerned" and "very concerned" were recoded as 1, representing high issue concern.
Risk perception
Risk perception was measured with two items: "Do you think food safety problems may affect your health?" and "Do you think food safety problems have a serious impact on your health?" The respondents' answer options were "1 = very unlikely, 4 = very likely" and "1 = not serious, 4 = very serious". This study summed the two items and took the average to develop a "risk perception" scale. The higher the score, the higher the risk perception of the respondents. Cronbach's alpha of the 2 items was 0.80 (M = 3.388, SD = 0.650).
Information credibility
Information credibility was assessed by the following question: "Do you believe the food safety information provided by the internet?" The respondents' answer options were measured on a four-point scale from "mostly do not believe" (coded 1) to "mostly believe" (coded 4). The higher the value, the more the respondents believed the information about food safety obtained online (M = 3.127, SD = 1.045).
Control variables
In this study, gender, age, and education level were included in the model as control variables. Males accounted for 44.6% of the sample (0 = female, 1 = male). The gender proportion was relatively balanced. The education level was divided into seven categories, with 45.8% of respondents having college degrees or above, indicating that most respondents had a higher education level. Previous literature notes that whether the public has experienced food safety problems also affects their perception of food risks [59]. Therefore, the respondents' experience was also included in the model as a control variable. The question "Have you or your family ever been affected by food safety problems?" in the questionnaire was used for measurement. The answer options were "0 = No, 1 = Yes".
Statistical analysis
In the study of media effects, there are variables that confound the relationship between independent variables and dependent variables, resulting in selection bias and making it difficult for researchers to directly estimate the "net effect" between the two. Some studies have shown that individual heterogeneity is an important reason for the physical access gap [60][61][62]. In conventional quantitative analysis, we can attempt to eliminate the influence of these competing explanatory factors, but no regression model can do so completely. The remaining confounding variables deprive the research results of a causal interpretation. Given the impact caused by selection bias, an effective response is propensity score matching. The operational logic of propensity score matching is closer to the requirements of classical randomized experiments under the counterfactual framework. In propensity score matching, individuals from different groups are matched according to the proximity of their propensity scores. An individual may be matched with multiple individuals in the other group to form a treatment group and a control group. The matched samples effectively control the selection bias caused by the confounding variables, and a "quasirandom" experiment is reconstructed to calculate the "net effect" of the independent variables on the dependent variables [63]. Therefore, this study used propensity score matching to address the impact of selection bias on the research results and to ensure the reliability of the research conclusions.
This study first used propensity score matching to analyse the direct impact of online food safety information acquisition on preventive behaviours. To test the robustness of this effect, this study simultaneously used three propensity score matching methods, radius matching, nearest-neighbour matching and kernel matching, to conduct the empirical analysis. Furthermore, the heterogeneous impact of differences in the degree of topic attention was examined by running propensity score matching separately within the topic attention groups. Stata 15.1 SE software was used for these two steps. Finally, to examine whether information credibility and risk perception are mediators of the relationship between online food safety information acquisition and food risk prevention behaviours, we conducted a mediation analysis using Hayes' PROCESS macro [64]. PROCESS uses a path analysis framework to estimate the ordinary least squares regression coefficients of every model pathway. To test the indirect effects, the bootstrapping technique (5000 resamples) was applied to ensure more robust estimations than the Sobel approach [65]. Bootstrapping has greater power and minimizes type I errors by resampling subsets of data from the given dataset and then summarizing the final results from the statistical tests on these subsets [66][67][68]. All statistical analyses were performed using IBM SPSS Statistics 26.0 and the PROCESS macro Model 6 for SPSS.
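For illustration, a minimal Python sketch of the core matching step (propensity scores from a logistic regression, 1:4 nearest-neighbour matching, and the ATT as the mean outcome difference between treated respondents and their matched controls). The study itself used Stata 15.1; the function below is an assumed, simplified analogue rather than a reproduction of its commands.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_nearest_neighbour(X, treated, outcome, k=4):
    # X: (n, p) covariates (gender, age, education, experience, ...);
    # treated: (n,) 0/1 indicator of frequent online food safety information acquisition;
    # outcome: (n,) food risk prevention behaviour score (0-10).
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    treated_idx = np.where(treated == 1)[0]
    control_idx = np.where(treated == 0)[0]

    # 1:k nearest-neighbour matching on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=k).fit(ps[control_idx].reshape(-1, 1))
    _, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))

    # ATT = mean treated outcome minus mean outcome of each treated unit's matched controls.
    matched_means = outcome[control_idx][matches].mean(axis=1)
    return float((outcome[treated_idx] - matched_means).mean())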
Direct effect test
This study constructed a logistic regression to estimate the propensity scores, using the recoded binary online food safety information acquisition variable as the dependent variable. The regression results showed that the pseudo R2 was 0.273 and the -2 log-likelihood was 2072.189, indicating that the overall goodness of fit of the model was high and that the selected independent variables had a strong predictive effect on the acquisition of online food safety information.
Since the effective sample size of this survey was limited, propensity score matching with replacement was conducted, and parallel matching was allowed. Abadie et al. (2004) [69] suggested one-to-four matching to minimize the mean square error, so the nearest-neighbour matching in this study was one-to-four matching. The average treatment effect on the treated (ATT) was the core index for evaluating the matching effect [8]. In this study, the ATT was equal to the food risk prevention behaviour of the high online food safety information acquisition group (experimental group) minus that of the low online food safety information acquisition group (control group), which is the real effect of the acquisition of online food safety information on food risk prevention behaviour. The results of propensity score matching in Table 1 show that the ATT before matching was 0.534 (t = 4.36, p < 0.001). However, because the sample before matching was affected by the confounding variables, the net effect cannot be obtained from this figure; it reflects the spurious effect produced jointly by the independent variable and the confounding variables. To obtain the real effect of online food safety information acquisition, this study adopted three matching methods, radius matching, nearest-neighbour matching and kernel matching, to conduct propensity score matching. The ATT results of the three matching methods were 1.057 (t = 6.46, p < 0.001), 1.033 (t = 5.84, p < 0.001) and 1.065 (t = 6.51, p < 0.001), respectively, which shows that the three matching results were basically consistent. The propensity score matching results were robust, indicating that the acquisition of online food safety information has a significant positive impact on food risk prevention behaviour.
To ensure the validity of the propensity score matching results, the matching process must meet the balance assumption and the common support assumption. Taking the nearest-neighbour matching (1:4) result as an example, the balance test results are shown in Table 2. The standardized bias of all covariates after matching was less than 10%, indicating that the test results did not reject the null hypothesis that there was no systematic difference between the treatment group and the control group. Therefore, nearest-neighbour matching passed the balance test. Furthermore, radius matching and kernel matching also passed the balance test; due to space limitations, these results are not repeated here.
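The balance diagnostic referred to above is typically the standardized (percentage) bias of each covariate. A minimal sketch of how it could be computed for one covariate, with |bias| < 10% after matching taken as the usual balance criterion:

import numpy as np

def standardized_bias(x_treated, x_control):
    # Standardized (percentage) bias of one covariate between treated and control samples:
    # 100 * (difference in means) / square root of the average of the two group variances.
    pooled_sd = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return 100.0 * (np.mean(x_treated) - np.mean(x_control)) / pooled_sd

# A covariate is commonly judged balanced when abs(standardized bias) < 10 after matching.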
Figure 1 further illustrates the test results for the common support hypothesis of nearest-neighbour matching. The kernel density plot of the propensity scores in Fig. 1 shows that, before matching, the kernel density curves of the experimental group and the control group differed markedly. After matching, the two kernel density curves followed roughly the same trend, and the difference between the two groups of samples was significantly reduced. This indicates that, without matching, selection bias would have distorted the estimate of the real effect of online food safety information acquisition on food risk prevention behaviour. The situation for radius matching and kernel matching was similar and will not be repeated, so all three matching methods met the common support assumption. In summary, the results of the three propensity score matching methods are valid, and research Hypothesis H1 is supported.
Heterogeneity analysis of topic concern
In this study, the samples with high and low topic attention were used separately for propensity score matching to examine the heterogeneous impact of users' different levels of attention to online food safety information (see Table 3). The analysis of the high topic concern group showed that the ATT value before matching did not pass the significance test (t = 1.13), but after radius matching, nearest-neighbour matching and kernel matching, the ATT values were 0.917 (t = 5.76), 0.866 (t = 4.81) and 0.978 (t = 5.80), respectively, and the results were significant at the 0.001 level. However, the ATT values before and after matching in the low topic concern group did not pass the significance test (0.04 ≤ |t| ≤ 0.7). In summary, the food risk preventive behaviour of users with low topic attention does not change with an increase in the frequency of online food safety information acquisition. However, for users with high topic attention, an increase in the frequency of online food safety information acquisition significantly increases their food risk prevention behaviour. Therefore, research Hypothesis H2 is supported.
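A minimal sketch of the subgroup analysis, re-running the matching estimator separately within the high- and low-concern strata; it reuses the att_nearest_neighbour function from the matching sketch above, and the data generated here are purely synthetic placeholders rather than the survey data.

import numpy as np

rng = np.random.default_rng(2)
n = 2098
X = rng.normal(size=(n, 4))                     # placeholder covariates
concern = rng.integers(0, 2, n)                 # 1 = high topic concern, 0 = low
treated = rng.integers(0, 2, n)                 # 1 = frequent online information acquisition
outcome = rng.integers(0, 11, n).astype(float)  # prevention behaviour score (0-10)

for label, mask in (("high concern", concern == 1), ("low concern", concern == 0)):
    att = att_nearest_neighbour(X[mask], treated[mask], outcome[mask], k=4)
    print(f"ATT ({label}): {att:.3f}")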
Mediating effect test
We used the bootstrap method to test the mediating effects of information credibility and risk perception on the relationship between online food safety information acquisition and food risk prevention behaviour. The test results (Table 4) show that in the first mediation path, online food safety information acquisition had a significant positive indirect effect on food risk prevention behaviour through information credibility (β = 0.007, 95% CI [0.002, 0.014], excluding 0); therefore, H4 is supported. In the second mediation path, online food safety information acquisition also had a significant positive indirect effect on food risk prevention behaviour through risk perception (β = 0.043, 95% CI [0.029, 0.057], excluding 0); therefore, H3 is supported.
Chain-mediated effect test
We used Model 6 of the PROCESS macro in SPSS 26.0 to specify the chain mediation model and adopted the bootstrap method to test the chain mediating effect of information credibility and risk perception.
The test results (Table 4) showed that the path online food safety information acquisition → information credibility → risk perception → food risk prevention behaviour was significant (β = 0.002, 95% CI [0.001, 0.003], excluding 0), indicating that the chain mediating role of this path is positive and significant. That is, the acquisition of online food safety information can enhance risk perception by increasing the perceived credibility of the information, thus affecting food risk prevention behaviour. Therefore, H5 is supported.
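For illustration, a minimal Python sketch of a percentile-bootstrap test of the serial indirect effect (acquisition → credibility → risk perception → prevention behaviour), in the spirit of PROCESS Model 6 with 5000 resamples; the data and coefficients below are synthetic placeholders, so this is an analogue of the procedure rather than the study's actual SPSS analysis.

import numpy as np

rng = np.random.default_rng(3)
n = 2098
x = rng.normal(size=n)                                    # online food safety information acquisition
m1 = 0.3 * x + rng.normal(size=n)                         # information credibility
m2 = 0.2 * x + 0.4 * m1 + rng.normal(size=n)              # risk perception
y = 0.1 * x + 0.2 * m1 + 0.3 * m2 + rng.normal(size=n)    # food risk prevention behaviour

def ols_coefs(cols, response):
    # OLS coefficients for an intercept plus the given predictor columns.
    design = np.column_stack([np.ones(len(response))] + cols)
    return np.linalg.lstsq(design, response, rcond=None)[0]

def serial_indirect(idx):
    a1 = ols_coefs([x[idx]], m1[idx])[1]                    # X -> M1
    d21 = ols_coefs([x[idx], m1[idx]], m2[idx])[2]          # M1 -> M2, controlling for X
    b2 = ols_coefs([x[idx], m1[idx], m2[idx]], y[idx])[3]   # M2 -> Y, controlling for X and M1
    return a1 * d21 * b2

boot = np.array([serial_indirect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect, 95% bootstrap CI: [{lo:.4f}, {hi:.4f}]")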
Online food safety information acquisition is an important factor affecting food risk prevention behaviour
This study used propensity score matching to address potential selection bias and found that obtaining online food safety information significantly increases individuals' food risk preventive behaviour. That is, the more frequently online food safety information is obtained, the more likely individuals are to adopt food risk preventive behaviour. When facing issues that have not yet been personally experienced, individuals are particularly susceptible to the influence of media information [70]. In crisis and risk situations, mass communication cultivates the public's cognition and behavioural tendencies [71]. Social media can also produce this cultivation effect [72], changing cognition, attitudes, behaviour and other aspects [73]. The internet, as a major source of information, can raise public awareness of food safety and improve food safety [74][75][76]. However, it is necessary to be vigilant about the "social amplification of risk" effect of the internet. Some online media not only present uncertain food risk information but also provide inaccurate, incorrect, or misleading information to catch the eye [77]. This can easily lead to public panic and produce a "vicarious traumatization" effect among those without first-hand experience, resulting in an irrational herd phenomenon.
For government agencies, these challenges may be particularly severe [78]. Given the strong impact of online food safety information acquisition, online media should enhance their agenda-setting capabilities and expand the coverage and dissemination of correct food safety information on internet platforms. First, efforts should be made to plan topics framed around the individual value of adopting food risk preventive behaviours. Second, the news media should establish an "action" framework, unite with opinion leaders in professional fields, popularize science effectively, comprehensively and credibly, and provide food safety knowledge related to the daily life of the public to improve the public's awareness of and ability to avoid food risks.
Differences in topic attention produce group differences in the impact of online food safety information acquisition
This study found that internet users with a high degree of topic attention focus their limited personal attention on acquiring and reading food safety information, which significantly increases relevant preventive behaviour. Users with low topic attention do not spend time on food safety information, and their motivation to invest effort is low: they quickly browse or skip food safety information. Hence, their preventive behaviours are not affected by the acquisition of information on the internet. These findings further enrich relevant research at the theoretical level. Previous research has focused on predicting how the degree of public attention to portal news and food information on social media affects the public's risk perception [79], with little attention to the heterogeneous effects of topic attention. At the practical level, online media and the government should be reminded that the type and content of food safety news reports should differ from person to person to ensure that everyone receives accurate and timely information, which allows people to take preventive actions to protect their health and safety.
For the general public, attention to an issue is usually affected by the degree of information intervention [80]. Zaichkowsky (1994) [81] proposed that the degree of individual involvement with information can be measured along the dimensions of importance, relevance, meaningfulness, and worth. When food safety events occur, news reports in online media and the government's response can be formulated to emphasize the high relevance of food safety and the importance of taking preventive measures. These actions will improve the public's attention to food safety information. In practice, big data technology can also be used to identify groups with low attention to food safety information among internet users, and algorithmic recommendation can be used to strengthen their experience of food risk and encourage them to take corresponding preventive measures to protect their health and safety.
Information credibility and risk perception are important psychological mechanisms through which online food safety information acquisition affects food risk prevention behaviour
This study found that information credibility plays a positive mediating role between online food safety information acquisition and food risk prevention behaviour. This is consistent with the conclusions of Martins et al. (2018) [82]: information with high credibility is more easily accepted by users and can change their behaviour. The elaboration likelihood model (ELM) distinguishes two routes to attitude change, depending on whether the audience engages in careful, elaborate processing: a central route and a peripheral route [83]. Attitudes formed via the central route are more stable than those formed via the peripheral route. In the internet environment, the ELM suggests that the quality of information content operates through the central route and the credibility of information sources through the peripheral route [84]. In this regard, online media can report food risk events by adhering to a strategy that primarily improves the quality of information content and supplements it with improvements to the credibility of information sources.
With regard to the quality of information content, the Internet User Information Adoption Model suggests that it can be judged against four criteria: accuracy, integrity, timeliness, and relevance [85]. To this end, when releasing information on food safety risk events, online media should report the content of events related to the public's immediate interests in a timely, scientific, comprehensive and complete manner, with high information quality. Media should reject the "eyeball effect", sensationalism and quoting out of context, thereby reducing the public's sense of uncertainty and helping the public form stable food risk prevention behaviours.
In terms of information source credibility, this study found that institutional microbloggers are regarded as more credible than individual microbloggers, and professional opinion leaders as more credible than social celebrities [86]. When a food safety event occurs, online media and the government should invite research institutions and experts to interpret the event. These actions will increase the credibility of the information source and reduce the instability of the peripheral route and the sleeper effect.
Moreover, this study found that risk perception plays a positive mediating role between online food safety information acquisition and food risk preventive behaviour. In other words, online food safety information acquisition can indirectly affect the public's relevant preventive behaviour by influencing their risk perception. This result is consistent with existing findings in crisis and risk communication [87]. However, it should be noted that, compared with the indirect effects, online food safety information acquisition has stronger direct explanatory power for preventive behaviours. This further supports Perse's view that in crisis situations the influence of the direct effect model based on traditional magic bullet theory is enhanced [88], and users show stronger information compliance and action response to food safety information.
From single to continuous: the serial mediating roles of information credibility and risk perception
Significantly, this study also found that information credibility and risk perception play a serial mediating role between online food safety information acquisition and food risk prevention behaviour. Previous studies have mainly focused on the mechanisms of single mediating variables or multiple parallel mediating variables, and have lacked exploration of the interaction between mediating factors. In practice, the interaction between different psychological factors has different impacts on individual behaviours, and implicit internal cognition is closely related to explicit external behaviour. The chain mediation model can provide a more comprehensive and reasonable explanation of individual behavioural preferences. Therefore, this study innovatively proposes that the two should be seen as a continuous reaction process, because the stimulating effect of online food safety information acquisition is not realized through information credibility or risk perception alone, but is more likely to affect preventive behaviour through the transmission and joint effect of the two. Information credibility positively affects risk perception, improves the public's perception of their susceptibility to food risks and of the severity of those risks, and further promotes the adoption of food risk prevention behaviour. This reveals the "black box" mechanism through which online food safety information acquisition affects food risk prevention behaviour, and builds a theoretical chain of "information acquisition - information credibility - risk perception - prevention behaviour". The relevant theory is thereby extended, and a reference paradigm is provided for subsequent research.
Although the above conclusions help to establish a possible path from the acquisition of information on food safety online to the prevention of food risk, there are still some limitations. First, this was a cross-sectional study. The research logic was first to deduce the causal relationship and mechanism between online food safety information acquisition and food risk prevention behaviour from the PADM, and then to use survey data from the Taiwan Communication Survey Database to confirm or falsify the research hypotheses. Second, the secondary data used in this study limited our choice and specification of some variables, which may have affected the final results. Third, this study only explored the impact of online food safety information acquisition on preventive behaviour at the individual level and did not include structural factors at the macro level.
Conclusions
Food safety is a significant health issue that people face daily. An increase in the frequency of online food safety information acquisition significantly increases individuals' food risk prevention behaviour. However, only users with high concern about the issue are affected; the food risk prevention behaviour of users with low concern is not affected by online food safety information acquisition. Further analysis showed that risk perception and information credibility both play a mediating role in the impact of online food safety information acquisition on food risk prevention behaviour. Moreover, the transmission and joint effects between information credibility and risk perception play a chain mediating role. Our findings can provide empirical evidence for risk managers and decision-makers to re-evaluate the role of the internet in food risk management. Further studies should improve the operationalization of variables and explore the causal relationship and impact mechanism between online food safety information acquisition and food risk prevention behaviour in a more comprehensive and accurate way. In addition, follow-up studies could use controlled experiments, which may be more suitable for testing causality.
Fig. 1
Fig. 1 Kernel density map of propensity values before and after nearest neighbor matching
Table 2
Balance Test Results of Nearest Neighbor Matching
Table 4
Mediating effect test between information credibility and risk perception | 2023-10-02T13:42:50.067Z | 2023-10-02T00:00:00.000 | {
"year": 2023,
"sha1": "6ceab02bfef87f0c9b956f32e21c561bdadd3231",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/counter/pdf/10.1186/s12889-023-16814-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ca7e59f659bd4c492a5f51ea42f309c5b2a4feb",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244174246 | pes2o/s2orc | v3-fos-license | Clinical, Radiological and Evolutionary Aspects of Higher Grade of Injury of Subaxial Spine
Introduction: Higher grade injury of the subaxial spine is any osteo-disco-ligamentary injury clinically classified as ASIA A or B, very often compromising the patient's functional and vital prognosis. In the African literature, few works have addressed this problem. The objective of this study was to evaluate the prognosis of patients with high grade cervical spine trauma. Patients and methods: Over a 10-year period from 2009 to 2019, we retrospectively studied 48 records of patients with higher grade injury of the subaxial spine followed in the neurosurgery department of the Fann University Hospital. Results: Higher grade forms were frequent and represented 32% of subaxial spine injuries. The average age was 35 years, with extremes of 16 to 60 years. Males predominated, with a sex ratio of 15, and falls were the most common cause. All patients presented a severe neurological deficit, with 81% classified as ASIA A. Associated injuries such as head trauma were observed in 6.25%. Surgical treatment was performed in 44%, mostly by an anterior approach. Mortality was high, at 79%. Only one patient among the 10 survivors had made a complete recovery after 12 months of follow-up. Conclusion: Improvement of the technical platform for the pre-hospital and hospital management of higher grade injuries of the subaxial spine and the creation of secondary neurosurgical care centers could improve the functional and vital prognosis. However, prevention, together with raising the awareness of the population, remains the basis and constitutes the best treatment.
Introduction
Trauma to the subaxial spine comprises all osteo-disco-ligamentary injuries involving the spine between the C2-C3 and C7-T1 discs.
Highly mobile and poorly protected, the cervical spine is particularly vulnerable to trauma [1]. We speak of higher grade spinal injuries when the vital and functional prognosis is at stake. In our context, these concern patients classified as ASIA A or B, who are mostly young males [2,3], because this group is more exposed to traffic accidents, which represent about half of the aetiologies, followed by falls and sports injuries [4].
Injuries to the subaxial spine are caused by multiple traumatic vectors. The diagnosis and management of spinal injuries must be early in order to limit the extension of the lesions. The African literature in general, and the Senegalese literature in particular, reports few works devoted specifically to trauma of the subaxial spine with particular emphasis on clinical, radiological and evolutionary outcomes. The objective of this work was to determine the epidemiological, clinical, radiological and evolutionary aspects of so-called higher grade injuries of the subaxial spine.
Patient and Methods
Over a period of 10 years, from 1 May 2009 to 1 May 2019, we retrospectively studied 48 files of patients followed in the neurosurgery department of the Fann University Hospital for trauma to the subaxial spine. We only included patients who had sustained higher grade trauma to the subaxial spine, i.e. patients classified as ASIA A or B. The study parameters were: frequency, age, sex, circumstances of the injury, time to admission, clinical status (ASIA classification), imaging findings, type of treatment, and outcome at 12 months based on the ASIA score and radiological findings. Data were analysed using SPSS version 20 software.
Results
During the study period, we recorded 150 cases of traumatic injury to the subaxial spine, of which 48 were classified as higher grade subaxial spine injuries, i.e. a frequency of 32%. The average age of the patients was 35±15 years, with extremes of 16 and 60 years. The age group most affected was between 26 and 45 years and represented 56%. The predominant sex was male, with a sex ratio of 15. Only 2 trauma patients were referred directly to the emergency department of our study setting, while 46 patients were referred from another health facility. The conditions of pick-up and management at the trauma site were not specified. The admission time after the trauma was greater than 10 hours in 85% of cases. Among the 48 patients, 23 were victims of a fall (from a tree, from a height, from a car, or from the second floor), 18 of road accidents, and 7 of other types of accident (3 patients were injured during a wrestling match, 2 had received a bag of millet on the head, 1 had a diving accident, and 1 had hit a wall) (Table 1).
According to the ASIA classification, ASIA A was predominant, accounting for 81.25% of cases, compared with 18.75% of ASIA B cases. Tetraplegia was the most frequent clinical presentation in the series, seen in 39 cases (81.25%). Neurovegetative disorders were present in 9 cases, i.e. 18.75% of our patients, dominated by priapism in 5 cases and respiratory disorders in 4 cases.
Sphincter disorders were found in 12 cases, 10 of which had constipation and 2 of which had urinary disorders such as retention or incontinence. Trophic disorders such as pressure sores were observed in 3 of our patients (6.25% of cases). Associated injuries and symptoms were present in 18.75% of cases, dominated by head trauma and skin wounds (6.2% each). Overall, 93.7% of patients in our series had vertebro-medullary lesions. Of these, 28 patients had disco-ligamentary lesions (58.3% of cases), 15 patients had bone lesions (31.2%), 2 patients had mixed lesions (4.2%), and 3 patients had pure spinal cord lesions. The most common lesion levels were C4-C5 and C5-C6, at 24.4% and 37.8% respectively (Table 2).
Among the lesions, the most common were fracture-dislocations followed by simple fractures, with frequencies of 43.75% and 31.25% of cases respectively.
Pre-hospital care (pick-up, transfer conditions) was not specified; however, secondary transport was carried out by the health facility's ambulance. Surgery was the most common treatment, performed in 21 cases (44%). Only one case was treated via a posterior approach; almost all the others underwent an anterior approach with screwed-plate osteosynthesis. All patients received functional, bladder and bowel rehabilitation, except for those who died early. The post-therapy evaluation of our series of 48 patients at 12 months of follow-up found 10 surviving patients, i.e. 20.8%. These patients were re-evaluated neurologically using the ASIA score (Table 3).
Only one patient in the series made a full recovery: a 32-year-old who had received a bag of millet on the head, causing a cervical spine injury classified clinically as ASIA B on admission, and whose CT scan of the cervical spine showed a vertebro-disco-ligamentary injury of the fracture-dislocation type. He underwent anterior surgery followed by several sessions of motor physiotherapy, which led to a full recovery after one year.
The average follow-up time was 12 months. The mortality of subaxial spine injuries in our setting was 79%: 24 deaths (50%) occurred in patients initially classified as ASIA A who were not operated on, and 14 patients died postoperatively, with death related to neurovegetative disorders and complications of prolonged decubitus. Thirty-four of the deceased patients had initially been classified as ASIA A with neurovegetative disorders, and practically all subjects aged over 60 years died. Death was considered early if it occurred before one month of evolution, intermediate between one month and 6 months, and late after 6 months; 35 cases in our series (72%) died early.
Discussion
In Africa, there are few epidemiological studies giving an exact idea of the frequency of cervical spine injuries. During our study period, subaxial spine injuries represented 3% of all pathologies recorded in the department. The work carried out by [5] found a frequency of 2 to 3%, similar to ours, which suggests that cervical spine trauma remains an under-explored field in orthopaedic traumatology. The literature reports 35 years as the average age of occurrence of the trauma. The fact that young adults are the active segment of our society, and therefore the most exposed, and that they practise violent sports explains the occurrence of spinal injuries in our context [6][7][8][9].
By far the most common cause of cervical spine injuries in general is road traffic accidents, as reported in the international literature [10][11][12][13][14]. In our series, however, the most frequent aetiology was falls: most of our patients were farmers with an increased risk of falling from a tree, or workers in risky trades such as masonry carried out without any form of protection. The delay between the occurrence of higher grade spinal trauma and the first treatment plays an important role in the patient's outcome. In our context, this long delay was explained by the failure of the emergency management system and the poor state of the roads in our environment, but also by the absence of peripheral centres for neurosurgical care and management of cervical spine injuries in some regions of Senegal [15,16].
The ASIA (American Spinal Injury Association) score remains the most widely used means of evaluating spinal trauma. The predominance of grade A reflects the severity of spinal injuries caused by falls, the most frequent aetiology in our study. Higher grade injury to the spine is associated with a severe neurological picture, including genito-sphincter disorders. Imaging in the context of spinal trauma is currently guided by decision rules such as those reported in Canadian studies (NEXUS, CCR) [17], which identify the type of imaging to be requested.
Disco-ligamentary lesions are always the most frequent in cervical spine trauma, and even slight spinal cord damage during spinal trauma can be caused by a compressive bone fragment. Secondary injuries can also be observed, such as secondary spinal cord lesions defined by a cascade of events that affects the spinal cord initially spared by the impact and leads to post-traumatic spinal cord self-destruction [18,19]. Limiting these secondary injuries is the aim of multiple studies searching for pharmacological agents capable of stopping this cascade of events [20,21].
The management of spinal trauma begins at the scene of the accident, with the patient lifted as a single unit, respecting the alignment of the head-neck-trunk axis, and placed in a rigid shell. Medical transport by a team trained in emergency care remains the basis for preventing secondary traumatic injuries. In our environment, almost all of our patients (46 of 48) underwent a secondary transfer, because initial transport is carried out by the fire brigade to peripheral health structures, which then refer patients to neurosurgical centres due to the lack of services capable of dealing with spinal injuries.
The type of injury guides the management of spinal trauma, which may be orthopaedic or surgical. The aim is to optimize the chances of neurological recovery by achieving decompression of neurological structures and to fix unstable injuries.
The subaxial spine can be approached anteriorly (the most used in our series), posteriorly, or by a double or combined approach. The posterior approach, developed by Roy-Camille, was used for a long time [22]. Today, the anterior approach is the preferred method for the surgical treatment of cervical spine disorders, whether degenerative, traumatic, tumoral or vascular, as it provides easy access to the injured disco-corporeal region.
In our series, 42% of patients underwent an anterior approach versus 2% a posterior approach, and the combined approach was not used. These results are comparable to those reported in the literature (Table 4) [23][24][25].
Historically, the popular treatment in France, following the work of R. Roy-Camille [22], was posterior plate osteosynthesis, aimed at stabilizing the injured disc segment; if radiological control did not show satisfactory release of the canal, an anterior approach with excision and grafting was added. Orthopaedic treatment consisted of reduction by traction followed by immobilization in a rigid cervical collar. Routine administration of corticosteroids in acute spinal injuries is not indicated according to the studies conducted by the NASCIS II group [24].
The outcome of cervical spine trauma in general, and of the subaxial spine in particular, is closely related to the initial neurological state: complete (grade A) spinal cord injuries show very poor neurological improvement, while incomplete (grade B) injuries have a better chance of recovery. The level of injury also matters, as neurological damage above C4 is accompanied by a very high number of deaths, because lesions above C4 affect the respiratory centres and the main breathing muscle (the diaphragm), causing acute respiratory failure. Post-therapy follow-up of patients therefore remains important.
At 12 months, of the 10 surviving patients (21% of cases), one had recovered completely. The remaining nine had made a partial recovery: one patient remained ASIA B, five had progressed to grade C and three to grade D, largely thanks to early functional rehabilitation. Such rehabilitation is essential for good professional reintegration and maximum personal independence, provided it is well conducted; however, it remains limited by patients' lack of financial means, and the often widely spaced appointments do not allow intensified rehabilitation aimed at a better chance of recovering mobility. Another challenge is the unavailability of appropriate rehabilitation facilities in certain localities, which remains a subject of reflection in the management of neurological disorders in Africa.
Given all these problems linked to the lack of financial and material resources for the treatment of cervical spine injuries, Africa in general and Senegal in particular must promote prevention of these injuries by optimizing awareness campaigns on road traffic accidents through respect of the highway code, and by educating the paediatric population about the risk of falls from climbing trees. As for workers, wearing appropriate safety equipment on construction sites remains the only way to limit the risk of spinal injuries.
Conclusion
The improvement of the technical platform in the pre-hospital and hospital management of serious trauma of the lower cervical spine and the creation of secondary neurosurgical care centers can | 2021-10-18T16:58:23.140Z | 2021-09-25T00:00:00.000 | {
"year": 2021,
"sha1": "bc32f16e0ab4412e3628dbca09ecfbe1e025b635",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24966/asse-3126/100028",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b3a5a1cb36083015156046c5f605b4acfd010224",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15957398 | pes2o/s2orc | v3-fos-license | Daily Rhythmic Behaviors and Thermoregulatory Patterns Are Disrupted in Adult Female MeCP2-Deficient Mice
Mutations in the X-linked gene encoding Methyl-CpG-binding protein 2 (MECP2) have been associated with neurodevelopmental and neuropsychiatric disorders including Rett Syndrome, X-linked mental retardation syndrome, severe neonatal encephalopathy, and Angelman syndrome. Although alterations in the performance of MeCP2-deficient mice in specific behavioral tasks have been documented, it remains unclear whether or not MeCP2 dysfunction affects patterns of periodic behavioral and electroencephalographic (EEG) activity. The aim of the current study was therefore to determine whether a deficiency in MeCP2 is sufficient to alter the normal daily rhythmic patterns of core body temperature, gross motor activity and cortical delta power. To address this, we monitored individual wild-type and MeCP2-deficient mice in their home cage environment via telemetric recording over 24 hour cycles. Our results show that the normal daily rhythmic behavioral patterning of cortical delta wave activity, core body temperature and mobility are disrupted in one-year old female MeCP2-deficient mice. Moreover, female MeCP2-deficient mice display diminished overall motor activity, lower average core body temperature, and significantly greater body temperature fluctuation than wild-type mice in their home-cage environment. Finally, we show that the epileptiform discharge activity in female MeCP2-deficient mice is more predominant during times of behavioral activity compared to inactivity. Collectively, these results indicate that MeCP2 deficiency is sufficient to disrupt the normal patterning of daily biological rhythmic activities.
Introduction
Mutations in the X-linked gene encoding Methyl-CPG-binding protein 2 (MECP2) cause the neurodevelopmental disorder Rett syndrome [1], and MECP2 mutations and duplications have been documented in several other neurodevelopmental and neuropsychiatric disorders, such as X-linked mental retardation syndrome, severe neonatal encephalopathy, Angelman's syndrome, and in some cases of idiopathic autism [2][3][4]. Further, diminished levels of MeCP2 have been noted in the autistic brain [5], and in cases of nonspecific neuropsychiatric disorder [6]. These observations highlight the essential role played by MeCP2 in establishing and maintaining neural homeostasis, and illustrate that modest alterations in its prevalence are sufficient to induce neurological impairments.
To better elucidate how MeCP2 regulates neural development and neural function, and to allow for preclinical translational studies, several mutant mouse models have been developed that either lack MeCP2 or express a clinically relevant mutant form of MeCP2 [7][8][9][10][11]. Studies in these mice have confirmed that MeCP2 deficiency alters normal brain development, synaptic communication, and neural network activities [12][13], and several behavioral impairments have been identified that likely stem from these neural deficiencies. To date, however, the primary behavioral parameters examined tend to rely on tests that take the subjects out of their cages, and transiently expose them to a new environment for assay. While these tasks have high value for assessing specific behavioral endpoints, the daily cyclic behavioral performance of the mutant mice in their home environment is not typically assessed, and electrographic brain wave activity patterns that are known to correlate with specific behaviors in wild-type subjects remain largely uninvestigated in MeCP2-deficient mice. Given the apparent link between impaired MeCP2 function and altered behavioral state rhythmicity in Rett syndrome patients [14] and in Mecp2 308/y mice [15], we sought to determine whether impaired MeCP2 function would be sufficient to alter the daily EEG behavior, thermoregulatory, and/or periodic ambulatory cycles of mice. Here, we provide the first report of how these daily cyclic activity patterns are affected by a heterozygous deficiency of Mecp2 in female mice.
Ethics Statement
All animal experimentation was conducted in accordance with the guidelines of the Canadian Council of Animal Care, and thoroughly reviewed and approved before implementation by the Toronto General and Western animal care committee (Protocol 1321.7). All surgery was performed under general anesthesia, and every effort was made to minimize suffering.
Animal Subjects
Two strains of MeCP2-deficient mice (Mecp2 tm1.1Bird [7] and Mecp2 tm2Bird [16], obtained from Jackson Laboratories, Bar Harbor, ME) were used in this study. The Mecp2 tm1.1Bird (n = 7), Mecp2 tm2Bird (n = 4), and wild-type mice were all female, aged between 300 and 400 days, and maintained on a pure C57Bl/6 background. Although different in molecular design, the Mecp2 gene is disrupted in each of these lines, and each displays common phenotypic progression [16]. Genotyping was done via polymerase chain reaction (PCR) as described previously [16][17]. All animals were housed in a vivarium that was maintained at 22-23 °C with a standard 12-hour light on/off cycle commencing at 6:00. For this study, Zeitgeber time of 0 refers to the 6:00 lights-on daily time.
Implantation Surgery
Experimental mice were implanted with a mouse-specific wireless telemetry probe (TA11ETA-F10; Data Sciences International (DSI), St. Paul, MN) for recording of body temperature, general activity and EEG. The surgical implantation procedure was as described previously [18] with minor modifications. Briefly, mice were anesthetized with 2% isoflurane and the wireless transmitter placed into their peritoneal cavity. Silicone elastomer insulated sensing and reference wires connecting the transmitter were orientated rostrally toward the head via a subcutaneous route. The sensing wire was soldered to an intracranial EEG polyimide-insulated stainless steel electrode with an outside diameter of 125 μm, and placed in the parietal cortex region (bregma -0.6 mm, lateral 1.5 mm, and depth 1.5 mm), with the reference wire placed at bregma -5 mm, lateral 1 mm, and depth 1.5 mm. The implantation surgery caused no apparent abnormalities in the mice, and average body weights of both Mecp2 -/+ and wild-type mice returned to pre-operative values within 2 weeks post-surgery (32.3 g versus 32.9 g and 26.8 g versus 27.0 g for Mecp2 -/+ (n = 11) and wild-type (n = 8) respectively).
Electrophysiology Data Collection
Body temperature, activity, and EEG waveforms were collected from the implanted mice for continuous 24-hour periods. Waveform data was transmitted from the TA11ETA-F10 telemetry probes to a wireless receiver (RPC-1, DSI), which passes the data through a data exchange matrix serving as a multiplexer (DSI), and was analyzed using DataQuest A.R.T. (DSI). Body temperature was acquired using the TA11ETA-F10's thermosensor from the peritoneal cavity. Gross locomotive activity was determined by assessing the standard deviation of the wireless signal strength of the transmitter in relationship to two receiving antennae arranged perpendicularly in the RPC-1 wireless receiver. This method and arrangement has been used previously to track and measure locomotive activity in mice [19][20]. The accuracy of the system to detect ambulatory movement was further validated by visually comparing the activity output of the system with movement revealed by synchronized video recordings. Analysis of random 10 minute segments from these video data revealed that the collection program detected all of the ambulatory movements in the mice and conversely that >95% of the activity identified by the program was accompanied by visible gross movement by the mouse (n = 5 mice, Video S1). Both temperature and motor activity data were transmitted at a rate of 50 Hz, using a sampling frequency (analog to digital) of 250 Hz. The EEG waveform was transmitted at 200 Hz and sampled at 1 kHz.
Characterization of cortical epileptiform discharge events
24-hour EEG traces were visually inspected to confirm and quantify the presence of discharge activity as described previously [20][21]. In brief, a discharge event was defined as having amplitudes of at least 1.5-fold background, durations of at least 0.4 seconds, and a frequency of between 6 and 10 Hz. Two genotype-blinded investigators independently assessed EEG activity, and the individual counts were averaged. The overall concordance between these individuals was 86.4%, and their differences were averaged for final analysis of discharge incidence rate and the times of discharge occurrence over the 24-hour cycle. Having confirmed the presence of discharge activity using established manual criteria [20][21], we then developed an automated method to characterize the duration and frequency components of the discharges. For this, a 6-10 Hz FIR band pass filter was applied to specifically isolate the frequency band associated with the discharges. The envelope of the filtered signal was produced by convolution of the square of the filtered data with a Gaussian kernel of 200-point aperture [22]. This envelope peaks whenever strong 6-10 Hz activity is present. As normal cortical EEG signals rarely display high-amplitude rhythmic spiking within this frequency range, the envelope peak reflects discharge events (Figure S1). To determine discharge durations, the left and right inflection points of detected events were used to find the start and end points respectively. The inflection points were computed by convolving the envelope with the derivative of the Gaussian kernel as above. The DataQuest A.R.T. program (DSI) was used to generate total spectral plots over the 24-hour period for individual mice. Time-frequency analysis was conducted using the continuous wavelet transform (CWT) found in the Matlab digital signal processing toolbox. The basis function used in the CWT analysis was the Morlet mother wavelet [23][24], which is commonly used in EEG analyses [23]. To minimize the issue of scaling, the analysis was divided into low frequencies (0.5-30 Hz) and high frequencies (30-80 Hz), with 0.5 Hz step size.
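The automated discharge-detection pipeline described above (6-10 Hz FIR band-pass filtering, squaring, smoothing with a 200-point Gaussian kernel, and event boundaries taken from the envelope) was implemented in Matlab. The Python/SciPy sketch below reproduces that pipeline in outline only; the sampling rate, filter length, and threshold handling are assumptions, and event boundaries are taken from simple threshold crossings of the envelope rather than the exact inflection-point rule used by the authors.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.signal.windows import gaussian

FS = 200.0            # EEG transmission rate given in the Methods (Hz); adjust as needed

def discharge_envelope(eeg, fs=FS, aperture=200):
    """Band-pass the trace to 6-10 Hz, square it, and smooth with a Gaussian kernel;
    peaks in the returned envelope flag putative discharge events."""
    taps = firwin(1001, [6.0, 10.0], pass_zero=False, fs=fs)   # FIR band-pass, order ~1000
    band = filtfilt(taps, [1.0], eeg)                          # zero-phase filtering
    kern = gaussian(aperture, std=aperture / 6.0)
    kern /= kern.sum()
    return np.convolve(band ** 2, kern, mode="same")

def discharge_spans(env, threshold):
    """Approximate start/end samples of events as threshold crossings of the envelope
    (a simplification of the inflection-point rule described above)."""
    above = env > threshold
    edges = np.flatnonzero(np.diff(above.astype(int))) + 1
    if above[0]:
        edges = edges[1:]          # drop a partial event at the start of the trace
    if len(edges) % 2:
        edges = edges[:-1]         # and at the end
    return edges.reshape(-1, 2)
```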
Recognition of periodic variations in EEG, gross motor activity, and core body temperature
EEG signals within the delta band (0.5-4 Hz) [25] were extracted by applying a series of steps. First, the data were preprocessed by removing segments indicative of movement artifacts (characterized by voltages higher than 0.5 V), together with the 0.25 second time period preceding and succeeding these events. Then, a FIR band pass filter with an order of 1000 was applied to isolate specifically the delta band. Delta power was obtained by squaring the delta band signal and then averaging these values over 30 second intervals so that the resulting value aligns with the movement activity and temperature signals (which were also recorded at 30 second sampling periods by the data acquisition system). The Pearson's product-moment correlations between delta power, motor activity, and core body temperature were conducted using smoothed versions of these raw 30 second interval data. The smoothing function employed was the 50-point Fast Fourier Transformation (FFT) in OriginPro 6.1 (OriginLab Corporation, Northampton, MA). To then discern the daily patterning of these three signals, each was normalized to have 0 mean and variance of 1, and a Gaussian-based kernel with aperture 50 was applied to all three signals, generating an envelope of the signals. A threshold of 0 was then applied to discretize the signals into two different states (Figure S2). The delta power parameter was discretized into delta and non-delta states, with a 'complete delta cycle' being defined as a state of delta followed by a state of non-delta, where each individual state has a duration of at least 15 minutes. Similarly, 'mobility cycles' and 'body temperature cycles' were defined as the combination of a consecutive active and inactive state, or consecutive high and low body temperature states, respectively.
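As an illustration of the delta-power and state-discretization steps just described, the sketch below computes 0.5-4 Hz power in 30-second bins and converts a smoothed, normalized series into a two-state trace. It is a minimal reimplementation under assumed parameters (sampling rate, Gaussian width, bin length); the original processing used a Matlab FIR filter, the 50-point FFT smoothing function of OriginPro, and an explicit minimum run length of 15 minutes, which is only noted in a comment here.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.ndimage import gaussian_filter1d

def delta_power_30s(eeg, fs=200.0, bin_s=30):
    """0.5-4 Hz delta power averaged over 30-second bins (artifact removal omitted)."""
    taps = firwin(1001, [0.5, 4.0], pass_zero=False, fs=fs)    # order-1000 FIR band-pass
    power = filtfilt(taps, [1.0], eeg) ** 2
    n_bin = int(fs * bin_s)
    n = len(power) // n_bin
    return power[: n * n_bin].reshape(n, n_bin).mean(axis=1)

def two_state(series, aperture=50, min_bins=30):
    """Z-score the 30-s series, smooth it with a Gaussian kernel, and threshold at 0
    to obtain a binary state trace (e.g. delta vs non-delta). Runs shorter than
    `min_bins` bins (~15 minutes) would additionally be merged into their neighbours."""
    z = (series - series.mean()) / series.std()
    smooth = gaussian_filter1d(z, sigma=aperture / 6.0)
    return smooth > 0
```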
Statistical analysis
Student's t-tests were used for direct comparisons between two groups. For comparisons between multiple groups, one-way ANOVA with Bonferroni post hoc correction for multiple comparisons was utilized. F-tests were used to compare the equality of two variances between groups. For comparing correlative strength, Pearson's product-moment correlation coefficient, a measure of linear dependency between two variables, was employed. Significance was set at p < 0.05. Mean and standard error of the mean are presented throughout the text and figures.
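The statistical tests named above have straightforward equivalents in SciPy; the minimal sketch below (an illustration, not the authors' code) covers the two-group comparisons, the F-test for equality of variances, and the correlation and ANOVA calls, with group and variable names as placeholders.

```python
import numpy as np
from scipy import stats

def two_group_tests(a, b):
    """Unpaired t-test plus a two-sided F-test for equality of variances."""
    t, p_t = stats.ttest_ind(a, b)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    p_f = 2 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))
    return {"t": t, "p_t": p_t, "F": f, "p_F": p_f}

# Pearson correlation between, e.g., smoothed delta power and mobility:
# r, p = stats.pearsonr(delta_power, mobility)
# One-way ANOVA across multiple groups (Bonferroni correction applied afterwards):
# F, p = stats.f_oneway(group1, group2, group3)
```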
Results
The general properties of cortical EEG activity are preserved in Mecp2 -/+ mice
Consistent with our previous observations [21], the general neocortical EEG signals in adult Mecp2 -/+ mice did not display overt differences from adult female wild-type mice. Waveforms with elevated amplitude and slow frequency (0.5-4 Hz, delta band) were evident during immobile and sleep-like behavior, while lower amplitude, higher frequency activity was seen during periods of movement or exploration. CWT time-frequency analyses [23] of these cortical EEG activities (excluding periods of EEG discharge activity, see below) revealed no qualitative differences in the frequency powers of wild-type and Mecp2 -/+ mice within the 0.5-80 Hz spectrum during either the active or inactive states of behavior (Figure 1A-D). Additionally, time-frequency analysis of Mecp2 -/+ and wild-type 24-hour EEG waveforms using a Fast Fourier Transformation revealed that the overall power spectrum distributions of the two groups were preserved (Figure 1E, F), and analysis of specific bands, e.g., the delta band (0.5-4 Hz), alpha band (8-12 Hz) and beta band (15-30 Hz), across the full 24-hour day also revealed no significant differences in overall power between the groups (Figure 1G).
Mecp2 -/+ mice display alterations in their daily pattern of cortical delta wave activity
The presence of delta slow wave cortical EEG waveforms is often used as an indicator of sleep in wild-type rodents [26]. Analysis of smoothed cortical delta power (as derived from the EEG waveforms, Figure S3A-D) in wild-type mice revealed clearly defined patterns of rhythmicity over the 24-hour period (Figure 2A). In contrast, the daily patterns of delta power in Mecp2 -/+ mice were more erratic in periodicity and duration (Figure 2B). Comparison of wild-type and Mecp2 -/+ mice revealed a significant decrease in the average number of delta cycles over a 24-hour period (12 hours light, 12 hours dark, Figure 2C). Mecp2 -/+ mice displayed an average of 7.6 ± 0.6 delta cycles compared to 11.3 ± 0.7 in controls (p = 0.001). This overall decrease was present to a similar degree in both phases of the 24-hour day (p < 0.01 for each, Figure 2C). Further, the average duration of the non-delta state of each cycle period was significantly longer in Mecp2 -/+ mice than wild-type mice (1.5 ± 0.3 hours versus 0.75 ± 0.1 hours, p < 0.05, Figure 2D). This increase in non-delta state duration was present during both the light and dark phases of the 24-hour day, and consistent with the preferential nocturnal behavior of mice, both Mecp2 -/+ and wild-type mice displayed greater non-delta time durations in the dark phase of the 24-hour day.
Mecp2 -/+ mice display alterations in daily cyclic mobility patterns
In contrast to the alterations in delta power periodic patterning, examination of smoothed mobility patterns (Figure S3E-H) failed to reveal differences in cycle number between wild-type (Figure 3A) and Mecp2 -/+ mice (Figure 3B) over the 24-hour day. Mecp2 -/+ mice displayed an average of 9.0 ± 0.8 total mobility cycles over the 24-hour day, while wild-type mice displayed an average of 9.0 ± 0.7 cycles (Figure 3C). However, although the number of mobility cycles was preserved, the distribution of time in the active phase versus the inactive phase of these cycles differed between Mecp2 -/+ and wild-type mice. Specifically, the average duration of the active state of a cycle was significantly decreased in Mecp2 -/+ mice over the 24-hour day (0.52 ± 0.03 hours versus 0.66 ± 0.04 hours, p < 0.05, Figure 3D), with the difference being predominant in the dark phase of the 24-hour cycle. Wild-type mice showed longer active state durations in the dark relative to the light phases (0.78 ± 0.06 hours versus 0.54 ± 0.02 hours in dark and light respectively, p < 0.005), while Mecp2 -/+ mice exhibited similar active state durations in both light and dark phases (0.52 ± 0.04 hours versus 0.50 ± 0.05 hours in dark and light respectively, Figure 3D).
Mecp2 -/+ mice possess altered home cage mobility profiles
The reduced active state duration in Mecp2 -/+ mice suggests that they spend more time in the awake-immobile state than the awake-active state relative to wild-type mice. Analysis of the raw movement profiles revealed a significant reduction in the total amount of mobility (as deduced by changes in strength of the telemetry signal at the receiver) between Mecp2 -/+ mice and wild-type mice (121 ± 7 versus 204 ± 20 mobility counts, respectively, p < 0.005). Further, the overall time spent by Mecp2 -/+ mice moving in their home cages over a 24-hour period was significantly reduced compared to age-matched wild-type mice (694 ± 31 versus 931 ± 43 segments containing mobility, p < 0.001, Figure 3E, F). In addition to total mobility differences, the average rate of movement by the Mecp2 -/+ mice was also diminished relative to wild-type (0.17 ± 0.01 versus 0.22 ± 0.01 mobility counts/sec, p < 0.05), and this effect was the most pronounced in the dark phase of the day (Figure 3G).
The inverse correlation of delta power and behavioral activity is disrupted in Mecp2 -/+ mice
In wild-type mice, there was a strong inverse correlation between mobility and cortical delta power (Figure 4A). This was not the case in Mecp2 -/+ mice (Figure 4B). The Pearson's product-moment correlation coefficient (a measure of linear dependency) revealed a strong inverse correlation (average r = -0.75) between delta power and movement in wild-type mice, consistent with delta power serving as a good predictor for sleep/immobility in wild-type mice [26]. However, the Pearson's correlation coefficient for delta power and movement in Mecp2 -/+ mice was significantly weaker (average r = -0.42), indicating that delta power is not a good predictor for immobile or sleep states in Mecp2 -/+ mice (Figure 4C). In fact, as shown in Figure 4B, instances of high delta power concomitant with mobility were frequently observed in Mecp2 -/+ mice.
Mecp2 -/+ mice display impaired body temperature patterning and regulation
The patterns of cyclic body temperature fluctuations (derived from smoothed raw data, Figure S3I-L) also revealed differences between wild-type (Figure 5A) and Mecp2 -/+ mice (Figure 5B). Mecp2 -/+ mice displayed fewer temperature cycles per day than wild-type mice (6.8 ± 0.5 versus 9.8 ± 0.7, Figure 5C), and an increase in the average duration of time spent in the high phase of their temperature cycle relative to wild-type mice (1.47 ± 0.13 versus 0.95 ± 0.09 hours, respectively, Figure 5D). In addition to having impaired periodic rhythmic patterns, the average daily minimal temperature and the average daily maximal temperature of Mecp2 -/+ mice were each significantly lower than wild-type (33.9 ± 0.7 °C versus 35.6 ± 0.2 °C respectively for the minimum, p < 0.05; and 37.9 ± 0.2 °C versus 38.5 ± 0.1 °C respectively for the maximum, p < 0.05). Consistently, the core body temperature range of Mecp2 -/+ mice had higher variance than that of wild-type mice over the 24-hour day (4.06 °C² versus 0.31 °C² range, respectively, p < 0.005, Figure 6A). Moreover, during periods of mobility and inactivity specifically, the temperature of Mecp2 -/+ mice was significantly lower than that of wild-type mice (36.6 ± 0.2 °C versus 37.4 ± 0.1 °C for mobile states and 35.8 ± 0.3 °C versus 36.8 ± 0.1 °C for inactive states, p < 0.005 and p < 0.05, Figure 6B, C), and the correlation coefficient between movement and body temperature in the Mecp2 -/+ mice was substantially weaker than that of the wild-type mice (0.76 versus 0.54, respectively, p = 0.001, Figure 6D). Collectively, these results indicate Mecp2 -/+ mice display an overall reduction in core body temperature throughout the day, and that their homeostatic regulation of body temperature is impaired.
Figure 1. The general EEG waveform properties of the Mecp2 -/+ mouse cortex are similar to wild-type. (Panels A-D: representative raw EEG traces and corresponding 0.5-30 Hz and 30-80 Hz wavelet spectra during mobility and inactivity; Panels E-G: average normalized power spectra and normalized delta, alpha, and beta band power for wild-type and Mecp2 -/+ mice; mean ± SEM, n = 11 Mecp2 -/+ and n = 8 wild-type mice; asterisks, p < 0.05, unpaired t-test.) doi:10.1371/journal.pone.0035396.g001
Mecp2 -/+ mice display spontaneous cortical epileptiform discharge activity
Raw EEG waveform data were examined from Mecp2 -/+ mice to determine the prevalence and distribution of epileptiform discharges throughout the 24-hour period. For these assessments, a discharge event was defined as a high amplitude rhythmic waveform lasting at least 0.4 seconds with a frequency between 6 and 10 Hz (Figure 7A). No discharge activity was detected in any of the wild-type mice examined (n = 8). Cortical EEG discharges were observed in 8 of 11 Mecp2 -/+ mice. In these mutants, the average number of cortical epileptiform discharges per hour over a 24-hour period was 10.7 ± 1.6 (Figure 7B). The average duration of the discharge events was 0.76 ± 0.01 seconds, and the average frequency of the discharges was 8.6 ± 0.02 Hz (Figure 7C, D). While spontaneous convulsions were not observed in any of the Mecp2 -/+ mice, cortical discharge activity was associated with behavioral freezing, which often lasted longer than the duration of the discharge (data not shown). Analysis of discharge activity during the light and dark phases of the day failed to reveal any significant differences: the incidence rate of the discharges, their average duration, and their average frequency did not significantly differ during the light or dark phases.
Figure 3 (legend). Histograms of mobility cycle number, active-state duration per cycle, and home-cage activity parameters (total mobility, number of 30-second segments containing mobility, and average rate of movement) for Mecp2 -/+ and wild-type mice over the full 24 hours and for the light and dark phases; mean ± SEM, n = 11 Mecp2 -/+ and n = 8 wild-type mice; asterisks, p < 0.05, one-way ANOVA with Bonferroni post hoc correction. doi:10.1371/journal.pone.0035396.g003
Cortical epileptiform discharge activity predominates during the active state of a mobility cycle
To assess whether cortical discharge activity occurred randomly throughout the day, or was preferentially seen during certain behavioral states, we compared discharge activity across active and inactive states, and in periods of high and low core body temperature. These assessments revealed that significantly higher discharge activity was found in Mecp2 -/+ mice during the active phase of their behavioral mobility cycle during the entire day (22.0 ± 4.0 versus 5.4 ± 1.0 discharges, p < 0.005, Figure 8A, B), and during the light and dark phases of the day specifically. In contrast, no significant association between core body temperature and discharge activity was seen. For this, we compared discharge activity in mice during times when their core body temperature was within the top or bottom 25% range of the full 24-hour day. No significant differences in discharge rate were observed between these periods of high and low core body temperature either during the entire day (13.2 ± 3.5 versus 11.2 ± 3.0 discharges per hour, respectively, p = 0.63, Figure 8C, D), or during the light or dark phases specifically.
Discussion
In this study, we examined the daily periodic cortical EEG waveform activity, body temperature, and movement activity parameters of Mecp2 2/+ mice in their home-cage setting. Five principal observations emerge from our work. First, the normal daily pattern of cyclic EEG delta wave activity is altered in Mecp2 2/+ mice. These mutants display a decreased number of daily delta cycles, and spend longer periods than normal in a low delta power state. Second, Mecp2 2/+ mice display significantly less movement in their home-cage environment, particularly during the nocturnal phase, and display significantly more time in an awake-but-inactive state. Third, the daily minimum, maximum, and overall average temperature of Mecp2 2/+ mice is lower than that of wild-type mice. Fourth, Mecp2 2/+ mice display spontaneous cortical epileptiform discharges, and this discharge activity is most pronounced when the mouse is in an active behavioral state. Fifth, the daily rhythmic and correlative patterns of delta power, movement activity, and body temperature are significantly altered in Mecp2 2/+ mice. Collectively, these investigations identify novel behavioral deficits associated with MeCP2 deficiency, and provide a new investigative procedure that can be employed for translational studies.
Although there is clear evidence for disrupted sleep-wake cycles in Rett syndrome patients [27][28], there have been few assessments of whether normal biological patterning is altered in MeCP2-deficient mice. Our data show that Mecp2 2/+ mice display significantly disrupted daily behavioral patterns compared to age and gender-matched wild-type mice. Specifically, Mecp2 2/+ mice display reduced numbers of normal cortical delta activity and body temperature cycles over a 24-hour period. High delta power has been used as an index for determining sleep and awake times in wild-type animals [26] [29]. Consistent with this, we found a strong correlation between periods of high delta power and periods of low activity in wild-type mice. Intriguingly, though, this correlation was not observed in Mecp2 2/+ mice, where high delta power was often observed during periods of high activity. This suggests that the normal homeostatic balance of neural circuits is disrupted in the Mecp2 2/+ brain. However, we cannot exclude the possibility that a movement artifact caused by the slow ambulatory patterns of MeCP2 2/+ mice may have contributed to this signal. Irrespective of origin, the clear difference in delta power and activity correlational strength between wild-type and MeCP2 2/+ mice illustrates a phenotypic difference that arises from the MeCP2 deficiency.
The observation of disrupted daily rhythmic patterning in Mecp2 -/+ mice is consistent with recent studies that found Mecp2 mRNA to be a direct target of the microRNA miR-132 [30][31]. miR-132 expression is robustly induced within neurons of the suprachiasmatic nucleus (SCN) by light stimulation [32], and miR-132 negatively regulates MeCP2 protein levels in these neurons. This regulation of MeCP2 expression is one component of the system regulating the expression of Period genes and thus contributes to clock entrainment [30]. Given the strong evidence that disruption of clock gene regulation in the SCN is sufficient to alter cortical delta periodicity and power [33], the altered delta patterns we observe in Mecp2 -/+ mice are in line with MeCP2 playing a significant role in circadian regulation, as suggested by Alvarez-Saavedra et al. [30]. Based on these results and our findings, it would be of interest to further explore whether alterations of cortical delta activity patterns occur in Rett syndrome patients and/or in patients with other MeCP2-related neural disorders.
Figure 6. Core body temperature regulation is altered in Mecp2 -/+ mice. (Panel A: range of core body temperature per mouse, # p < 0.05 by F-test for equality of two variances; Panels B and C: mean ± SEM active and inactive body temperature; Panel D: Pearson correlation coefficients for mobility and temperature per mouse; asterisks, p < 0.05, unpaired t-test; n = 11 Mecp2 -/+ and n = 7 wild-type mice.) doi:10.1371/journal.pone.0035396.g006
In addition to disrupted rhythmic behavioral patterning, Mecp2 2/+ mice displayed diminished overall movement in their home-cage setting. Consistent with previous results from Mecp2 308/y male mice [17] [34], the activity of female Mecp2 2/+ mice was reduced similarly during the light and dark phases of the diurnal cycle. Analysis of the home-cage body temperature also revealed alterations in daily temperature cycling patterns in Mecp2 2/+ mice. Mecp2 2/+ mice showed significant decreases in both their peak minimum and maximum body temperature over the day, and collectively showed an overall decrease in their average body temperature -both throughout the day, and also during periods of activity and inactivity specifically. These observations confirm and extend from those of a recent report in which the basal body temperature of male MeCP2 2/y mice was found to be reduced compared to wild-type mice [35]. In addition to showing a decrease in average daily temperatures, though, our results also show that the range of normal body temperature fluctuation over the day is significantly greater in Mecp2 2/+ mice, and that Mecp2 2/+ mice have a poorer ability than wild-type mice to regulate body temperature. Collectively, these results are consistent with impaired autonomic nervous system function, which is a cardinal phenotype of clinical Rett syndrome.
In agreement with our previous acute study [21], we observed the presence of abnormal epileptiform-like discharges in the somatosensory cortex of Mecp2 -/+ mice. Our examination of the distribution of discharges throughout the 24-hour diurnal cycle revealed no differences in discharge incidence between the light and dark phases of the day. However, the incidence rate for discharge activity did correlate with times when the mutant mice were in specific behavioral states. Significantly more discharge activity was observed in the mutants during times of activity and/or movement compared to times of immobility. Perhaps surprisingly, however, no differences in discharge rate were seen in the mice when their body temperature was in the upper or lower 25% of their daily range. This result was somewhat unexpected, as lower temperature tends to slow metabolic processes and has been linked to an attenuation of seizure rates [36]. The most likely explanation for this is that, although lower, the decreased core body temperature is not sufficient to have a major effect on neural activity, and thus the hyper-excitability of the MeCP2-deficient circuits is not diminished.
Figure 7. Properties of epileptiform discharges in Mecp2 -/+ mice. (Panel A: representative raw EEG segment containing a typical discharge and its 0.5-30 Hz and 30-80 Hz wavelet spectra; Panels B-D: mean ± SEM discharge rate per hour, discharge duration, and discharge frequency, shown for the full 24 hours and for the light and dark phases; no significant light/dark differences, paired t-test, n = 8 Mecp2 -/+ mice.) doi:10.1371/journal.pone.0035396.g007
In summary, in this study we conducted the first concurrent examination of 24-hour cortical EEG waveforms, movement activity, and body temperature profiles in Mecp2^-/+ mice in their home-cage environment. Our results indicate that, in addition to attenuating home-cage movement activity, MeCP2 deficiency is sufficient to alter the normal daily cyclic patterns of cortical delta wave activity and body temperature. Further, we characterize the average incidence, frequency, and duration of epileptiform discharges in Mecp2^-/+ mice over a 24-hour period, and show that there is a relationship between their behavioral state and the prevalence of cortical discharge activity.
Figure 8. Epileptiform discharge activity in Mecp2^-/+ mice differs between behavioral states. Panels A and B: Incidence rate of cortical discharge activity during either the mobile or inactive behavioral states. The histogram (A) shows the mean ± SEM of the discharge rate per hour, normalized to the time spent in each behavioral state as above. Panel B shows a representative plot of epileptiform discharge distribution over the light (i) and dark (ii) phases of a 24-hour day. Red spikes represent individual discharge events and the shaded regions denote times in which mobility was present. Panels C and D: Incidence rate of cortical discharge activity when core body temperature for the Mecp2^-/+ mice was within the top 25% (high) or the lowest 25% (low) of the mean value for their 24-hour cycle. The histogram (C) shows the mean ± SEM of the discharge rate per hour, normalized to the time spent in each temperature category as above. Panel D shows a representative plot in the same format as Panel B, except that dark shading reflects times when temperature was in the upper 25% and light shading reflects times when temperature was in the lower 25% of the daily range. Times spent in the intermediate temperature range are not shaded. Asterisks denote statistical significance (p < 0.05) as determined using a Student's paired t-test, for n = 8 Mecp2^-/+ mice. doi:10.1371/journal.pone.0035396.g008
Supporting Information
Video S1. Synchronized video recording and activity output of a Mecp2^-/+ mouse. This video shows a 1-minute segment of a Mecp2^-/+ mouse in its home-cage environment. Shown below the video is the activity output plot generated by the DSI analysis program, synchronized to the video recording. Note the concordance of the ambulation activity of the mouse with the peaks depicted in the plot. (MOV)
Figure S1. Automated detection of epileptiform discharges. Panel A: Raw 10-second EEG waveform segment collected from a representative Mecp2^-/+ mouse displaying 2 epileptiform discharges as determined and confirmed by visual inspection (red lines represent the start and end of the respective discharge event). Panel B: Resulting envelope of the EEG waveform in Panel A after band-pass filtering the signal through a 6-10 Hz FIR filter and then convolving the square of this filtered data with a Gaussian kernel of 200-point aperture (red lines represent the start and end of the respective discharge events; the green line represents the envelope of the black 6-10 Hz FIR band-pass filtered signal). Panel C: Resulting derivative of the convolved envelope signal presented in Panel B, used to determine the start and end of the discharge event. The red lines denote the left and right inflection points used to determine the start and end of the discharges, respectively. (DOC)
Figure S2. Recognition of periodic variations in EEG, gross motor activity, and core body temperature. Panels A and B: Representative traces of cortical delta power patterning over the light (i) and dark (ii) phases of a 24-hour day in a wild-type (A) and a Mecp2^-/+ (B) mouse. Shaded regions denote areas classified as high delta states and non-shaded regions denote areas classified as low (non) delta states. Panels C and D: Representative traces of mobility patterning over a 24-hour day in a wild-type (C) and a Mecp2^-/+ (D) mouse. Shaded regions denote areas classified as mobile behavioral states, whereas non-shaded regions denote areas classified as inactive behavioral states. Panels E and F: Representative traces of core body temperature patterning over a 24-hour day in a wild-type (E) and a Mecp2^-/+ (F) mouse. Shaded regions denote areas where body temperature was above the daily mean value, whereas non-shaded regions denote areas where body temperature was below the mean. (DOC)
Figure S3. Illustration of smoothed data generated from raw delta power, mobility, and body temperature traces. Panels A-D: Representative traces of raw cortical delta power (grey line) and the resulting smoothed data (black line), generated using the 50-point Fast Fourier Transformation (FFT) smoothing function in OriginPro 6.1 (OriginLab Corporation, Northampton, MA), for a wild-type (A and B) and a Mecp2^-/+ (C and D) mouse during the day (A and C) and night (B and D) phases of the 24-hour day. Panels E-H: Representative traces of raw mobility (grey line) and the resulting smoothed data (black line), generated as above, for a wild-type (E and F) and a Mecp2^-/+ (G and H) mouse during the day (E and G) and night (F and H) phases of a 24-hour day. Panels I-L: Representative traces of raw core body temperature (grey line) and the resulting smoothed data (black line), generated as above, for a wild-type (I and J) and a Mecp2^-/+ (K and L) mouse during the day (I and K) and night (J and L) phases of a 24-hour day. (DOC)
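A minimal numerical sketch of the envelope procedure described for Figure S1 (6-10 Hz FIR band-pass filtering, squaring, and convolution with a 200-point Gaussian kernel) is given below; the sampling rate, filter order, and kernel width are illustrative assumptions rather than the values used in the study.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.signal.windows import gaussian

def discharge_envelope(eeg, fs=500.0, numtaps=401, kernel_pts=200):
    """Smoothed 6-10 Hz power envelope of a raw EEG trace (1-D array).

    fs, numtaps and kernel_pts are placeholder values, not those of the study.
    """
    # zero-phase FIR band-pass in the 6-10 Hz band
    bp = firwin(numtaps, [6.0, 10.0], pass_zero=False, fs=fs)
    filtered = filtfilt(bp, [1.0], eeg)
    # square the filtered signal and smooth it with a normalized Gaussian kernel
    kernel = gaussian(kernel_pts, std=kernel_pts / 6.0)
    kernel /= kernel.sum()
    return np.convolve(filtered**2, kernel, mode="same")
```

Discharge start and end points would then be read off from the inflection points of this envelope, as described for Panel C of Figure S1.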
"year": 2012,
"sha1": "c88a7d5826630168b6518020be9142a19eac046a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0035396&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28a678e64028e6f329caa4bcadd4525c223d5731",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Treating quarks within neutron stars
Neutron star interiors provide the opportunity to probe properties of cold dense matter in the QCD phase diagram. Utilizing models of dense matter in accord with nuclear systematics at nuclear densities, we investigate the compatibility of deconfined quark cores with current observational constraints on the maximum mass and tidal deformability of neutron stars. We explore various methods of implementing the hadron-to-quark phase transition, specifically, first-order transitions with sharp (Maxwell construction) and soft (Gibbs construction) interfaces, and smooth crossover transitions. We find that within the models we apply, hadronic matter has to be stiff for a first-order phase transition and soft for a crossover transition. In both scenarios and for the equations of state we employed, quarks appear at the center of pre-merger neutron stars in the mass range $\approx 1.0-1.6\,{\rm M}_{\odot}$, with a squared speed of sound $c^2_{\rm QM}\gtrsim 0.4$ characteristic of strong repulsive interactions required to support the recently discovered neutron star masses $\geq 2\,{\rm M}_{\odot}$. We also identify equations of state and phase transition scenarios that are consistent with the bounds placed on tidal deformations of neutron stars in the recent binary merger event GW170817. We emphasize that distinguishing hybrid stars with quark cores from normal hadronic stars is very difficult from the knowledge of masses and radii alone, unless drastic sharp transitions induce distinctive disconnected hybrid branches in the mass-radius relation.
I. INTRODUCTION
The observation that the dense matter inside neutron stars might consist of weakly interacting quark matter owing to the asymptotic freedom of Quantum Chromodynamics (QCD) was first made by Collins and Perry [1]. Since then, numerous explorative studies have been conducted to isolate neutron star observables that can establish the presence of quarks deconfined from hadrons. Starting from the QCD Lagrangian, lattice gauge simulations at finite temperature T and net baryon number n B = 0 naturally realize hadronic and quark degrees of freedom in a smooth crossover transition. However, lattice simulations for finite n B at T = 0, of relevance to neutron stars, have been thwarted due to the unsolved fermion sign problem and untenable imaginary probabilities. As a result, the possible phases of dense matter at T = 0 have been generally explored by constructing equation of state (EoS) models of hadrons and quarks that are independent of each other although a few exceptions do exist.
Extensive studies of nucleonic matter in neutron stars for n_B ≲ 0.5 n_0, where n_0 ≈ 0.16 fm^-3 is the isospin symmetric nuclear matter equilibrium density, have predicted the presence of a solid crust. Observations of the surface temperatures of accreting neutron stars in their quiescent periods have indeed confirmed the presence of a crust (see Ref. [2] and references therein). This region is characterized by a Coulomb lattice of neutron-rich nuclei surrounded by dripped neutrons, with admixtures of light nuclei and a uniform background of electrons in chemical potential and pressure equilibrium in a charge-neutral state. Differences among the available crust equations of state [3][4][5][6] are small and are of minor importance to the structure of stars more massive than 1 M_⊙. In this work, we use the EoSs of Ref. [4] (for 0.001 < n_B < 0.08 fm^-3) and Ref. [3] (for n_B < 0.001 fm^-3) to determine the structural properties of the star.
Models of the hadronic EoS for n B > 0.08 fm −3 can be grouped into three broad categories: non-relativistic potential models, Dirac-Brueckner-Hartree-Fock models, and relativistic field-theoretical models. Microscopic many-body calculations in the first two of these categories (e.g., Brueckner-Hartree-Fock, variational, Greens' function Monte Carlo, chiral effective field theory, as well as Dirac-Brueckner-Hartree-Fock) employ free-space two-nucleon interactions supplemented by three-nucleon interactions required to describe the properties of light nuclei as input. In contrast, coupling strengths of the two-and higher-body nucleon interactions mediated by meson exchanges are calibrated at n 0 in the relativistic field-theoretical models. Several schematic potential models based on zero-and finite-range forces also exist that take recourse in the Hohenberg-Kohn-Sham theorem [7,8] which assures that the ground state energy of a many-body system can be expressed in terms of local densities alone. Refinements in all of these approaches are guided by laboratory data on the bulk properties of isospin symmetric and asymmetric matter, such as the binding energy BE = −16 ± 1 MeV [9,10] at the saturation density n 0 = 0.16 ± 0.01 fm −3 [9][10][11], compression modulus K nm = 240 ± 20 MeV [12][13][14], nucleon's Landau effective mass m * /M = 0.75 ± 0.1 [15][16][17], symmetry en-ergy S 2 = 28 − 35 MeV [18,19] and the symmetry energy slope parameter L = 60 ± 20 MeV [18,19] at saturation, etc. Low-to-intermediate energy (0.5-2 GeV) heavy-ion collisions have been used to determine the EoS for densities up to 2-3 n 0 through studies of matter, momentum, and energy flow of nucleons [20][21][22][23][24][25]. The consensus has been that as long as momentum-dependent forces are employed in models that use Boltzmann-type kinetic equations, use of K nm ∼ 240 ± 20 MeV, suggested by the analysis of the giant monopole resonance data [26][27][28], fits the heavy-ion data as well [25].
The lack of Lorentz invariance in non-relativistic models leads to an acausal behavior at some high density particularly if contributions from three-and higher-body interactions to the energy are not screened in medium [29,30]. The general practice has been to enforce causality from thermodynamic considerations [31,32]. In some cases, the reliability of non-relativistic models is severely restricted, some times only up to 2 n 0 as in the case of chiral effective field-theoretical (EFT) models owing to the perturbative scheme and the momentum cut-off procedure employed there [33,34].
To explore consequences of the many predictions of these models at supra-nuclear densities, piecewise polytropic EoSs that are causal have also been extensively used to map out the range of pressure vs density relations (EoSs) that are consistent with neutron star phenomenology [35][36][37][38]. The viability of these EoSs at supra-nuclear densities necessarily depends on the growing neutron star data to be detailed below.
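As an illustration of the piecewise-polytropic parametrizations referred to above, the following sketch assembles a pressure-density relation from segment-wise power laws with pressure continuity enforced at the segment edges; the dividing densities and adiabatic indices below are placeholders rather than values taken from the cited literature.

```python
import numpy as np

def piecewise_polytrope(n, bounds, gammas, P_match):
    """Pressure vs baryon density for a piecewise polytrope P = K_i * n**Gamma_i.

    bounds  : segment edges [n_1, ..., n_k] in fm^-3 (ascending); segment i
              covers bounds[i] <= n < bounds[i+1]
    gammas  : adiabatic index for each of the k-1 segments
    P_match : pressure (MeV/fm^3) at the first edge bounds[0]
    The constants K_i follow from continuity of P at each interior edge.
    """
    Ks = [P_match / bounds[0] ** gammas[0]]
    for i in range(1, len(gammas)):
        P_edge = Ks[-1] * bounds[i] ** gammas[i - 1]
        Ks.append(P_edge / bounds[i] ** gammas[i])
    n = np.atleast_1d(n).astype(float)
    P = np.empty_like(n)
    for j, nj in enumerate(n):
        i = min(max(np.searchsorted(bounds, nj, side="right") - 1, 0), len(gammas) - 1)
        P[j] = Ks[i] * nj ** gammas[i]
    return P

# illustrative segments around and above n0 = 0.16 fm^-3 (placeholder values)
bounds = [0.08, 0.16, 0.32, 0.64]   # fm^-3
gammas = [2.0, 2.8, 2.4]            # one Gamma per segment
print(piecewise_polytrope([0.16, 0.48, 0.80], bounds, gammas, P_match=0.5))
```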
The possibility of non-nucleonic degrees of freedom such as strangeness-bearing hyperons, pion and kaon condensates, and deconfined quarks above n 0 has also been examined in many of these models [35,39,40]. At some n B (2 − 4) n 0 , the presence of quark degrees of freedom has been invoked on the physical basis that the constituents of hadrons could be liberated as the compression in density progressively increases. First-principle calculations [41][42][43][44][45][46][47][48] of the EoS of quark matter have thus far been limited to the perturbative region of QCD valid at asymptotically high baryon densities. The Nambu-Jona-Lasinio (NJL) model [49], which shares many symmetries with QCD -but not confinement -has been used to mimic chiral restoration in quark matter [50][51][52]. Also in common use are variations [53,54] of the MIT bag model [41].
Lacking knowledge about the nature of the phase transition, it has been common to posit a first-order phase transition in many recent studies [54][55][56][57][58]. Even in this case, the magnitude of the hadron-quark interface tension is uncertain [59][60][61][62]. If the interface tension is regarded as being infinite, a Maxwell construction can be employed to determine the range of density for which chemical potential and pressure equality between the hadronic and quark phases exists [63]. The other extreme case corresponds to a vanishing interface tension when a Gibbs construction is considered more appropriate. The Gibbs construction also corresponds to global charge neutrality instead of local charge neutrality, appropriate for matter with two conserved charges (baryon number and charge) [64].
Depending on the models used to calculate the EoSs of the hadron and quark phases, chemical potential and pressure equilibrium between the two phases may not be realized [65]. In such cases, several interpolatory procedures have been used to connect the two phases on the premise that at n B >> n 0 , a purely hadronic phase is physically unjustifiable [65][66][67][68]. As a result, the hadronquark transition becomes one of a smooth crossover with the proportion of each phase depending on the specific interpolation procedure used. This is in contrast to the Gibbs construction (which also renders the transition into a mixed phase to be smooth) in which the fraction of each phase is determined self-consistently.
Although differing in details, other examples of a smooth crossover transition are the chiral model of Ref. [69] and the quarkyonic model of Ref. [70]. A quark phase with additional hadronic admixtures such as hyperons and Bose condensates has also been explored [71]. The precise manner in which the hadron-quark transition is treated influences the magnitudes of the mass and radius of the star. In addition, the behavior of the speed of sound with density affects the magnitude of tidal deformations. It is worth mentioning however, that stars with purely hadronic matter (HM) can sometimes masquerade as stars with quark matter (QM) [72].
The objectives of this work are to seek answers to probing questions such as (a) What is the minimum neutron star (NS) mass consistent with the observational lower limit on the maximum mass (M max ) that is likely to contain quarks? (b) What is the minimum physically reasonable density at which a hadron-quark transition of any sort can occur? (c) Which astronomical observations have the best potential to attest to the presence of quarks?
Toward providing answers to the above questions, we have undertaken a detailed study of the hadron-to-quark matter transition in neutron stars. Our focus is to study the sensitivity of outcomes on neutron star structure, principally mass-radius relations, in the different treatments of the phase transition. Results so obtained are then subjected to the constraints provided by precise measurements of heavy neutron stars [73][74][75], bounds on the tidal deformability of neutron stars in the binary merger event GW170817 [76][77][78][79], and radius estimates of 1.4 M available from x-ray observations of neutron stars [35,36,80].
Earlier studies in this regard have generally chosen one favored EoS in the hadronic sector and one approach to the quark matter EoS [53,54,58,[81][82][83]. Contrasts between the Maxwell and Gibbs constructions have also been made in some of these works, but with the result that R 1.4 are typically larger than 14 km or more (characteristic of the use of mean-field theoretical (MFT) models) which is at odds with most of the available estimates. This work differs in that variations in the EoSs of both the hadronic and quark sectors are considered as well as a global view of the outcomes of different treatments of the transition is taken. By including terms involving scalarvector and scalar-isovector interactions in MFT models, we show that values of R 1.4 more in consonance with data can be achieved. Additionally, we present an extension of the quarkyonic matter model of Ref. [70] to isospin asymmetric matter with the inclusion of interactions between quarks (not considered there) to enable calculations of beta-equilibrated neutron stars. This extension will be useful in applications involving compositional and thermal gradients in quarkyonic stars, such as their long-term cooling as well as quiescent cooling following accretion on them from a companion star and in investigating f -, p-and g-mode oscillations. Our in-depth study of the thermodynamics of quarkyonic matter sheds additional physical insight into the role that the nucleon shell plays in stiffening the EoS.
Our findings in this work reveal that several aspects of neutron star properties deduced from observations may have to be brought to bear in finding answers to the questions posed above. These properties include the masses M, radii R, periods P and their time derivatives Ṗ and P̈, surface temperatures T_s of isolated neutron stars and of those that undergo periodic accretion from companions, tidal deformations Λ from the detection of gravitational waves during the inspiraling phase of neutron star mergers, etc. Currently, the accurately measured neutron star masses around and above 2 M_⊙ [73][74][75] pose stringent restrictions on the EoS. Even so, the EoS would be better restricted with knowledge of the radii of stars for which the masses are also known, although this would not reveal the constituents of dense matter, as the structure equations depend only on the pressure vs density relation ε(P), and not on how it was obtained. In contrast, the surface temperatures of both isolated neutron stars and of accreting neutron stars cooling in quiescence are sensitive to the composition, but simultaneous knowledge of their masses and radii is not yet available. The anomalous behavior of the braking indices n = Ω Ω̈/Ω̇², where Ω = 2π/P is the spin rate, of several known pulsars [84][85][86] can also be put to good use in this connection.
The organization of this paper is as follows. In Sec. II, we present the models in the hadronic and quark sectors chosen for our study. The rationale for our choice and basic features of these models are highlighted here for orientation. We stress that our choices are representative, but not exhaustive. Results of neutron star properties for different treatments of the hadron-quark transition introduced in Sec. III are shown and discussed in Sec. IV. Our conclusions and outlook are contained in Sec. V. Appendix A contains details about the thermodynamics of nucleons in the shell of quarkyonic matter.
We use units in which ℏ = c = 1.
Nucleonic EoSs
To explore sensitivity to the hadronic part of the EoS, we use representative examples from both potential and relativistic mean field-theoretical (RMFT) models. In the former category, the EoS of Akmal, Pandharipande and Ravenhall (APR) [87], which is a parametrization of the microscopic variational calculations of Akmal and Pandharipande [88], is chosen as its energy vs baryon density up to 2 n_0 closely matches those of modern EFT calculations of pure neutron matter and symmetric nuclear matter [33,34]. Moreover, it is compatible with current nuclear phenomenology from both structure (equilibrium density and energy, compression modulus, symmetry energy and its slope, etc.) and heavy-ion experiments [25], as well as with the latest constraints from astrophysical observations (largest known NS mass, upper limit on the maximum NS mass, tidal deformability, NS radii, etc.). Explicit expressions for the energy density ε, pressure P, compression modulus K_0, Landau effective mass m*/M, symmetry energy S_2, and the symmetry energy slope parameter L, along with the coupling strengths of the various terms therein, can be found in Ref. [89]. Recent fits of the APR calculations to the traditional Skyrme energy-density functional (EDF) can be found in Refs. [90,91]. The latter also details the calculation of a complete tabular EoS based on the original APR parametric form.
To provide contrast, we have constructed three EoSs, MS-A, MS-B and MS-C using the RMFT model of Müller and Serot [92] employing terms that contain scalarisovector and vector-isovector mixings as in Refs. [93,94]. That is, we have devised three new parametrizations for the coupling constants appearing in the MS Lagrangian such that consistency with contemporary experimental and observational data is achieved. Many other EoSs based on the MS model are currently in use; for an exhaustive list, see Ref. [95]. Explicitly, the Lagrangian density for this model is with Expressions for the energy per particle ε/n, P , K 0 , the Dirac effective mass M * and hence the sigma field σ 0 = (M − M * )/g σ in the mean-field approximation can be found in Ref. [90]. With input values of these quantities at n 0 , the coupling strengths g σ , g ω , κ and λ are straightforwardly determined by numerically solving the system of nonlinear equations containing these quantities. The strengths ζ and ξ, Λ σ and Λ ω of the quartic ω and ρ fields, remain as adjustable input parameters to control the high-density behavior. The densitydependent symmetry energy in this model is [94] The first term on the right-hand side above contains effects of interaction through σ-meson exchange, whereas the second term includes those from the ρ-meson exchange along with ρ-σ and ρ-ω mixing. The corresponding slope parameter at n 0 becomes Analogous expressions but without the term involving Λ σ can be found in Ref. [96]. The strength g ρ may be fixed with a prescribed value of S 2 at n 0 , which leaves one or a combination of Λ σ and Λ ω to obtain a desired value of L. The values of the various couplings used in this work are listed in Table I. As noted in Refs. [90,94], the quartic and scalarisovector and vector-isovector terms in Eq. (1) enable acceptable values [18] of the symmetry energy slope parameter L at n 0 to be obtained. The reduction in L from its generally large value found for RMFT models is made possible by the second term in L d of Eq. (4), the term in braces being positive definite. These densitydependent terms also influence the high-density behavior of these EoSs, leaving the near-nuclear-density behavior intact. Salient properties at n 0 for these nucleonic models are presented in Table II. The values of L in Table II are to be compared with those of the FSU models [96,97] in the literature; see e.g. Fig. 2 and Table IV in Ref. [96]: L = 60.5 MeV for FSU (but it does not achieve 2 M ) and L = 112.8 ± 16.1 MeV for FSU2 with M max = 2.07 ± 0.02 M , R max = 12.2 km and R 1.4 = 14.42 ± 0.26 km. In comparison to FSU2, the values of L for the MS models of this work are significantly smaller, which result in smaller radii for the maximum mass and 1.4 M neutron stars (see Table III below).
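For any of these parametrizations, the slope parameter quoted in Table II can be cross-checked numerically from the standard definition L = 3 n_0 (dS_2/dn)|_{n_0}; a minimal sketch, assuming S_2(n) is available as a callable (the power-law form used here is only a placeholder, not one of the models of this work), is:

```python
import numpy as np

def slope_parameter_L(S2_of_n, n0=0.16, h=1e-3):
    """L = 3 n0 dS2/dn evaluated at n0 by a central finite difference (MeV)."""
    dS2_dn = (S2_of_n(n0 + h) - S2_of_n(n0 - h)) / (2.0 * h)
    return 3.0 * n0 * dS2_dn

# placeholder symmetry energy: S2(n) = 32 MeV * (n/n0)^0.7 (illustrative only)
S2 = lambda n: 32.0 * (n / 0.16) ** 0.7
print(slope_parameter_L(S2))   # ~ 3 * 0.7 * 32 = 67.2 MeV
```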
Properties of nucleonic neutron stars
Structural properties of charge-neutral and betaequilibrated neutron stars resulting from the chosen EoSs are listed in Table III. Two of the three MS EoSs satisfy the requirement of supporting a star with mass ≥ 2 M . The EoS of MS-C does not obey the 2 M constraint, but we have retained it in our analysis because, in conjunction with crossover transitions involving quark matter, masses well in excess of this observational limit can be obtained (see Secs. III and IV). Although the RMFT models employ terms that contain scalar-isovector and vectorisovector mixings as in Refs. [93,94] to yield acceptable values of the symmetry energy slope parameter L at n 0 , the radii of neutron stars stemming from these models are somewhat larger than that of the APR model, but lie within the range of those extracted from data [18]. The largest differences between the APR and RMFT models are in the central pressures of the maximum-mass stars. The proton fractions, y c,1.4 and y c,max , are such that stars close to the maximum-mass stars allow the direct Urca processes with electrons and muons to occur [98]. An examination of L in Table II and R 1.4 and R max in Table III would seem to imply an anti-correlation between these quantities for the MS models. That is, smaller values of L appear to lead to larger values of R 1.4 and R max , which is a trend opposite to that observed for many EoS models. The reason for this reversal becomes clear when L's corresponding to different m * 's within the same model are compared, see Fig. 1 (b) and Table IV. In other words, the standard L−R correlation holds within a class of MS models with the same effective mass, whereas there exists an anti-correlation between m * − R which, if not taken into account explicitly, manifests itself as a turnabout in L − R. Similar trends of correlation with L and anti-correlation with m * are also seen in Ref. [99] which used the MS Lagrangian but without the term involving Λ σ in Eq. (2) as in Ref. [96]. Fig. 7 of Ref. [99] suggests that, when both m * and L are varied, L and R can appear correlated, anti-correlated, or uncorrelated. The latter two possibilities are due to the competing effects of m * and L on neutron star radii. We have verified that nonrelativistic potential models also yield similar trends, which are not shown here for brevity.
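The masses and radii collected in Table III follow from integrating the Tolman-Oppenheimer-Volkoff equations with the tabulated EoS as input; a minimal sketch, assuming an externally supplied interpolant eps_of_P and working in geometrized units (G = c = 1, lengths in km), is given below. The step size and surface cutoff are illustrative.

```python
import numpy as np

def tov_mass_radius(eps_of_P, P_c, dr=1e-3, P_surface=1e-12):
    """Integrate dP/dr and dm/dr outward from the center for central pressure P_c.

    eps_of_P : callable giving energy density eps(P); eps and P in km^-2
    Returns (R [km], M [km]); divide M by ~1.4766 km to obtain solar masses.
    """
    def rhs(r, P, m):
        eps = eps_of_P(max(P, P_surface))   # guard against slight overshoot
        dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        return dPdr, dmdr

    r, P = dr, P_c
    m = 4.0 / 3.0 * np.pi * dr**3 * eps_of_P(P_c)
    while P > P_surface:
        # classical 4th-order Runge-Kutta step
        k1P, k1m = rhs(r, P, m)
        k2P, k2m = rhs(r + dr / 2, P + dr / 2 * k1P, m + dr / 2 * k1m)
        k3P, k3m = rhs(r + dr / 2, P + dr / 2 * k2P, m + dr / 2 * k2m)
        k4P, k4m = rhs(r + dr, P + dr * k3P, m + dr * k3m)
        P += dr / 6 * (k1P + 2 * k2P + 2 * k3P + k4P)
        m += dr / 6 * (k1m + 2 * k2m + 2 * k3m + k4m)
        r += dr
    return r, m
```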
Moreover, further examination of the P versus n B and M -R relations for the MS models ( Fig. 1) shows that the central densities of 1.4 M stars for the EoSs chosen are all ≥ 2 n 0 with that of the MS-C star being the largest. The symmetry energy slope parameter L however, refers to that at n 0 . The behaviors of the pressures (see panel (a) in this figure) at n B ≥ 2 n 0 for all of these EoSs are distinctly different from their corresponding behaviors at n B n 0 . The M -R curves in Fig. 1 (b) and Table IV also clearly show how the value of n c,1.4 differs in each of these cases. Evidently, the manner in which the size of a 1.4 M is built depends sensitively on the behavior of the EoS well above n 0 . These features deliver the alert that the standard L−R 1.4 correlation involves more subtleties than generally thought.
Quark EoSs
For completeness, we briefly describe the quark matter EoSs considered in this work; details can be found in the references cited. Since the discovery of 2 M neutron stars [73][74][75], the traditional MIT bag [41] and NJL [50] models have been supplemented with vector interactions [53] to achieve consistency with data. These models have been termed vMIT, vBag, vNJL, etc., and are outlined below. Common and different features of these models will be highlighted after a brief description of each model.
The bag model and its variations
The Lagrangian density of the MIT bag model is [41] which describes quarks of mass m i confined within a bag as denoted by the Θ function. For three flavors i = u, d, s and three colors N c = 3 of quarks, the number and baryon densities, energy density, pressure and chemical potentials in the bag model are [41] The superscript k F i in the integral signs is the Fermi wave number for each species i, at which the integration over k is terminated at zero temperature. The first terms in ε Q and P Q are free Fermi gas contributions, ε FG and P FG , respectively, the second terms are QCD perturbative corrections due to gluon exchange corresponding to L int , and B is the so-called bag constant which accounts for the cost in confining the quarks into a bag. The quark masses m i are generally taken to be current quark masses. Often, the u and d quark masses are set to zero (as at high density, k F i in these cases far exceed m i ), whereas that of the s quark is taken at its Particle Data Group (PDG) value. Refs. [41][42][43][44][45][46][47] detail the QCD perturbative calculations of ε pert and P pert , and the ensuing results for the structure of neutron stars containing quarks within the cores as well as self-bound strange quark stars. At leading order of QCD corrections, the results are qualitatively similar to what is obtained by just using the Fermi gas results with an appropriately chosen value of B [100].
In recent years, variations of the bag model have been adopted [53,54,101] to calculate the structure of neutron stars with quarks cores to account for ≥ 2 M maximummass stars. Termed as vMIT or vBag models, the QCD perturbative results are dropped and replaced by repulsive vector interactions between quarks in such works. We will provide some numerical examples of the vMIT model for contrast with other models as those of the vBag model turn out to be qualitatively similar.
The vMIT model
The form where interactions among the quarks occur via the exchange of a vector-isoscalar meson V µ of mass m V , is chosen in Ref. [54]. Here, the quark masses are chosen close to their current quark masses. Explicitly, where n Q = i n i , and the bag constant B is chosen appropriately to enable a transition to matter containing quarks. Note that terms associated with the vector interaction above are similar to those in hadronic models. In the results reported below, we vary the model parameters in the range B 1/4 = (155 − 180) MeV and
The vNJL model
In its commonly used form, the Lagrangian density for the vNJL model in the mean field approximation is Here, q denotes a quark field with three flavors u, d, s and three colors,m 0 is the 3×3 diagonal current quark mass matrix, λ k represents the 8 generators of SU(3), and λ 0 is proportional to the identity matrix. The four-fermion interactions are from the original formulation of this model [49], whereas the flavor mixing, determinental interaction is added to break the U A (1) symmetry [102]. The last term accounts for vector interactions [52]. As the constants G s , K, and G v are dimensionful, the quantum theory is non-renormalizeable. Therefore, an ultraviolet cutoff Λ is imposed, and results are meaningful only for quark Fermi momenta well below this cutoff. The Lagrangian density in Eq. (8) leads to the energy density where the sums above run over u, d, s. The subscript "0" denotes current quark masses and the superscript Λ in the integral sign indicates that an ultraviolet cutoff Λ is imposed on the integration over k. In both ε FG [see Eq. (6)] and ε int , the quark masses m i are dynamically generated by requiring that ε be stationary with respect to variations in the quark condensate q i q i : (q i , q j , q k ) representing any permutation of (u, d, s). The quark condensate q i q i is given by and the quark number density n i = q † i q i is as in Eq. (6). Note that the integrals appearing in Eqs. (9)-(11) can all be evaluated analytically. Eqs. (10) and (11) render the dynamically generated masses m i density dependent, which tend to m 0,i at high density mimicking the restoration of chiral symmetry in QCD.
To facilitate a comparison between the vMIT and vNJL models, Ref. [51] recommends a constant energy density B 0 = ε int | mu=m d =ms=0 to be added to ε int which makes the vacuum energy density zero. With this addition, the energy density takes the form The quark chemical potentials are using which the pressure is obtained from the thermodynamic identity To mimic confinement absent in the vNJL model, often a constant term B dc is used with the replacement For numerical calculations, we use the HK parameter set [103]: Λ = 631.4 MeV, G s Λ 2 = 1.835, KΛ 5 = 0.29, m u,d = 5.5 MeV, m s = 135.7 MeV and B dc = 0.
The vBag model
In Ref. [53], vector interactions are used in the form of flavor-independent four-fermion interactions as in the NJL models (described below): In this case [53], where the explicit forms of ε FG,i and P FG,i can be read off from Eq. (6). The effective bag constant B eff in this model is composed of two parts: where m i is the dynamically generated quark mass as in the NJL model, m 0,i is the current quark mass, and Λ is an ultraviolet cut-off on the integration over k. The quantity B dc is tuned to control the onset of quark deconfinement.
Distinguishing features of the quark EoSs
The vMIT and vNJL models differ in important ways. Fashioned after the MIT bag for the nucleon, the vMIT model incorporates overall confinement of quarks within a giant bag [41,42] through its density-independent (nonperturbative) bag constant B. Repulsive vector interactions in this model are ∝ n 2 Q with n Q = n u,d,s in the pressure and energy density. The kinetic energy is calculated with current quark masses, although use of constituent masses m u,d,s = M n /3 can also be found in the literature. Effects of interactions are included from perturbative QCD calculations, but often they are set to zero in favor of an altered value of B to simulate the same effect.
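A minimal numerical sketch of the vMIT-type ingredients just described (free Fermi-gas terms for each flavor, a repulsive vector contribution proportional to n_Q^2, and a bag constant) is shown below; the parameter values are placeholders, and the perturbative QCD correction is omitted, as noted above.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327   # MeV fm

def fermi_gas(kF, m):
    """Zero-T number density, energy density and pressure for one quark flavor
    (degeneracy 2 spins x 3 colors); natural units, kF and m in MeV."""
    n = kF**3 / np.pi**2                                                   # MeV^3
    eps, _ = quad(lambda k: (3.0 / np.pi**2) * k**2 * np.sqrt(k**2 + m**2), 0.0, kF)
    mu = np.sqrt(kF**2 + m**2)
    return n, eps, mu * n - eps, mu                                        # n, eps, P, mu

def vmit_like_eos(kF_u, kF_d, kF_s, a=2.0e-6, B4=165.0, masses=(5.0, 5.0, 150.0)):
    """eps and P (MeV/fm^3) of u,d,s matter: Fermi gas + 0.5*a*n_Q^2 vector
    repulsion (a in MeV^-2) + bag constant B^(1/4) = B4 (MeV).
    All parameter values here are placeholders, not the fits used in the paper."""
    eps, P, nQ = 0.0, 0.0, 0.0
    for kF, m in zip((kF_u, kF_d, kF_s), masses):
        n_i, e_i, P_i, _ = fermi_gas(kF, m)
        nQ, eps, P = nQ + n_i, eps + e_i, P + P_i
    B = B4**4
    eps_tot = eps + 0.5 * a * nQ**2 + B        # MeV^4
    P_tot = P + 0.5 * a * nQ**2 - B            # MeV^4
    return eps_tot / HBARC**3, P_tot / HBARC**3
```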
The most important and distinguishing feature of the mean-field vNJL model is the chiral restoration of the quark masses present in the original QCD Lagrangian. Starting from the dynamically generated quark masses m u = m d 350 MeV and m s 525 MeV in vacuum, the masses decrease steadily toward their current quark values with increasing density. In our numerical calculations, we have used the current quark masses m 0,u = m 0,d = 5 MeV and m 0,s = 140 MeV to conform to the values used in Ref. [51] (use of the current PDG values m u 2 MeV, m d 5 MeV, and m s 100 MeV does not significantly affect the results). In addition to generating the quark masses, the scalar field energies involving the couplings G s and K also enter the energy density (and hence the pressure). Vector interactions in the vNJL model are ∝ i n 2 i , i = u, d, s; this differs from the vMIT model in that cross terms such as n j n k are absent in the former case. The vNJL model lacks confinement, although a constant B 0 is added to the energy density so that B eff = B 0 + ε int to facilitate comparison with the B of the vMIT bag model. B eff is however density dependent, unlike the B of the vMIT model.
In short, both the vMIT and vNJL models incorporate some aspects of the QCD Lagrangian, but only partially. Lacking a truly non-perturbative approach to QCD, we have explored both models as representative of the current status.
Note that in the vBag model, B i χ and B dc , and thus B eff , are independent of density. Unlike in the vNJL model in which all terms in the energy density and pressure are calculated with density-dependent dynamical masses m i , the Fermi gas contributions in the vBag model are calculated with m i (k F i = 0). Consequences of the vBag model on neutron star structure have been studied extensively in Refs. [53,58] (and in this work), and will not be repeated here as the vNJL model provides a more general scheme.
Charge neutrality and beta-equilibrium conditions
Equilibrium with respect to the weak-interaction processes d → u + e⁻ + ν̄_e and s → u + e⁻ + ν̄_e leads to the chemical potential equalities µ_d = µ_u + µ_e = µ_s in neutrino-free matter. Charge neutrality requires that 2n_u − n_d − n_s − 3n_e = 0. Together with the baryon number relation n_u + n_d + n_s = 3n_B, the simultaneous solution of these equations assures that quark matter with the three flavors u, d, s is charge neutral and in beta equilibrium. In Eq. (16), x_i = n_i/n_B denote the particle concentrations, r = n_B/n_0, and the factor C_i = m_i²/(π^{4/3} n_0^{2/3}), i = u, d, s. Note that C_i can depend on the density compression ratio r through m_i ≡ m_i(r), as in the vNJL model. The concentrations of the u and d quarks follow from the simultaneous solution of these relations. Owing to the fractional charges carried by the quarks, the electron concentration in quark matter is generally very small and decreases with increasing r.
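A minimal sketch of how these conditions can be solved at a given baryon density, assuming non-interacting quarks plus massless electrons (the interacting models above modify the chemical potentials but not the structure of the system of equations), is:

```python
import numpy as np
from scipy.optimize import fsolve

HBARC = 197.327                             # MeV fm
M_Q = {"u": 5.0, "d": 5.0, "s": 150.0}      # placeholder current quark masses (MeV)

def mu_quark(n, m):
    kF = (np.pi**2 * max(n, 0.0)) ** (1.0 / 3.0)          # degeneracy 6
    return np.sqrt(kF**2 + m**2)

def mu_electron(n_e):
    return (3.0 * np.pi**2 * max(n_e, 0.0)) ** (1.0 / 3.0)  # massless electrons

def beta_equilibrium(n_B_fm3):
    """Solve for n_u, n_d, n_s, n_e (fm^-3) in free beta-equilibrated quark matter."""
    n_B = n_B_fm3 * HBARC**3                              # MeV^3

    def eqs(x):
        n_u, n_d, n_s, n_e = x
        return [
            n_u + n_d + n_s - 3.0 * n_B,                  # baryon number
            2.0 * n_u - n_d - n_s - 3.0 * n_e,            # charge neutrality
            mu_quark(n_d, M_Q["d"]) - mu_quark(n_u, M_Q["u"]) - mu_electron(n_e),
            mu_quark(n_s, M_Q["s"]) - mu_quark(n_d, M_Q["d"]),
        ]

    sol = fsolve(eqs, [n_B, n_B, n_B, 1e-4 * n_B])
    return np.array(sol) / HBARC**3                       # back to fm^-3

print(beta_equilibrium(0.48))   # ~3 n0: expect n_u ~ n_d ~ n_s and a tiny n_e
```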
First-order transitions
The manner in which the hadron-quark transition occurs is unknown. Even if the phase transition is assumed to be of first-order, description of the transition depends on the knowledge of the surface tension σ s between the two phases [59][60][61][62]. In view of uncertainties in the magnitude of σ s , two extreme cases have been studied in the literature.
Maxwell Construction
For very large values of σ s , a Maxwell construction in which the pressure and chemical potential equalities, P (H) = P (Q) and µ n (H) = µ n (Q), are established between the two phases, hadronic (H) and quark (Q), has been deemed appropriate. In charge-neutral and beta-equilibrated matter, only one baryon chemical potential, often chosen to be µ n , is needed to conserve baryon number as local charge neutrality is implicit. The range of densities over which these equalities hold can be found using the methods detailed in Refs. [89,104].
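A minimal sketch of locating such a Maxwell transition, assuming the pressure of each charge-neutral, beta-equilibrated phase is available as a function of the neutron chemical potential, is given below; the bracketing interval is a placeholder and must contain a single crossing.

```python
from scipy.optimize import brentq

def maxwell_transition(P_H, P_Q, mu_lo=1000.0, mu_hi=1800.0):
    """Find the neutron chemical potential (MeV) at which the hadronic and
    quark pressures (MeV/fm^3) of beta-equilibrated matter are equal."""
    mu_t = brentq(lambda mu: P_H(mu) - P_Q(mu), mu_lo, mu_hi)
    P_t = P_H(mu_t)
    # baryon densities on either side follow from n = dP/dmu (finite difference)
    h = 0.1
    n_H = (P_H(mu_t + h) - P_H(mu_t - h)) / (2.0 * h)     # fm^-3
    n_Q = (P_Q(mu_t + h) - P_Q(mu_t - h)) / (2.0 * h)
    return mu_t, P_t, n_H, n_Q
```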
Gibbs Construction
For very low values of σ s , a Gibbs construction in which a mixed phase of hadrons and quarks is present is more appropriate [63,64]. The description of the mixed phase is achieved by satisfying Gibbs' phase rules: P (H) = P (Q) and µ n = µ u + 2 µ d . Further, the conditions of global charge neutrality and baryon number conservation are imposed through the relations where f represents the fractional volume occupied by hadrons and is solved for at each n B . Unlike in the pure phases of the Maxwell construction, Q(H) and Q(Q) do not separately vanish in the Gibbs mixed phase. The total energy density is ε = f ε(H) + (1 − f )ε(Q). Relative to the Maxwell construction, the behavior of pressure vs density is smooth in the case of Gibbs construction. Discontinuities in its derivatives with respect to density, reflected in the squared speed of sound c 2 s = dP/dε, will however be present at the densities where the mixed phase begins and ends.
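A minimal sketch of solving the Gibbs conditions at fixed baryon density, assuming each pure phase is supplied as a callable returning its pressure, baryon density, and charge density as functions of the two independent chemical potentials (µ_n, µ_e), is:

```python
from scipy.optimize import fsolve

def gibbs_mixed_phase(hadron, quark, n_B, guess=(0.5, 1100.0, 100.0)):
    """Solve the Gibbs conditions at fixed baryon density n_B (fm^-3).

    hadron, quark : callables (mu_n, mu_e) -> (P, n_B, n_charge) for each pure
                    phase (pressure in MeV/fm^3, densities in fm^-3)
    Unknowns are the hadronic volume fraction f and the two chemical potentials;
    the initial guess is a placeholder.
    """
    def eqs(x):
        f, mu_n, mu_e = x
        P_H, nB_H, q_H = hadron(mu_n, mu_e)
        P_Q, nB_Q, q_Q = quark(mu_n, mu_e)
        return [
            P_H - P_Q,                              # mechanical equilibrium
            f * q_H + (1.0 - f) * q_Q,              # global charge neutrality
            f * nB_H + (1.0 - f) * nB_Q - n_B,      # baryon number conservation
        ]

    f, mu_n, mu_e = fsolve(eqs, guess)
    return f, mu_n, mu_e
```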
The Maxwell and Gibbs constructions represent extreme cases of treating first-order phase transitions, and reality may lie in between these two cases. However, there are situations in which neither method can be applied as the required pressure and chemical potential equalities cannot be met for many hadronic and quark EoSs [65]. In such cases, an interpolatory method which makes the transition a smooth crossover has been used [65][66][67][68]105].
Crossover transitions
As it is not clear that a first-order hadron-to-quark phase transition at finite baryon density is demanded by fundamental considerations, crossover or second-order transitions have also been explored recently; see, e.g., Refs. [65,69,70]. As details of results ensuing from the model of Ref. [69] have been recorded earlier in Refs. [106,107], we will only examine the cases of interpolated and quarkyonic models in what follows.
Interpolated EoS
We follow the simple recipe in Ref. [66] where the interpolated EoS in the hadron-quark crossover region is characterized by its central valuen and width 2 Γ. Pure hadronic matter exists for n n − Γ, whereas a phase of pure quark matter is found for n n+Γ. In the crossover region,n − Γ n n + Γ, strongly interacting hadrons and quarks coexist in prescribed proportions. The interpolation is performed for pressure vs baryon number density according to where P H and P Q are the pressure in pure hadronic and pure quark matter, respectively. The interpolated EoS for the crossover, Eq. (19), is different from that of the Gibbs construction within the conventional picture of a first-order phase transition in that the pressure equality between the two phases has been abandoned. Also, f − and f + are not solved for, but chosen externally. (Alternative forms of interpolation have also been suggested in Refs. [67,68], but do not qualitatively change the outcome.) The energy density ε vs n is obtained by integrating P = n 2 ∂(ε/n)/∂n: Quarkyonic matter The transition to matter containing quarks in the model termed quarkyonic matter [70,108] is of second or higher order, depending on the behavior of the squared speed of sound c 2 s = dP/dε = d ln µ/d ln n B = (n/µ)(d 2 P/dµ 2 ) −1 with n B . The order of the phase transition is not determined by the quarkyonic matter scenario a priori but depends on its specific implementation. In the model proposed in Ref. [70], c 2 s exhibits a kink at the onset of the transition, hence its derivative with respect to n B is discontinuous. It is in this sense that the transition is of second order for Ref. [70] which may not be the case in other implementations of the quarkyonic matter scenario. This model is a departure from the firstorder phase transition models insofar as once quarks appear, both nucleons and quarks are present until asymptotically large densities when the nucleon concentrations vanish. Keeping the structure of the quarkyonic matter model as in Refs. [70,108] in which isospin symmetric nuclear matter (SNM) and pure neutron matter (PNM) were considered, we present below its generalization to charge-neutral and beta-equilibrated neutron star matter. In quarkyonic matter, the appearance of quarks is subject to the threshold condition [70] where k F B is the baryon momentum, N c = 3 is the number of colors, and the momentum threshold ∆ is chosen to be Above, Λ Q ∼ Λ QCD 300 − 500 MeV, and κ 0.1 − 0.3 is suitably chosen to preserve causality. In PNM, the transition density, n trans , for the appearance of quarks is 0.77 (3.55) n 0 for ∆ = 300 (500) MeV and κ = 0.3, where n 0 is the SNM equilibrium density. The corresponding values for κ = 0.1 are 0.75 n 0 and 3.47 n 0 , respectively, and show weak dependence of n trans on κ. Unlike in the other approaches, the transition density at which quarks begin to appear in this model is independent of the EoSs used in the hadronic and quark sectors, being dependent entirely on Λ QCD and large N c physics. The total baryon density of quarkyonic matter is Notice that once quarks appear, the shell width ∆ in which nucleons reside decreases with density as n −2/3 B , yielding the preponderance of quarks with increasing n B . Including leptonic (electron and muon) contributions ε , the total energy density is where e k is the single particle kinetic energy inclusive of the rest mass energy. 
The nucleonic part of the energy density for n 0.5 n 0 can be taken from a suitable potential or field-theoretical model that is constrained by nuclear systematics near nuclear densities and preserves causality at high densities. Below 0.5 n 0 , the energy density is that of crustal matter as in e.g. Refs. [3,4]. It is important to realize that the term ε int (n n , n p ) contributes in regions where k F B < ∆ as well as where k F B > ∆.
The chemical potentials and pressure are obtained from where the sum above runs over all fermions. As with nucleons, an appropriate choice of the quark EoS is also indicated. Reference [70] set ε int (q k , q ) = 0, and the quark masses M q were taken as M n /3. The use of the nucleon constituent quark masses takes account of quark-gluon interactions to a certain degree as has been noted in the case of finite temperature QCD as well. This procedure however, omits density-dependent contributions from interactions between quarks. In our work, we will employ quark models (such as vMIT, vNJL) in which contributions from interacting quarks are included. Subtleties involved in the calculation of the kinetic part of the nucleon chemical potentials and in satisfying the thermodynamic identity are detailed in Appendix A.
This model has a distinct behavior for c 2 s = dP/dε vs n B in that c 2 s exhibits a maximum (its location controlled by Λ Q , and the magnitude depending on both Λ Q and κ) before approaching the value of 1/3 characteristic of quarks at asymptotically large densities [70].
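Since the constructions discussed here are largely characterized by their c_s^2 profiles, it is convenient to evaluate dP/dε directly from a tabulated EoS; a minimal sketch:

```python
import numpy as np

def sound_speed_squared(P, eps):
    """c_s^2 = dP/d(eps) from a tabulated EoS (arrays ordered by increasing
    density, P and eps in the same units); causality requires values <= 1."""
    return np.gradient(P, eps)
```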
IV. RESULTS WITH PHASE TRANSITIONS
The hadronic EoSs chosen in this study satisfy the available nuclear systematics near the nuclear equilibrium density (see Tables I-III). Their supra-nuclear density behavior can, however, be varied to yield a soft or stiff EoS by varying the parameters in the chosen model. Depending on the quark EoS examined, such as vMIT, vNJL, or that of quarkyonic matter, a broad range of transitions into quark matter (soft-to-soft, soft-to-stiff, stiff-to-soft and stiff-to-stiff) can be examined. For both first-order and crossover transitions, we calculate the mass-radius curves and tidal deformabilities, and then discuss the results in view of the existing observational constraints. Of particular relevance to the zero-temperature EoS is the limit set by the data on the binary tidal deformability [109,110], Λ̃ = (16/13) [(m_1 + 12 m_2) m_1^4 Λ_1 + (m_2 + 12 m_1) m_2^4 Λ_2] / (m_1 + m_2)^5. For each star, the dimensionless tidal deformability (or induced quadrupole polarizability) is given by [111] Λ_{1,2} = (2/3) k_2^{(1,2)} [c^2 R_{1,2}/(G M_{1,2})]^5, where the second tidal Love number k_2^{(1,2)} depends on the structure of the star, and therefore on the mass and the EoS. Here G is the gravitational constant and R_{1,2} are the radii. The computation of k_2^{(1,2)} with input EoSs is described in Refs. [112][113][114]. For a wide class of neutron star EoSs, k_2 ≈ 0.05-0.15 [113,115,116].
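A minimal numerical sketch of these two quantities, assuming k_2 has already been obtained from the perturbation equations of Refs. [112-114] (the k_2, mass, and radius values below are placeholders), is:

```python
def dimensionless_lambda(k2, M_sun, R_km):
    """Lambda = (2/3) k2 (c^2 R / (G M))^5 for one star (M in Msun, R in km)."""
    compactness = 1.4766 * M_sun / R_km      # G Msun / c^2 ~ 1.4766 km
    return (2.0 / 3.0) * k2 / compactness**5

def binary_tidal_deformability(m1, m2, L1, L2):
    """Mass-weighted combination Lambda-tilde entering the gravitational-wave phase."""
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1**4 * L1
                            + (m2 + 12.0 * m1) * m2**4 * L2) / (m1 + m2) ** 5

# illustrative 1.4 Msun star with R = 12 km and k2 = 0.09 (placeholder values)
L14 = dimensionless_lambda(0.09, 1.4, 12.0)
print(L14, binary_tidal_deformability(1.4, 1.35, L14, 1.3 * L14))
```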
Combining the electromagnetic (EM) [117] and gravitational wave (GW) information from the binary neutron star (BNS) merger GW170817, Ref. [118] provides constraints on the radius R ns and maximum gravitational mass M g max of a neutron star: where R 1.3 is the radius of a 1.3 M neutron star and its numerical value above corresponds to M g max = 2.17 M . These estimates have been revisited in a recent analysis of Ref. [119] where a weaker constraint on the upper limit of the maximum mass M g
First-order transitions: Maxwell vs. Gibbs
We first survey the allowed parameter space for valid first-order phase transitions, namely, a critical pressure exists above which quark matter is energetically favored. We then proceed with both Maxwell and Gibbs constructions, calculating quantities to be compared with observational constraints. Our results are summarized in Figs. 2-4. Where possible, we also characterize the behavior of the hadron-to-quark transition with quantities introduced in the "Constant-Sound-Speed (CSS)" approach in Ref. [121]. This approach can be viewed as the lowest-order Taylor expansion of the high-density EoS about the transition pressure P trans , by specifying the discontinuity in energy density ∆ε at the transition, and the density-independent squared sound speed c 2 QM in quark matter.
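The CSS form of ε(P) just described can be written compactly as in the sketch below, with (P_trans, ε_trans, Δε, c_QM^2) supplied as parameters and the hadronic ε(P) used below the transition.

```python
def css_energy_density(P, eps_HM, P_trans, eps_trans, delta_eps, c2_QM):
    """Constant-sound-speed (CSS) eps(P) above a first-order transition.

    eps_HM : callable hadronic eps(P), used for P < P_trans
    """
    if P < P_trans:
        return eps_HM(P)
    return eps_trans + delta_eps + (P - P_trans) / c2_QM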
MS-A + vMIT (stiff → soft/stiff )
Fixing the hadronic EoS to be the stiff model MS-A, we choose in the vMIT model six parameter sets of (B 1/4 , a), where B is the bag constant and a = (G v /m v ) 2 measures the strength of vector interactions between quarks. The Bag constant is adjusted so that the transition to quark matter occurs at n trans = 1.5 − 2.4 n 0 , and the finite vector coupling a stiffens the quark matter EoS. Soft hadronic EoSs are not applied, as they either (with softer quark EoSs) violate the M max ≥ 2 M constraint or (with stiffer quark EoSs) cannot establish a valid first-order phase transition, i.e., there is no intersection between the two phases in the P -µ plane. We note that this limitation (hadronic matter being stiff) does not necessarily hold if a generic parameterization such as CSS is utilized instead of specific quark models to perform first-order transitions.
In the vMIT model, the sound speed varies little even with the inclusion of vector repulsive interactions within the star (see Fig. 2 (b) and (e)) and can be approximated as being density independent. The mass-radius topology with the Maxwell construction is determined by the three parameters (P trans /ε trans , ∆ε/ε trans , c 2 QM ) in CSS, giving rise to either connected, disconnected (i.e. twin stars or third-family stars) or both branches of stable hybrid stars; P trans and ε trans are the pressure and energy density in hadronic matter at the transition, respectively, ∆ε is the discontinuity in energy density at P trans , and c 2 QM is the squared speed of sound in quark matter just above P trans . The threshold value ∆ε crit below which there is always a stable hybrid branch connected to the purely-hadronic branch is given by ∆ε crit /ε trans = 1 2 + 3 2 P trans /ε trans [122][123][124]. The relevant quantities for the mapping between the stiff MS-A+vMIT model (Maxwell) and the CSS parametrization are listed in Table V. After extensively varying all parameters and calculating the corresponding mass-radius relations, we find that a = 0.18 is most likely the smallest value (corresponding to c 2 QM ≈ 0.4) that barely ensures M max 2 M . When a is increased from zero, the energy density discontinuity becomes progressively smaller (∆ε/ε trans 0.5) and eventually the twin-star solutions disappear, roughly at a ≥ 0.15. Within the range a = 0.18 − 0.3, the M (R) curves of stable hybrid stars obtained are continuous, and quarks can appear at 1.0 ≤ M trans ≤ 1.8 M , pertinent to the range of component masses in BNS mergers. For too large vector interaction couplings e.g. a = 0.5, the onset for quarks is beyond the central density of the maximum-mass hadronic star, and thus no stable quark cores would be present even though QM is sufficiently stiff. Fig. 2 (c) shows that requiring M max ≥ 2 M excludes certain twin-star solutions obtained from EoSs with zero (gray dash-dot-dotted) and small (orange dotted) repulsive vector interactions between quarks, mainly due to the insufficient stiffness of the quark matter EoS. By invoking very stiff EoSs with c 2 s → 1 in the quark sector and using the CSS parametrization coupled with hadronic EoSs at low density, recent works have reported twin stars compatible with the constraint M max ≥ 2 M and bounds onΛ from GW170817 (see e.g. Refs. [125][126][127][128][129] and references therein). Moreover, the typical neutron star radius R 1.4 can be observationally constrained by radius estimates from x-ray emission and/or tidal deformability (Λ) measurements in pre-merger gravitational-wave detections. For hybrid EoSs with a sharp phase transition, the value of R 1.4 or Λ 1.4 is sensitive to the onset density n trans , above which M (R) and Λ(M ) deviate from normal hadronic EoSs without a sharp transition. We demonstrate this effect in Fig. 3 by confronting calculated tidal deformations with inferred bounds from the first BNS event GW170817 [76][77][78].
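The connectedness criterion quoted above is simple to evaluate for any candidate transition; a minimal sketch:

```python
def has_connected_hybrid_branch(P_trans, eps_trans, delta_eps):
    """CSS criterion for a hybrid branch connected to the hadronic one:
    delta_eps < eps_trans/2 + (3/2) P_trans, i.e. delta_eps_crit of Refs. [122-124]."""
    delta_eps_crit = 0.5 * eps_trans + 1.5 * P_trans
    return delta_eps < delta_eps_crit
```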
With high accuracy, the chirp mass M = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}, where m_{1,2} are the masses of the merging neutron stars, was determined to be 1.186^{+0.001}_{-0.001} M_⊙ [79]. This event also revealed information on the binary tidal deformability, Λ̃(M = 1.186^{+0.001}_{-0.001} M_⊙) = 300^{+420}_{-230} for low-spin priors (using a 90% highest posterior density interval). Furthermore, by assuming a linear expansion of Λ(M), which holds fairly well for normal hadronic stars without sharp transitions, limits on the dimensionless tidal deformability of a 1.4 M_⊙ NS were derived [77]: 70 ≤ Λ_1.4 ≤ 580 for low-spin priors (at the 90% confidence level). This single detection of GW170817 rules out purely-hadronic EoSs that are too stiff and correlated with large tidal deformabilities, as shown in Fig. 3 (b) and (c). The stiff MS-A model by itself is incompatible with the estimated ranges of Λ_1.4 and Λ̃. The only way to rescue such a stiff hadronic EoS is to introduce a phase transition at not-too-high densities; e.g., a smaller Λ can be achieved if a hybrid star already exists in the pre-merger stage. For Maxwell constructions, only one of the six parameter sets, (B^{1/4}, a) = (159, 0.2) (blue dash-dotted) with n_trans/n_0 = 1.77 (see Table V), survives the LIGO constraint. Together with the maximum-mass constraint, the parameter space for sharp phase transitions is severely limited.
(Displaced figure-caption fragment: "... this indicates that the vNJL model (or NJL-type models) is ruled out in the first-order transition scenario. Resorting to the crossover scenario is inevitable for it to survive.")
The panels (e)-(f) of Figs. 2-3 present results for the stiff MS-A+vMIT model with Gibbs constructions, for which the model parameters remain the same as in their Maxwell counterparts (panels (a)-(c)). The smooth character of the Gibbs construction advances the appearance of quarks in the mixed phase to lower densities, while it defers the region of the purely quark phase to higher densities. These features are also manifested in the corresponding ε(P) relation and its finite speed-of-sound behavior (Fig. 2 (e) and (f)). Effectively, the softening due to the (Gibbs) phase transition occurs earlier, smoothly decreasing the NS radii and tidal deformabilities for a broader range of masses, which gives rise to increased compatibility with observational constraints. Three more parameter sets of the stiff MS-A+vMIT model that satisfy M_max ≥ 2 M_⊙ are now consistent with the tidal deformability constraint (Fig. 3 (e) and (f)), in contrast to the single candidate that qualifies in Maxwell constructions. In this respect, applying the Gibbs construction enlarges the quark-model parameter space that satisfies the current observational constraints (and also revives previously excluded stiff hadronic models). However, the clear-cut distinction between hybrid and purely-hadronic branches in terms of M(R) and Λ(M) diminishes: the drastic effect of a sharp hadron/quark transition is toned down, and thus distinguishing quarks through global observables becomes less feasible if they take the form of a mixture with hadrons. This feature accentuates the significance of dynamical properties such as NS cooling and spin-down, and the evolution of merger products.
(Displaced figure-caption fragment: "... [79]. In the Λ̃(M) plot, only EoSs that satisfy M_max ≥ 2 M_⊙ are shown. In the interpolation picture, although the maximum mass is mostly determined by the high-density quark part and increases with its stiffness, changes in radii are flexible depending on, e.g., the choice of window parameters and the low-density hadronic part (for an extensive exploration, see Ref. [66]). Panel (c) also shows M(R) for a lower cutoff density n_trans = 1.5 n_0 (solid colored curves).")
MS-A + vNJL (stiff → soft)
In the vNJL model, pressures at 2 n 0 exhibit an unphysical behavior (of being negative and/or decreasing with density) which forbids attempts to shift n trans to low densities. If a finite vector coupling G v is introduced, the onset of quarks is typically reached at n trans 2.3 n 0 (M trans 1.7 M ), leading to a short stable hybrid branch that obeys M max ≥ 2 M because of the stiff hadronic EoS. We display one such example in Fig. 4 for both Maxwell and Gibbs constructions. Note that the speed of sound in the quark phase remains small, restricted by the fact that a too large G v (correlated with stiffer QM) delays the onset for quarks significantly which yields no stable hybrid stars. Some relevant points to note are: (i) M trans 1.7 M indicates that most likely there will be no quarks in e.g., the component neutron stars of a binary before they coalesce. Thus, tidal properties are not shown in Fig. 4 due to the high onset density for quarks: i.e., in this case BNS observables are irrelevant; (ii) A small G v has little effect on stiffening quark matter (c 2 QM 1/3), which is not desirable in terms of supporting 2 M mostly by quarks; and (iii) Gibbs construction helps maintaining slightly more quark content than Maxwell in the most massive stars, but quarks are effectively "invisible" even if they exist.
Note that the tidal deformability constraint rules out a very stiff hadronic EoS, e.g., MS-A. This stiffness in the hadronic EoS is nevertheless a prerequisite for vNJL to construct a valid first-order transition; stable hybrid stars that are consistent with observation do not exist in this scenario. There is no solution other than an alternative treatment, such as a crossover transition to which we turn below.
Crossover transitions: Interpolatory procedures and quarkyonic matter
In obtaining the results shown below in Figs. 5 and 6, we have followed the methods detailed in Sec. III for constructing crossover hadron-to-quark transitions. Although the generalization of the quarkyonic matter model to beta-equilibrated stars is presented in that section, results shown here for this case are for pure neutron matter only to provide a direct comparison with the results of Ref. [70].
Interpolated EoSs
The results shown for this case correspond to a smooth interpolation in the window (n̄, Γ) = (3 n_0, n_0) between the soft hadronic EoS MS-B and stiff quark EoSs in the vNJL model with G_v/G_s = 1.5, 2.0, 2.5. Outside this window in density, pure hadronic and pure quark phases are expected to exist. Due to the abrupt cutoff imposed in the boundary condition, there is a finite jump in c_s² at the lower end of the crossover window, n̄ − Γ = 2 n_0 ≡ n_trans, below which only the pure hadronic phase is present. At the higher end and above, we continue to use the interpolated form. This is different from Ref. [66], where the interpolated form extended to all densities. As we will see below, the cutoffs are important to typical radii and thus could be significant.
Effects of introducing quarks above n_trans = 2 n_0 through smooth interpolations in the EoS are shown in Fig. 5 (a)-(c). The maximum mass is primarily determined by the stiffness above 4 n_0, hence the use of large vector-coupling strengths in vNJL. Consequently, one can derive a constraint on G_v/G_s from M_max ≥ 2 M_⊙ if other parameters are fixed; e.g., G_v/G_s = 1.5 is probably ruled out. On the other hand, typical radii for 1.0–1.6 M_⊙ stars are sensitive to the stiffness in the hadronic phase for n_B ≲ 2 n_0, as well as to the choice of the threshold density. For instance, we have found that for n_trans = 1.5 n_0 instead of 2 n_0, R_1.4 decreases by about 0.3 km. Note that the hyperbolic construction results in admixtures of the hadron and quark EoSs in the interpolated region. This feature causes a finite discontinuity in c_s²(n_B) at low density, which is an artifact of the scheme. Alternative forms of interpolation suggested in, e.g., Refs. [67,68] do not allow for spillovers into the region of interpolation. Use of such forms, however, does not qualitatively change the outcome: while 2 M_⊙ NSs can still be produced, the constraints on R_1.4 and Λ̃ cannot be easily transformed into constraints on the parameters of interpolated EoSs. If, however, a stiff hadronic matter EoS such as MS-A in Sec. III is applied, the resulting radius and tidal deformability are apparently too large and violate the condition Λ̃(M = 1.186 M_⊙) ≤ 720 [79].
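To make the windowed smoothing concrete, the following minimal sketch blends two pressure curves with tanh weights centered at n̄ = 3 n_0 and of width Γ = n_0, reverting to the pure hadronic form below n̄ − Γ as described above. The simple power-law pressures are placeholders, not the MS-B or vNJL EoSs, and the quantity actually interpolated in this work (and its boundary matching) may differ in detail.

```python
import numpy as np

n0 = 0.16  # nuclear saturation density in fm^-3

def p_hadron(n):
    # placeholder soft hadronic pressure (illustrative only)
    return 4.0 * (n / n0) ** 2.0

def p_quark(n):
    # placeholder stiff quark pressure (illustrative only)
    return 2.0 * (n / n0) ** 3.0

def p_crossover(n, nbar=3 * 0.16, gamma=0.16):
    """tanh-weighted interpolation over the window (nbar, Gamma); the pure
    hadronic EoS is used below the lower edge nbar - Gamma (= n_trans)."""
    f_minus = 0.5 * (1.0 - np.tanh((n - nbar) / gamma))
    blended = f_minus * p_hadron(n) + (1.0 - f_minus) * p_quark(n)
    return np.where(n < nbar - gamma, p_hadron(n), blended)

n_grid = np.linspace(0.5 * n0, 8.0 * n0, 400)
p_grid = p_crossover(n_grid)
stiffness_proxy = np.gradient(p_grid, n_grid)  # dP/dn, a rough stiffness indicator
```

The weight functions smoothly hand the pressure over from the hadronic to the quark curve across the window, which is what produces the stiffening (and the finite jump in c_s² at the abrupt lower cutoff) discussed above.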
As can be seen from Fig. 5 (d)-(f), the softer MS-B EoS is by itself compatible with the current constraint on the binary tidal deformability. Implementing the crossover region through interpolation further enhances the compatibility. Better measurements of Λ̃(M) from multiple merger detections in the future might help in limiting the relevant interpolation parameters. Recall that such a "soft HM → stiff QM" combination is usually forbidden in a first-order transition, given the absence of an intersection in the P–μ plane between the pure hadronic and pure quark phases.
Quarkyonic matter
In this case, we present results obtained by using the hadronic EoSs MS-B/C for pure neutron matter and two-flavor quark EoSs with and without interactions between quarks when they appear. The main reason for the rapid increase in pressure at supra-nuclear densities and the attendant behavior of c_s² vs n_B is also elucidated in more detail than was done in Ref. [70].
In the quarkyonic picture, both the maximum mass and typical radii are larger than those obtained by EoSs with neutrons only. In fact, some EoSs that are too soft to survive the M_max ≥ 2 M_⊙ constraint can be rescued by the boost in stiffness once quarkyonic matter appears; see, e.g., MS-C (PNM) in Fig. 6 (a)-(c). However, for a stiff neutrons-only EoS, if a transition into quarkyonic matter takes place, compatibility with the binary tidal deformability constraint from GW170817 becomes reduced, because of the tendency to increase R and therefore Λ. These increases put the model at more risk of breaking the upper limit on Λ. This is evident in Fig. 6 (d)-(f), where the MS-B (PNM) EoS is at the edge of exclusion, and with quarkyonic matter the situation is slightly worse.
An examination of the behavior of c_s² vs n_B with and without quarks offers insights into the role played by the presence of the shell for k_Fn > Δ in the quarkyonic model. Fig. 7 shows results of c_s² for the cases in which there is no shell (i.e., neutrons only throughout the star), neutrons only below and above Δ, and with the inclusion of quarks for k_Fn > Δ. The results in this figure correspond to the neutron matter EoSs used in Ref. [70] and the MS-C+vNJL model of this work with G_v/G_s = 0.5. For the former EoS, values of Λ_Q = 420 MeV and κ = 1 were used to calculate the shell width Δ. The onset of quarks in this case occurs at n_trans = 0.37 fm⁻³. This is to be compared with n_trans = 0.24 fm⁻³ obtained with Λ_Q = 380 MeV and κ = 0.3 in the EoS of Ref. [70]. For the two-flavor vNJL model used in this connection, the values of the parameters used were Λ = 631.4 MeV and G_s Λ² = 1.835, as in Ref. [103].
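As a quick numerical illustration of how these onset densities follow from the shell-width formula Δ = Λ_Q³/k_Fn² + κΛ_Q/N_c² (see the Appendix), the sketch below solves k_Fn = Δ(k_Fn), assuming quarks first appear once k_Fn exceeds Δ, and converts the resulting Fermi momentum to a neutron density. With (Λ_Q, κ) = (420 MeV, 1) it gives ≈ 0.37 fm⁻³ and with (380 MeV, 0.3) it gives ≈ 0.25 fm⁻³, close to the values quoted above; the onset criterion used here is an assumption on our part.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def quark_onset_density(LambdaQ, kappa, Nc=3):
    """Solve k = LambdaQ^3/k^2 + kappa*LambdaQ/Nc^2 for the Fermi momentum at
    which the shell first opens, then return n = k^3/(3 pi^2) in fm^-3."""
    c = kappa * LambdaQ / Nc**2
    roots = np.roots([1.0, -c, 0.0, -LambdaQ**3])  # k^3 - c k^2 - LambdaQ^3 = 0
    k_onset = max(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)  # MeV
    return (k_onset / HBARC) ** 3 / (3.0 * np.pi**2)

print(quark_onset_density(420.0, 1.0))   # ~0.37 fm^-3
print(quark_onset_density(380.0, 0.3))   # ~0.25 fm^-3
```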
The main differences between the models in Ref. [70] and this work are: (i) for pure neutron matter (no quarks), the EoS of Ref. [70] becomes acausal for n_B/n_0 ≳ 6 owing to the term proportional to n_n³ in its interacting part. As the central density of the star is 6.74 n_0, this feature may be of some concern. However, the MS-B/C+vNJL models, being relativistically covariant, remain causal for all densities; and (ii) interactions between quarks are not included in the EoS of Ref. [70] except in the kinetic energy term with the use of M_q = M_n/3, whereas the MS-B/C+vNJL model uses density-dependent, dynamically generated u, d quark masses that steadily decrease with increasing density from their vacuum values of M_n/3. In addition, repulsive vector interactions between quarks were used in the vNJL models.
The above differences notwithstanding, the inner workings of the quarkyonic model -particularly, the influence of quarks -are apparent from Fig. 7 (a) and (c). Without the presence of quarks in the shell, the EoSs in both models are very stiff even to the point of being substantially acausal. The presence of quarks in the shell abates this undesirable behavior by softening the overall EoS (dash-dotted blue curves) relative to the case when only nucleons are present (dotted gray curves). With progressively increasing density, the density of nucleons is depleted within the shell whereas that of the quarks becomes predominant. As c 2 s → 1/3 for quarks at asymptotically high densities, it exhibits a maximum (as well as a minimum) at some intermediate density. Note, however, that compared to the neutron-matter only case everywhere (black solid curves), the overall EoS of the quarkyonic matter is stiffer within the central densities of the corresponding stars.
Insofar as c_s² is a measure of the stiffness of the EoS, these features are reflected in the M–R curves shown in Fig. 7; the quantitative differences between the two cases can be attributed to the presence of interactions between quarks in the MS-B+vNJL model.
The hadron-to-quark transition density n_trans, the peak value of the squared speed of sound c²_s,max, and the maximum mass M_max all depend on the choice of Λ_Q and κ used to calculate the shell width Δ. Fig. 8 shows the variation of these quantities as a function of Λ_Q with κ = 0.1, 0.6 and 1 for the MS-C+vNJL model chosen here. Intermediate values of κ lead to results that lie within the boundaries shown in this figure. Note that high values of both Λ_Q and κ are required to ensure that n_trans ≳ 1.5 n_0 and c_s² < 1. This requirement, however, decreases M_max, but masses above the current constraint of 2 M_⊙ can still be obtained. In the absence of interactions between quarks, as in Ref. [70], the window of usable Λ_Q and κ values is very small. We stress, however, that the optimum choice of these parameters is model dependent in that, if a different hadronic or quark EoS is used, the values of Λ_Q and κ can change.
On a physical level, low values of Λ_Q and κ lead to a substantial quark content in the star, but at the expense of n_trans → n_0, a disturbing trend. Although quarks soften the overall EoS, the presence of the shell and the redistribution of baryon number between nucleons and quarks cause a substantial stiffening of the overall EoS, which in turn leads to very high values of M_max. Conversely, very high values of Λ_Q and κ decrease the quark content, which makes the overall EoS nearly that without quarks. This feature is generic to the quarkyonic model, which enables it to achieve maximum masses consistent with the observational mass limit even when the EoS with hadrons only fails to meet this constraint.
The low transition densities and the extreme stiffening of the EoS caused by the shell in quarkyonic matter bear further investigation. Although inspired by QCD and large N c physics, the width of the shell is independent of the EoSs in both hadronic and quark sectors, at least in the initial stage of the development of the model. The energy cost in creating such a shell in dense matter is another issue that warrants scrutiny.
V. CONCLUSION AND OUTLOOK
In this work, we have performed a detailed comparison of first-order phase transition and crossover treatments of the hadron-to-quark transition in neutron stars. For first-order transitions, results of both Maxwell and Gibbs constructions were examined. Also studied were interpolatory schemes and the second-order phase transition in quarkyonic matter, which fall in the class of crossover transitions. In both cases, the sensitivity of the structural properties of neutron stars to variations in the EoSs in the hadronic as well as in the quark sectors was explored. The ensuing results were then tested for compatibility with the strict constraints imposed by the precise measurements of 2 M_⊙ neutron stars, the available limits on the tidal deformations of neutron stars in the binary merger GW170817, and the radius estimates of 1.4 M_⊙ stars inferred from x-ray observations. These independent constraints from observations are significant in that the lower limit on the maximum mass reflects the behavior of the dense matter EoS for densities of 4–6 n_0, whereas bounds on the binary tidal deformability Λ̃ and estimates of R_1.4 depend on the EoS for densities of 2–3 n_0.
Table VI provides a summary of the transition density n trans /n 0 for the appearance of quarks and the associated neutron star mass, M trans , for the EoSs and different treatments of the phase transition considered in this work. The entries in this table allow us to answer the first two of the three questions posed in the introduction: (a) What is the minimum neutron star (NS) mass consistent with the observational lower limit of the maximum mass (M max ) that is likely to contain quarks?
The answer to this question depends on both the low-density hadronic and high-density quark EoSs, as well as on the order and the method of implementing the phase transition. Barring rare cases, such as in the Gibbs construction and interpolation (see below for why), the minimum mass is M_trans ≳ 1 M_⊙.
(b) What is the minimum physically reasonable density at which a hadron-quark transition of any sort can occur?
Our results indicate the minimum density n_trans to be ≳ 2 n_0, again excluding the rare cases. The reasons for the exclusions are as follows. In the Gibbs construction, valid in the extreme case of the interface tension between the phases being zero, the onset density is generally lower than that of the corresponding Maxwell construction. Depending on the softness or stiffness of the hadronic and quark EoSs, n_trans can approach near-nuclear densities. Such cases should be discarded as being in conflict with nuclear data near saturation. In the case of interpolation, the onset density is chosen a priori. In this approach, input values of n_trans ≲ 2 n_0 yield the minimum mass M_trans ≲ 1 M_⊙.
The values of n_trans and M_trans quoted in Table VI do not conflict with experimental data on nuclei, which probe densities near and below n_0. A mild tension, however, exists with theoretical interpretations of low-to-intermediate energy heavy-ion data [25], which probe densities up to 3–4 n_0. We wish to note, however, that analysis of such data using Boltzmann-type kinetic equations has not yet been performed with quark degrees of freedom and their subsequent hadronization, as in RHIC and CERN experiments.
Table VII provides a summary of the generic outcomes of our study. If the hadron-to-quark transition is strongly first-order, as is the case for standard quark models such as vMIT and vNJL that we used, then the hadronic part needs to be relatively stiff to guarantee a proper intersection in the P–μ plane. For a hadronic EoS as stiff as MS-A, this combination brings tension with Λ_1.4 or R_1.4 estimates. Concomitantly, a too-high transition density that yields M_trans ≳ 1.7 M_⊙ results in either very small quark cores or completely unstable stars that are indistinguishable from those resulting from the (stiff) purely-hadronic EoS. Thus, such hybrid EoSs can easily be ruled out. This is typical for NJL-type models. Our analysis indicates that use of the Gibbs construction is beneficial in satisfying the current constraints from observation for many stiff hadronic EoSs, as it enlarges the parameter space of quark models. As similar M(R) and Λ(M) relations for hybrid and purely-hadronic stars can be obtained, the distinction between the two is, however, lost. This feature underscores the significance of dynamical properties such as neutron star cooling and spin-down, and the evolution of merger products.
To sum up the part about first-order phase transitions, current observational constraints disfavor weakly-interacting quarks at the densities reached in neutron star cores. Should a first-order transition into strongly-interacting quark matter (as described by the vMIT bag model or vNJL-type models) take place, the onset density is likely of relevance also to canonical neutron star masses in the range 1.0–1.6 M_⊙.
One should keep in mind, however, that perturbative approaches to the quark matter EoS are not expected to hold in the density range ≈ 2–4 n_0. This limitation brings the validity of first-order phase transitions caused by such EoSs into question. In this regard, model-independent parameterizations circumvent the issue and have the advantage of translating observational constraints more generically. For instance, specific QM models prohibit the transition into soft hadronic matter, but in the CSS parameterization this restriction disappears and a much larger parameter space can be explored, including soft HM → stiff QM [128]. However, such parameterizations lack a physical basis and beg for the invention of a non-perturbative approach.
If the hadron-to-quark transition is a smooth crossover, as in the case of interpolatory schemes and in quarkyonic matter, the pressure in the transition region is stiffened, unlike the sudden softening of pressure caused by a first-order transition. This stiffening is also reflected in a local peak in the sound velocity before the pure quark phase is entered, and it is responsible for supporting massive stars that are compatible with the current lower limit of 2 M_⊙.
It is also common in these crossover approaches that the onset density for quarks is somewhat low (M_trans ≈ 1.0–1.6 M_⊙). This feature implies that all neutron stars we observe should contain some quarks admixed with hadrons. We find that at low densities soft hadronic EoSs are necessary, but above the transition, changes in radii rely heavily on the methods of implementing the crossover, in both the interpolation approach and in quarkyonic matter. Consequently, it is difficult to obtain physical constraints on the crossover EoSs from a better determination of the radius, e.g., R_1.4, or from improved tidal deformability measurements. Such measurements are promising, however, for limiting parameters, e.g., the vector coupling strength G_v/G_s in vNJL or κ and Λ_Q in the quarkyonic model, pertinent to the stiffening required to satisfy the limits imposed by mass measurements of heavy neutron stars.
Regardless of whether the phase transition is first-order or a crossover, our results suggest that the presence of quarks in the pre-merger component neutron stars of GW170817 is a viable possibility. If quarks only appear after the merger (before the remnant collapses into a black hole), there is a valid soft HM → stiff QM first-order transition that cannot be captured by the vMIT bag or vNJL models. There are exceptions when the onset occurs close to the 2 M_⊙ limit, so that quarks are precluded in cold beta-equilibrated NSs due to immediate collapse. While we rejected these solutions by default, such cases can, however, be relevant for the dynamic products of mergers, where quarks may emerge temporarily [130,131]. Numerical simulations that involve quarks [130,132] will assist in identifying such cases during the post-merger gravitational-wave evolution. Better understanding and progress in theory, experiments, and observations are required to clarify the situation.
Although the presence of quarks in neutron stars is not ruled out by currently available constraints, it is nearly impossible to confirm it even with improved determinations of radii from x-ray observations and tidal deformabilities from gravitational wave detections. This conundrum arises because purely hadronic EoSs can also satisfy the current constraints on M_max, R_1.4 and Λ̃; i.e., the "masquerade problem" [72] persists. Similarly, it will be difficult to identify the nature of the phase transition on the basis of M and R observations only, unless there is a sufficiently strong first-order transition that gives rise to separate branches of twin stars with discontinuous M–R and/or Λ̃–M relations.
We now turn to the third question in the introduction: (c) Which astronomical observations have the best potential to attest to the presence of quarks? Dynamical observables such as supernova neutrino emission, thermal/spin evolution, global oscillation modes, continuous gravitational waves, dynamic collapse, etc., which are sensitive to transport properties, would potentially provide more distinct signatures of exotic matter in neutron stars [133-135]. In future work, it is worthwhile to achieve consistency with dynamical observables, particularly for the crossover scenarios of the transition to quark matter.
In this Appendix, we provide some details of the evaluation of the kinetic parts of the energy density, chemical potential, and pressure for nucleons in the shell. The expressions we obtain will then be used to establish the thermodynamic identity (TI) in the presence of a shell. For the evaluation of these quantities, the Leibniz relation
d/dα ∫_{φ_1(α)}^{φ_2(α)} dk F(k, α) = F(φ_2(α), α) ∂φ_2/∂α − F(φ_1(α), α) ∂φ_1/∂α + ∫_{φ_1(α)}^{φ_2(α)} dk ∂F(k, α)/∂α ,   (A1)
where α is a parameter in the functions φ_1, φ_2 and F, will be useful.
Energy density
The kinetic energy density of nucleons, neutrons to be specific, in the shell is
ε_n^(kin) = (1/π²) ∫_L^U dk k² e_k ,   where Δ = Λ_Q³/k_Fn² + κ Λ_Q/N_c² .
In analytical form, this can be written as a difference, ε_n^(kin) = F_1 − F_2, of the integral evaluated with the upper and lower limits. For k_Fn < Δ, the upper limit U = k_Fn, the lower limit L = 0, and e_k = √(k² + M_n²), which leads to the familiar expression for spin-1/2 relativistic particles of mass M_n. For neutrons in the shell with k_Fn > Δ, however, U = k_Fn and L = (k_Fn − Δ), with e_k = √((k − Δ)² + M_n²).
Chemical potential
The associated chemical potential ensues from
μ_n^(kin) = (dε_n^(kin)/dk_Fn) (dk_Fn/dn_n) = (dF_1/dk_Fn − dF_2/dk_Fn) (dk_Fn/dn_n) .
For the neutrons in the shell,
n_n = [k_Fn³ − (k_Fn − Δ)³] / (3π²) ,   dn_n/dk_Fn = [k_Fn² − (k_Fn − Δ)²] / π² .
For evaluating dF_1/dk_Fn, use of the relations φ_1(k_Fn) = 0, φ_2(k_Fn) = k_Fn, ∂φ_1/∂k_Fn = 0, ∂φ_2/∂k_Fn = 1 (A6) in Eq. (A1) yields the required derivative. The evaluation of dF_2/dk_Fn proceeds along similar lines, but with the upper limit φ_2(k_Fn) = k_Fn − Δ. Putting these results together, we obtain, after some simplification, the expression for μ_n^(kin) in Eq. (A10).
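As a cross-check of these expressions, the quantities above can also be evaluated numerically: the sketch below computes ε_n^(kin) by direct quadrature, obtains μ_n^(kin) from finite differences of ε with respect to n_n, and then forms the pressure from the thermodynamic identity P = n_n μ − ε. The parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import quad

Mn, Nc = 939.0, 3              # MeV
LambdaQ, kappa = 420.0, 1.0    # illustrative quarkyonic parameters

def delta(kF):
    return LambdaQ**3 / kF**2 + kappa * LambdaQ / Nc**2

def eps_kin(kF):
    """Neutron kinetic energy density: full Fermi sphere for kF < Delta,
    a shell [kF - Delta, kF] with shifted spectrum otherwise (natural units)."""
    d = delta(kF)
    if kF < d:
        lo, e_k = 0.0, (lambda k: np.sqrt(k**2 + Mn**2))
    else:
        lo, e_k = kF - d, (lambda k: np.sqrt((k - d)**2 + Mn**2))
    val, _ = quad(lambda k: k**2 * e_k(k), lo, kF)
    return val / np.pi**2

def n_n(kF):
    d = delta(kF)
    return (kF**3 - max(kF - d, 0.0)**3) / (3.0 * np.pi**2)

kF, h = 500.0, 1e-2   # MeV
mu = (eps_kin(kF + h) - eps_kin(kF - h)) / (n_n(kF + h) - n_n(kF - h))
P = n_n(kF) * mu - eps_kin(kF)   # pressure via the thermodynamic identity
```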
Pressure
The kinetic theory expression for the pressure of a single species of spin-1/2 fermions is
P^(kin) = (T/π²) ∫_L^U dk k² ln[1 + e^{−β(e−μ)}] ,
where β = 1/T, e is the single-particle spectrum and μ the chemical potential. A partial integration on the right-hand side yields
P^(kin) = (T/π²) [ (k³/3) ln(1 + e^{−β(e−μ)}) ]_L^U + (1/3π²) ∫_L^U dk k³ (de/dk) f(k) ,
with f(k) the Fermi-Dirac occupation. At finite T, the first term vanishes when U = ∞ and L = 0, leaving the second term as the kinetic pressure. For T → 0 and finite L and U, however, we have
P^(kin) = (1/3π²) [ k³ (μ − e) ]_L^U + (1/3π²) ∫_L^U dk k³ (de/dk) .
The expressions for P_n^(kin) thus take different forms in the regions k_Fn < Δ and k_Fn > Δ. For k_Fn < Δ, the limits U = k_Fn and L = 0 yield the familiar kinetic theory expression
P_n^(kin) = (1/3π²) ∫_0^{k_Fn} dk k⁴ / √(k² + M_n²) ,
where e_k = √(k² + M_n²). The last two terms in Eqs. (A3) and (A14) cancel, and thus in this region ε_n^(kin) + P_n^(kin) = n_n μ_n^(kin) (the TI), with n_n = k_Fn³/(3π²) and μ_n^(kin) = √(k_Fn² + M_n²). In the region k_Fn > Δ, with U = k_Fn and L = (k_Fn − Δ), the kinetic theory pressure becomes
P_n^(kin) = (1/3π²) [ k³ (μ_n^(kin) − e_k) ]_{k_Fn−Δ}^{k_Fn} + (1/3π²) ∫_{k_Fn−Δ}^{k_Fn} dk k³ (de_k/dk) .   (A15)
The first term above gives the contribution from the boundaries of the shell. Inserting the appropriate limits for the shell, this term reads as
"year": 2019,
"sha1": "69e2c9eb8a166e8a08e4de862e2db8a4a0960c5e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1906.04095",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b9191e6a947863c964916108207d23bdee80407b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249642382 | pes2o/s2orc | v3-fos-license | Probabilistic Conformal Prediction Using Conditional Random Samples
This paper proposes probabilistic conformal prediction (PCP), a predictive inference algorithm that estimates a target variable by a discontinuous predictive set. Given inputs, PCP constructs the predictive set based on random samples from an estimated generative model. It is efficient and compatible with either explicit or implicit conditional generative models. Theoretically, we show that PCP guarantees correct marginal coverage with finite samples. Empirically, we study PCP on a variety of simulated and real datasets. Compared to existing methods for conformal inference, PCP provides sharper predictive sets.
Introduction
A core problem in supervised machine learning (ML) is to predict a target variable Y ∈ Y given a vector of inputs X ∈ R p . In this problem, a predictive function q(Y | X) is fitted on an observed dataset D = {(X i , Y i )} N i=1 and then used to predict the target Y N +1 of a new data point with inputs X N +1 . While much of machine learning focuses on point predictions of Y , the problem of predictive inference aims at more robust prediction. In predictive inference, our goal is to create a predictive set that is likely to contain the unobserved target [12].
In particular, the field of conformal inference develops predictive inference algorithms that aim for calibrated coverage probabilities [30,39]. Assume the data pairs (X_i, Y_i) are sampled independently and identically distributed (iid) from a population distribution P(X, Y). Given an input X, a conformal inference algorithm provides a set Ĉ_α(X) such that
P_{X,Y}(Y ∈ Ĉ_α(X)) ≥ 1 − α.   (1)
The scalar α ∈ [0, 1] is a predefined miscoverage rate and Ĉ_α(X) ⊂ Y is the predictive set. A set that satisfies Equation (1) is called a valid predictive set. Since the trivial set Ĉ_α(X) = Y is valid, one goal of conformal inference is to keep the size of the predictive set small and (thus) informative. This property is known as sharpness [10,21]. In this paper, we develop a new method for conformal inference that produces valid and sharp predictive sets.
Existing conformal inference methods often produce a continuous interval as the predictive set [3, 8,20,26,31,33]. Such intervals are appropriate in some predictive situations. However, consider a target distribution with separated high-density regions. In this setting, validity comes at the cost of sharpness [15]: to ensure validity the set must include all of the high-density regions; but since it's continuous it must also include the low-density regions between them.
For example, consider a prediction problem that estimates the drop-off location of a taxi passenger based on the passenger's information. The target distribution is likely to be multimodal, centered around locations such as tourist attractions and transit centers [37]. A continuous predictive set will have to encompass these regions, regardless of how far apart they are. A more informative set would contain the regions themselves, but not the areas between them. Other examples of multimodal targets include the effects of a stroke on brain regions [13], and the rewards of a robot's actions [28]. Our method is called probabilistic conformal prediction (PCP). Figure 2 illustrates the algorithm.
In more detail, PCP builds on the split conformal prediction framework [20,30]. It begins by randomly splitting the observed data D into a preliminary set D pre and a calibration set D cal . It then has three stages. (1) It fits a conditional generative model q(Y | X) to the preliminary data D pre .
(2) For each point (X_i, Y_i) in the calibration set D_cal, it generates K independent samples of predictions Ŷ_{X_i} = {Ŷ_{i1}, ..., Ŷ_{iK}} from the fitted model q(Y | X_i). It then calculates the distance between each sampled prediction and the true label Y_i. These quantities are called the nonconformity scores and measure the goodness-of-fit of the generative model. (3) Finally, it calculates and records the (1 − α) empirical quantile of the nonconformity scores. These will be used for the predictive sets.
With these calculations in place, PCP can form the predictive set of a new datapoint. First it generates sampled predictions from the fitted target distribution. Then each sample is expanded to a ball that centers at its point and has a radius equal to the quantile computed from the calibration set. Finally, the predictive set is defined as the union of the balls over the samples. Because it is centered at high-density regions, this predictive set is sharp. Further, as we prove below, it is valid.
There are several advantages to PCP (and a related extension, high-density PCP). First, it adapts automatically to the landscape of the target distribution, providing sharp and valid predictive sets regardless of the underlying distribution. Second, the generative model for PCP may have an explicit or implicit density function as long as random samples can be generated from it. Without requiring an explicit density, PCP is compatible with likelihood-free prediction [1,7] and is less prone to model misspecification [27,42]. Last, (HD-)PCP can be applied to multi-target regression, where the target variable Y ∈ Y = R^T, T ≥ 1 [5,26]. As we shall see, (HD-)PCP scales efficiently with the target dimension and creates a sharp predictive set by capturing the targets' dependencies.
(Figure 2 caption: Illustration of the stages of PCP. Data; Modeling: generate K random samples from a fitted q(Y | X); Calibration: compute scores E_i and the quantile Δy; Prediction: create the predictive set Ĉ(X) for a test data point.)
Related Work. PCP provides a contribution to the growing field of conformal inference. Some conformal inference methods are based on predicting summary statistics of the target distribution, for example, by fitting a mean response function [22,34], conditional quantile functions [31] and approximate histograms [33]. However, these methods produce a single continuous interval as the predictive set, which might be too loose for predicting multimodal targets.
Other conformal inference algorithms estimate the full target distribution. Distributional conformal prediction (DCP) is based on the estimated cumulative density function [8] but its prediction is often sensitive to the tail estimation [33]. CDSplit uses a level set of the estimated probability density function as the predictive set [16]. Similar to PCP, CDSplit can produce discontinuous predictive sets. However, the level set might be loose when the distribution has high dispersion and it has to be computed approximately. Thanks to its sampling-based design, PCP is more computationally efficient than CDSplit, and further it is compatible with likelihood-free predictions [1]. Empirically, across multiple datasets, PCP creates sharper predictive sets than these existing conformal methods.
Finally, there are a few conformal methods for multi-target regression [25,26,29]. Compared to these methods, PCP models the target variables jointly and can produce discontinuous predictive sets. As we show in the empirical studies, PCP provides sharper and more interpretable predictions.
2 Probabilistic conformal prediction
2.1 Problem setup
Assume the data pairs (X_i, Y_i) are drawn iid from an underlying distribution. We observe data D and the covariates X_{N+1} of a new data point. The goal is to form a predictive set Ĉ(X_{N+1}) for the unobserved target Y_{N+1} with valid uncertainty estimation. Specifically, we create a predictive set Ĉ_α(·) : X → Y that satisfies Equation (1) for α ∈ [0, 1]. Since an arbitrarily wide predictive set has valid coverage, a predictive set should be as sharp as possible.
Classic conformal prediction is based on leave-one-out estimation [39], which has high computational cost due to multiple model fitting. In this paper, we adopt the split conformal prediction framework, which improves computational efficiency by data-splitting [22,30]. It randomly splits the observed data to a preliminary set and a calibration set. The model is fit on the preliminary set and kept fixed in computing the nonconformity scores on the calibration set and the test set.
Generative model fitting
Our proposed PCP depends on random samples from a conditional generative model q(Y |X) that approximates the target variable distribution p(Y |X). This differs from standard conformal prediction methods that are based on fitting the summary statistics such as the conditional mean and quantiles of the target [20,31] and that depend on evaluating probability densities [8,15,16]. Since the only prerequisite is to sample from q(Y |X), we consider both typical conditional density estimation methods with explicit density function and popular generative models with implicit density.
PCP is compatible with a variety of CGMs, such as the Kernel Mixture Network (KMN) [2], Mixture Density Network (MixD) [4], Quantile Regression Forest (QRF) [24] and implicit generative models. We refer to Appendix D for more details about fitting a CGM and generating random samples from it. We regard CGMs as backbone models for PCP.
Uncertainty calibration with random samples
Suppose a conditional density model is fit on a preliminary data set D_pre. We use the fitted model q(Y|X) and the calibration data to construct a predictive set for a new test data point. For a data point (X_i, Y_i) in the calibration set, the algorithm first generates K random samples Ŷ_ik, k = 1, ..., K, independently from q(Y|X_i), denoted as Ŷ_i = {Ŷ_i1, ..., Ŷ_iK}. Then, it computes the distance from the observed outcome to this set of samples as
E_i = min_{k=1,...,K} ‖Y_i − Ŷ_ik‖.   (2)
The scalar E_i is set as the nonconformity score. Intuitively, a small score indicates that the speculated outcomes Ŷ_ik are close to the observed outcome Y_i, where Ŷ_ik are from the approximate density q(Y|X_i) and Y_i is from the true underlying density p(Y|X_i). Therefore, the scale of E_i reflects the distance between the estimated density and the true density. We use the empirical quantile of the nonconformity scores to construct the predictive set; the α-th empirical quantile of a set of m scores is defined as the ⌈αm⌉-th smallest value.
For a new data point with covariates X, we generate Ŷ = {Ŷ_1, ..., Ŷ_K} with Ŷ_k ∼ q(Y|X). Suppose that the desired nominal coverage is 1 − α. Then, each sample Ŷ_k is expanded to a region R_k = {y : ‖y − Ŷ_k‖ ≤ r} with an arbitrary norm and a radius r set as the (1 − α) quantile of the scores {E_1, ..., E_n} ∪ {∞}. We call R_k an element region of the data point X. The proposed predictive set is the union of the element regions,
Ĉ(X, Ŷ) = ∪_{k=1}^{K} {y : ‖y − Ŷ_k‖ ≤ r}.   (3)
As a special case, when the outcome is a scalar, the predictive set can be written explicitly as
Ĉ(X, Ŷ) = ∪_{k=1}^{K} [Ŷ_k − r, Ŷ_k + r].
The proposed PCP algorithm is summarized in Algorithm 1.
As shown in Equation (3), the predictive set can be either continuous or discontinuous. Therefore, it can produce a sharp estimate by automatically adapting to the distributional properties of the target distribution. When the generative model is less well fitted, PCP maintains a valid marginal coverage, properly quantifying the predictive uncertainty. When the generative model fits the target distribution well, the predictive set allocates its volume according to the random samples. For example, if p(Y |X) is multimodal and the multimodality is captured by the estimated q(Y |X), the predictive set would consist of discontinuous sets around the modes where each set is relatively small.
Though in some situations a continuous interval prediction is preferred in terms of interpretability [33], when the target is multimodal a discontinuous set might be more interpretable. For example, when predicting a watch price based on its appearance without knowing the brand, a price range ($100, $200) ∪ ($1000, $1200) might be more informative than ($100, $1200). Nevertheless, one can take the convex hull of a discontinuous set to form a continuous interval, but not vice versa.

Algorithm 1 Probabilistic Conformal Prediction
Step I: Conditional generative model
1: Split the data into three folds Z_tr, Z_val, Z_cal with index sets I_tr, I_val, I_cal respectively
2: Fit q(Y|X) on Z_tr with hyper-parameters chosen by cross validation on Z_val
Step II: Predictive set for a test point
Compute the predictive set Ĉ(X, Ŷ) by Equation (3)
Output: Predictive set Ĉ(X, Ŷ)
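For concreteness, a minimal NumPy sketch of the calibration and prediction steps is given below. The `sampler(x, K)` callable stands in for draws from a fitted conditional generative model, and the toy bimodal sampler at the end is purely illustrative; neither is taken from the paper's released implementation.

```python
import numpy as np

def pcp_calibrate(sampler, X_cal, Y_cal, alpha=0.1, K=40):
    """Radius r = conformal (1 - alpha) quantile of E_i = min_k ||Y_i - Yhat_ik||."""
    scores = []
    for x, y in zip(X_cal, Y_cal):
        samples = sampler(x, K)                              # (K, dim) draws from q(Y|x)
        scores.append(np.linalg.norm(samples - y, axis=1).min())
    scores = np.sort(np.append(scores, np.inf))              # inflated set E_{1:n} U {inf}
    idx = int(np.ceil((1.0 - alpha) * len(scores))) - 1      # ceil((1-alpha)(n+1))-th smallest
    return scores[idx]

def pcp_predict(sampler, x, radius, K=40):
    """Predictive set = union of balls of the calibrated radius around K fresh samples."""
    centers = sampler(x, K)
    contains = lambda y: np.linalg.norm(centers - y, axis=1).min() <= radius
    return centers, contains

# toy usage with a placeholder bimodal conditional sampler
rng = np.random.default_rng(0)
toy_sampler = lambda x, K: x + rng.choice([-2.0, 2.0], size=(K, 1)) + 0.3 * rng.standard_normal((K, 1))
X_cal = rng.standard_normal((200, 1))
Y_cal = X_cal + rng.choice([-2.0, 2.0], size=(200, 1)) + 0.3 * rng.standard_normal((200, 1))
r = pcp_calibrate(toy_sampler, X_cal, Y_cal)
centers, contains = pcp_predict(toy_sampler, X_cal[0], r)
```

The radius returned by `pcp_calibrate` plays the role of r in Equation (3), and the returned `contains` closure is the membership test for the union-of-balls predictive set.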
By the construction of the predictive set defined in Equation (3), the estimated density q(Y |X) can be explicit or implicit. This flexibility makes PCP compatible with a wide range of density estimators and generative models. Moreover, the predictive set in Equation (3) can be computed without approximation, making PCP scalable to a high dimensional target variable Y . Finally, PCP has a guaranteed marginal coverage as shown in Theorem 1.
When the scores E_1, ..., E_n are distinct almost surely, Theorem 1 demonstrates that the marginal coverage of PCP is tight. In particular, the condition of the upper bound is satisfied when p(Y|X) is continuous with respect to the Lebesgue measure.
In practice, we take the quantile of E_{1:n} instead of the inflated scores E_{1:n} ∪ {∞} in Equation (3). The following corollary offers the coverage guarantee under such a modification.
High Density Probabilistic Conformal Prediction (HD-PCP). Ideally, we may want the predictive sets to contain only high density regions to offer informative predictions. As shown in Appendix B, for different sets covering a specific probability of a multimodal distribution with the same marginal coverage, the high density region has the smallest size.
In PCP, the generated random samples include low-density samples. When K increases, the set size in the low-density region will decrease. However, PCP may generate many isolated sets and make interpretation difficult for practitioners. To mitigate this problem, we propose High Density Probabilistic Conformal Prediction (HD-PCP), which filters out a β fraction of low-density samples to identify the high-density regions when q(Y|X) is explicit. Instead of sampling K samples from q(Y|X) as in PCP, we sample K/(1 − β) samples for each X, keep the 1 − β fraction of samples with the highest estimated density, and keep all other parts of the algorithm the same as in PCP. The HD-PCP algorithm is summarized in Appendix Algorithm 2. The marginal coverage guarantee still holds for HD-PCP, as shown in Corollary 2.
Corollary 2. Under the conditions of Theorem 1, HD-PCP has the same marginal coverage as PCP.
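In code, the HD-PCP filtering step simply ranks the drawn samples by their estimated density and keeps the top fraction before the usual PCP calibration and prediction steps are applied. A minimal sketch is below; `logpdf` is a placeholder name for whatever density evaluator the explicit backbone model provides.

```python
import numpy as np

def hd_filter(samples, x, logpdf, beta=0.2):
    """Keep the (1 - beta) fraction of samples with the highest estimated density
    q(y|x); `samples` holds roughly K / (1 - beta) raw draws, shape (n_raw, dim)."""
    dens = np.array([logpdf(y, x) for y in samples])
    keep = int(np.ceil((1.0 - beta) * len(samples)))
    top = np.argsort(dens)[-keep:]            # indices of the highest-density draws
    return samples[top]
```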
The proofs of the theorems and corollaries are presented in Appendix A.
Experiments
In this section, we conduct a comprehensive analysis demonstrating the advantages of PCP compared to previously proposed conformal inference methods. We aim to answer the following questions: (a) How does PCP perform in terms of coverage and predictive set size when compared with baseline models on synthetic datasets that have multimodal p(Y|X) distributions? (b) Does the filtering technique improve the predictive set of PCP? (c) How well do PCP and HD-PCP perform on real datasets with a single target? (d) How do the backbone models impact the performance of PCP? (e) Does PCP provide better predictive sets in tasks with multi-dimensional targets? The code for the simulations is available at https://github.com/Zhendong-Wang/Probabilistic-Conformal-Prediction.
Our experiments are structured into three sections. We first conduct experiments on classic 2D synthetic data to answer questions (a) and (b). Then, we compare PCP and HD-PCP with a full set of baseline methods on several selected real datasets to address questions (b), (c) and (d). Finally, we conduct experiments on multi-dimensional regression tasks to address question (e). We run all our experiments on machines with AMD EPYC 7763 CPUs.
Baselines. We consider CHR [33], DistSplit [16], CDSplit [16], DCP [8], and CQR [31] as our comparison baselines. For CHR, we use two different conditional density estimation models, based on a neural network and on a random forest, and we denote them as CHR-NN and CHR-QRF. We evaluate all the mentioned baselines based on their public Github implementations except for CDSplit. We implement a python-based CDSplit based on its official R implementation, using the same backbone generative model for a fair comparison, denoted as CDSplit-KMN and CDSplit-MixD.
Choosing the hyperparameter K. We conduct an ablation study on the effect of the sample size K of PCP. As shown in Figure 3, we find empirically that as K increases, the average size of the predictive sets decreases quickly at first and then levels off. In practice, we set K moderately large to balance sharpness and computational cost, i.e., K = 40 or K = 1000 (two-dimensional targets).
Synthetic data experiments
To evaluate the effectiveness of the proposed methods, we compare the predictive sets of PCP and HD-PCP with those of other baseline methods on classic 2D synthetic data: s-curve, half-moons, 25-Gaussians, 8-Gaussians, circle and swiss-roll. We show the evaluation results for the s-curve and the 25-Gaussians in Figure 4 and place detailed results in Appendix E. Figure 4 illustrates that, for datasets with multimodal p(Y|X) distributions, models that consider multimodality, such as CDSplit and (HD-)PCP, work markedly better than models that can only provide unimodal predictions. This is consistent with our discussion in Section 1. Quantitatively, all models achieve the target marginal coverage (1 − α), while the average set sizes of CDSplit and (HD-)PCP are several times smaller than those from CHR. We demonstrate that both CDSplit and PCP can provide sharp and informative predictive sets for these multimodal datasets, and PCP is slightly better with respect to set size. The right two panels show the effect of the high-density filtering. The predictive sets from HD-PCP become cleaner and concentrated on the correct modes. Correspondingly, the average set sizes of HD-PCP drop considerably compared to PCP. The histogram moving from blue bins to orange bins also demonstrates the effectiveness of the filtering.
Datasets. We conduct real data experiments on 9 public-domain data sets: bike sharing data (bike), physicochemical properties of protein tertiary structure (bio), blog feedback (blog), and Facebook comment volume, variants one (fb1) and two (fb2), medical expenditure panel survey number 19 (meps19), number 20 (meps20), and number 21 (meps21) [31] and temperature forecast data [9]. Table 3 in Appendix F illustrates the dataset sizes and data splits for training, calibration and testing.
Evaluation Protocol. We evaluate the marginal coverage, the conditional coverage (approximated by the worst-slab conditional coverage [6,32]), and the predictive set size. We report results based on 50 random splits for all datasets. Table 1 shows the numerical results. For our methods, we report PCP-MixD and HD-PCP-MixD; for baselines, we report in the main paper the backbone model that works generally best across the 9 datasets with respect to set size. See detailed results in Appendix F: Table 4 reports the best results among the variants of each method; Table 5, Table 6 and Table 7 report full experiment results.
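For reference, the marginal coverage reported here is simply the fraction of held-out targets that land inside their predictive sets; a minimal sketch, assuming each predictive set is represented by a membership function as in the earlier PCP sketch:

```python
import numpy as np

def empirical_coverage(membership_fns, Y_test):
    """Fraction of test targets contained in their predictive sets;
    membership_fns[i](y) returns True if y lies in the set for the i-th test input."""
    hits = [fn(y) for fn, y in zip(membership_fns, Y_test)]
    return float(np.mean(hits))
```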
We observe that all conformal methods achieve (1 − α) marginal coverage and perform well in terms of the worst-slab conditional coverage. Thus, our comparison focuses on the size of predictive sets. As shown in Table 1, HD-PCP-MixD outperforms all the other baselines on 7 out of 9 datasets in terms of the predictive set size. When picking a backbone model for each dataset is allowed, our models outperform baselines on all datasets, as shown in Table 4. Comparing HD-PCP with PCP, we find that the filtering technique leads to consistent performance improvement. Table 4 shows that PCP outperforms the baselines by a large margin especially on blog, facebook1 and facebook2 datasets.
Moreover, note that PCP is flexible in the choice of backbone generative model, as long as the model can easily generate random conditional samples. This flexibility allows PCP to achieve good performance by choosing a generative model appropriate to the dataset. For example, PCP-SIVI, with an implicit backbone conditional generative model, works well on the bike and facebook data, as shown in Table 5.
The limitation of CDSplit may be explained by noting that the method needs to partition the data with the K-means algorithm, which is known to be unstable due to local minima. It also needs to approximate the level set on a grid of the target space to form the predictive set, and it may be sensitive to the range and coarseness of the grid. Thus, we notice that CDSplit produces large predictive sets on the facebook and meps data. Similar to Sesia and Romano [33], we observe that DCP is sensitive to the estimation of the distribution tails, which makes it unstable for some datasets. CHR is more robust because it only needs to estimate a histogram with relatively few bins [33], but it can only provide a single continuous interval and produces a loose predictive set when the data exhibit multimodality. CQR predicts intervals based on learned lower and upper quantiles, which leads to large intervals when the learned quantiles are not accurate or the data distribution is multimodal.
Multi-dimensional targets
Beyond single-target regression tasks, we study PCP and HD-PCP on multi-target datasets. We use the same strategy as in Neeven and Smirnov [29] to adapt previous baselines to multi-target conformal algorithms by fitting each dimension separately with a suitably adjusted coverage level. We construct a synthetic dataset to illustrate the benefit of PCP when targets are dependent. Covariates X are sampled from N(0, 1)^5, and the target Y is randomly sampled from a bi-modal Gaussian distribution whose dependence between the target dimensions is controlled by a parameter ρ. The synthetic data distribution in Figure 5a shows that the distribution concentrates as ρ increases. As shown in Figure 5b and Figure 5c, PCP achieves the best performance in terms of average predictive set size. We observe that as ρ gets higher, only PCP shrinks the predictive set accordingly, while the predictions from the other methods change little and become loose. The detailed quantitative results are in Table 8, Appendix E.
We further study conformal methods on two multi-target real datasets: a New York City taxi drop-off dataset and a building dataset for energy efficiency analysis [38]. For the latter, the predictors are building information such as orientation, glazing area and wall area, and we use 568, 100, and 100 samples for the train, calibration and test sets. We report the predictive set sizes of each algorithm. Due to the overlapping regions, the predictive set size of PCP cannot be calculated exactly; we estimate it by Monte Carlo simulation with a grid of size 100 on each dimension. For (HD-)PCP and CDSplit, we use MixD as the backbone model. As shown in Table 2, algorithms considering multimodality have significantly better performance compared to the other baselines. We visualize the conformal region predicted by (HD-)PCP in Figure 1 (see Appendix G for more qualitative results). We notice that PCP can successfully capture the most popular drop-off regions of New York City, downtown Manhattan, LaGuardia airport and JFK airport, while methods with continuous predictive sets would learn a wide bounding box. Furthermore, as shown in Table 2, PCP has a similar coverage level but a much smaller predictive set compared to CDSplit. This might be because the joint estimation of the high-dimensional targets can capture dependencies between the target elements. Applying the filtering, HD-PCP provides a cleaner and more interpretable predictive set and further reduces the set size.
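The grid-based size estimate can be computed as in the sketch below, which counts grid cells whose centers fall inside at least one element ball; the bounding box and names are illustrative choices rather than the exact evaluation code.

```python
import numpy as np

def union_ball_area(centers, radius, grid_size=100, pad=0.5):
    """Approximate the area of a union of 2-D balls of a common radius by
    counting grid cells (grid_size points per dimension) inside the union."""
    lo = centers.min(axis=0) - radius - pad
    hi = centers.max(axis=0) + radius + pad
    xs = np.linspace(lo[0], hi[0], grid_size)
    ys = np.linspace(lo[1], hi[1], grid_size)
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return (d2.min(axis=1) <= radius**2).sum() * cell
```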
Due to space constraints, we include experiments with two other multi-target real datasets (8 and 3 targets) in Appendix H. Since Monte Carlo estimation of the volume of a high-dimensional nonconvex set suffers from the curse of dimensionality, we focus on two-dimensional tasks and evaluate the predictive sets generated for every pair of targets. PCP and HD-PCP offer significantly sharper predictive sets compared to the baselines on almost all pairs, and HD-PCP further improves on the performance of PCP.
Conclusion and Future Work
We proposed novel conformal inference algorithms, PCP and HD-PCP, that find valid and sharp predictive sets using random samples from a conditional generative model; PCP and HD-PCP can be built from either implicit or explicit models. PCP and HD-PCP outperform existing methods, particularly with multimodal data and multi-dimensional targets.
Like most existing conformal methods, PCP and HD-PCP rely on a reasonably accurate fitted conditional generative model. When the generative model is inaccurate, PCP may only provide wide, uninformative predictive sets to ensure validity. However, given the recent success of conditional generative models, PCP and HD-PCP may be promising in many domains, including precision medicine, quantitative finance and marketing [17,19,41].
Here we focused on regression tasks. Future research can consider adapting PCP and HD-PCP to conformal treatment effect estimation and classification problems [11,32,44].
Appendix A Proof
For completeness, we first present a lemma adapted from Tibshirani et al. [36] and Romano et al. [31].
By Lemma 1 of Tibshirani et al. [36], together with exchangeability and the definition of the empirical quantile, the stated bounds follow, where the upper bound holds when Z_{1:n+1} are almost surely distinct. By Equation (8), the proof is completed.
Proof of Theorem 1. Given the estimated conditional density q(Y|X), fit on an independently sampled training set D_tr, by assumption the calibration set {(X_i, Y_i)}_{i=1}^n and the test point (X_{n+1}, Y_{n+1}) are exchangeable. Denote D_i = (X_i, Y_i, Ŷ_i) for i = 1, ..., n, n+1; then D_i ∼ p(X, Y) q^K(Y|X_i). The nonconformity score E_i in Equation (2) is defined as a deterministic function of D_i. Therefore {E_i}_{i=1}^{n+1} are exchangeable [18] and are almost surely distinct. By Lemma 1, the coverage bounds apply to the event E_{n+1} ≤ Q̂.
Next we demonstrate that Y_{n+1} ∈ Ĉ(X_{n+1}, Ŷ_{n+1}) if and only if E_{n+1} ≤ Q̂. Suppose the LHS is true; then there exists m, 1 ≤ m ≤ K, such that Y_{n+1} ∈ {y : ‖y − Ŷ_{n+1,m}‖ ≤ Q̂}. This means E_{n+1} = min_k ‖Y_{n+1} − Ŷ_{n+1,k}‖ ≤ Q̂. On the other hand, suppose the RHS is true; letting t = arg min_k ‖Y_{n+1} − Ŷ_{n+1,k}‖, we have ‖Y_{n+1} − Ŷ_{n+1,t}‖ ≤ Q̂, i.e., Y_{n+1} ∈ {y : ‖y − Ŷ_{n+1,t}‖ ≤ Q̂}. Therefore, Y_{n+1} ∈ Ĉ(X_{n+1}, Ŷ_{n+1}).
Then, by Equation (10), the stated coverage bounds hold conditionally on D_tr; marginalizing out D_tr, the statement is proved.
Proof of Corollary 1. Suppose 0 ≤ β ≤ n/(n + 1); then ⌈(n + 1)β⌉ ≤ n. We have the required bound, where Z̃_{1:n} is defined in Equation (7). By Equations (8) and (9), and then by Equation (13), the claimed coverage follows.
Proof of Corollary 2. With the notation in the proof of Theorem 1, for a fixed conditional density function q(y|x), the nonconformity score E_i is fully determined by D_i through a deterministic function that includes the filtering step of HD-PCP. By Kuchibhotla [18], the scores remain exchangeable, and the other parts of the proof follow the same lines as the proof of Theorem 1.
B High Density Probabilistic Conformal Prediction
We summarize our HD-PCP algorithm in Algorithm 2. Figure 6 shows the different predictive sets with 95% coverage when the underlying distribution is a bi-modal normal.
C Hyperparameters
PCP introduces one additional hyperparameter, the sample size K. We conduct an ablation study on the effect of K in Figure 3 and find that as long as K is set moderately large, e.g., K = 40, the predictive set size is near optimal. Thus K is not a very sensitive hyperparameter and does not require much tuning effort.
D Summary of Conditional generative models
SIVI Model. Following [40], we build a conditional distribution estimator using semi-implicit variational inference [42] and call it the SIVI model. Specifically, we approximate P(Y | X) by an inference distribution q_φ(Y | X) with parameters φ. We construct the inference distribution as a hierarchical model,
y ∼ q_{φ_1}(y | x, z),   z ∼ q_{φ_2}(z | x, ψ),   ψ ∼ p(ψ).
Here z ∈ R^d is an auxiliary latent variable and p(ψ) is a known noise distribution, e.g., N(0, I). The inference distribution is the marginal distribution of the hierarchical model,
q_φ(y | x) = ∫ q_{φ_1}(y | x, z) q_{φ_2}(z | x) dz,
with φ = (φ_1, φ_2). We model q_{φ_1}(y | x, z) and q_{φ_2}(z | x, ψ) as Gaussian distributions whose mean and standard deviation are the outputs of neural networks fed with the corresponding (x, z) and (x, ψ). The marginal distribution q(y | x) can thus be constructed with flexibility in modeling multimodality, skewness and kurtosis. We learn φ by maximizing the ELBO [40], where p(z) is a prior distribution for the latent variable z, e.g., N(0, I), and K is set to 20.

Algorithm 2 High Density Probabilistic Conformal Prediction
Input: data, nominal level α, test point X, generative model class Q, sample size K, β grid B (for HD-PCP)
Step I: Conditional generative model
1: Split the data into three folds Z_tr, Z_val, Z_cal with index sets I_tr, I_val, I_cal respectively
2: Estimate q(Y|X) on Z_tr with hyper-parameters chosen by cross validation on Z_val
Step II: Predictive set for a test point
1: For i ∈ I_cal, sample Ŷ_i1, ..., Ŷ_iK ∼ q(Y|X_i)
2: For each β ∈ B, filter out the β fraction of {Ŷ_ik}_{k=1}^K with the lowest density; repeat Lines 3-7 for x ∈ I_cal and set β_0 = arg min_β λ(∪_{x∈I_cal} Ĉ_β(x, Ŷ))
3: For a test point, sample Ŷ_1, ..., Ŷ_K ∼ q(Y|X)
4: Filter out the β fraction of {Ŷ_k}_{k=1}^K with the lowest density
5: Compute the nonconformity scores {E_i}_{i∈I_cal} by Equation (2); E_{N+1} = ∞; Ĩ_cal = I_cal ∪ {N+1}
6: Set r as the (1 − α) empirical quantile of {E_i}_{i∈Ĩ_cal}
7: Compute the predictive set Ĉ_β(X, Ŷ) by Equation (3)
Output: Predictive set Ĉ_{β_0}(X, Ŷ)
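Returning to the SIVI construction above: PCP only needs draws from the marginal q_φ(y | x), which are obtained by ancestral sampling through the hierarchy. A minimal sketch with placeholder "networks" (fixed random weights, illustrative only) is shown below; in practice φ_1 and φ_2 are trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_y = 4, 1

def sivi_sample(x, K, net_z, net_y):
    """Ancestral sampling: psi ~ N(0, I), z ~ q_{phi2}(z|x, psi), y ~ q_{phi1}(y|x, z).
    net_z and net_y return the (mean, std) of the corresponding conditional Gaussians."""
    ys = np.empty((K, d_y))
    for k in range(K):
        psi = rng.standard_normal(d_z)
        mu_z, sd_z = net_z(x, psi)
        z = mu_z + sd_z * rng.standard_normal(d_z)
        mu_y, sd_y = net_y(x, z)
        ys[k] = mu_y + sd_y * rng.standard_normal(d_y)
    return ys

# placeholder parameter maps standing in for the trained networks
Wz = rng.standard_normal((d_z, d_z))
Wy = rng.standard_normal((d_y, d_z))
net_z = lambda x, psi: (np.tanh(Wz @ psi) + np.mean(x), 0.5)
net_y = lambda x, z: (Wy @ np.tanh(z) + np.mean(x), 0.3)
samples = sivi_sample(np.array([0.2, -0.1]), K=40, net_z=net_z, net_y=net_y)
```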
GAN model. To fit a conditional distribution, we follow [27] to build a conditional GAN model with a generator G(x, z) and a discriminator D(x, y); for simplicity, we call it the GAN model. Here z is a latent variable, usually set as z ∼ N(0, I); G(x, z) is modeled by a neural network whose outputs are samples of y, and D(x, y) is another neural network that outputs the probability that a given y comes from the true data distribution. We train G and D with the adversarial loss
min_G max_D E_{(x,y)}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))].
Kernel Mixture Network. The KMN models the conditional density as a weighted sum of kernels centered at observed target values, where p denotes the index of the observed data points, K_j is a pre-set kernel function, j indexes the selected bandwidth for K_j, and w_{p,j}(x; W) represents the weight of each kernel. A common choice for K_j is the Gaussian kernel. The weights w_{p,j}(x; W) are determined by a deep neural network (DNN) with covariates x as inputs and W as parameters; all weights are non-negative by applying a non-negative activation function to the output layer of the DNN. We train the KMN model by minimizing the corresponding negative log-likelihood loss.
Mixture Density Network. The mixture density network [4] models the conditional density as
q(y | x) = Σ_{k=1}^{K} π_k(x) N(y; μ_k(x), σ_k(x)²),
where π_k(·), μ_k(·) and σ_k(·) are all modeled by neural networks.
The constraint Σ_{k=1}^{K} π_k(x) = 1 is guaranteed by using a softmax activation function, and the model is trained by minimizing the negative log-likelihood over the observed data points.
Quantile Regression Forest. Meinshausen [24] shows that random forests provide information about the full conditional distribution of the response variable, not only about the conditional mean. Conditional quantiles can be inferred with quantile regression forests, a generalisation of random forests, which give a non-parametric and accurate way of estimating conditional quantiles for high-dimensional predictor variables. We refer to [24] for more details about the QRF model. PCP needs samples from the QRF: we first sample a percentile τ ∼ U[0, 1], the uniform distribution on the unit interval, and then use the QRF to obtain the estimated conditional quantile value y_τ as a sample of y.
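Two short sketches of how PCP draws samples from these backbones: the first samples from a Gaussian mixture of the KMN/MixD form (with placeholder mixture parameters standing in for the network outputs at a given x), and the second implements the QRF recipe just described, i.e., draw τ ∼ U[0, 1] and return the estimated conditional τ-quantile (here a toy quantile function rather than an actual fitted forest).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def sample_mixture(pi, mu, sigma, K):
    """Draw K samples from a 1-D Gaussian mixture; for KMN/MixD, (pi, mu, sigma)
    would be the model outputs evaluated at the conditioning input x."""
    comps = rng.choice(len(pi), size=K, p=pi)
    return mu[comps] + sigma[comps] * rng.standard_normal(K)

def sample_from_quantile_fn(quantile_fn, x, K):
    """Inverse-CDF sampling: tau ~ U[0,1], y = conditional tau-quantile at x.
    For QRF, quantile_fn would wrap the forest's quantile estimate (placeholder here)."""
    taus = rng.uniform(0.0, 1.0, size=K)
    return np.array([quantile_fn(x, t) for t in taus])

y_mix = sample_mixture(np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([0.3, 0.3]), K=40)
toy_quantile_fn = lambda x, tau: x + norm.ppf(tau)   # toy quantile function, illustrative only
y_qrf = sample_from_quantile_fn(toy_quantile_fn, x=0.5, K=40)
```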
E Full synthetic experiment results
We include the full synthetic experiment results for the 2D toy datasets (s-curve, half-moons, 25-Gaussians, 8-Gaussians, circle and swiss-roll) in Figure 7 and Figure 8. We compare conformal prediction with mean estimation (CP-MeanPred), CHR-QRF and CDSplit-MixD with our method (HD-)PCP.
The data used to plot Figure 5b and Figure 5c are included in Table 8. When ρ increases, the set size decreases for PCP and HD-PCP while remaining nearly constant for the other baselines, which overlook the joint relationship between the targets.
(Figure caption: Y response variable of the blog, facebook1, and facebook2 data. The first row shows the marginal histograms of Y. The second row shows an approximation of the Y|X distribution, where we fit K-means on X to generate 5 clusters. Multimodality is clearly exhibited in the Y|X distribution.)
F Full real data experiment results
We report all experiment results for our single-target real data regression tasks in Table 5, Table 6 and Table 7. Table 4 illustrates the best performance of each method, with the best backbone model picked for each dataset respectively.
G Additional Plots for NYC Taxi Data
Since the Monte Carlo estimation of the volume of overlapping hyperspheres suffers from the curse of dimensionality, we convert each dataset into two-dimensional pairwise comparisons to evaluate the robustness of each method (8 targets result in 28 pairs). We plot the pairwise comparison of PCP and HD-PCP against CHR and CDSplit, the two baselines that perform generally best on the other datasets. We use the Mixture Density Network for CDSplit, PCP and HD-PCP and the neural-network-based CHR; the results are averaged over 5 runs.
On the X-axis we plot the set size of PCP / HD-PCP, and on the Y-axis we plot the set size for CDSplit and CHR; we also show the Y = X line. If all points fall into the left region, it means PCP / HD-PCP outputs a sharper predictive set. For PCP, almost all points fall into the left region, which indicates that PCP has better or comparable performance to CDSplit and CHR in all pairwise comparisons. HD-PCP performs much better still, with all points falling into the far-left part of the figure, which shows that HD-PCP offers a much sharper predictive set.
"year": 2022,
"sha1": "2f8e61b73a27a3b8cf35d1c36d9fd0fec8182ac4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "caf9b1c5458ee90a6351f6438c3a705e4f2086df",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
268445037 | pes2o/s2orc | v3-fos-license | The aromatherapy relaxation effect of lavender ( Lavandula angustifolia ) and peppermint ( Mentha piperita ) against decreased stress in patients with pulmonary tuberculosis
ABSTRACT
INTRODUCTION
Until now, pulmonary TB has been a global concern, with various control efforts being carried out to reduce the incidence and mortality rate. Pulmonary TB is still one of the public health problems in the world, with mortality rates exceeding those of the Human Immunodeficiency Virus (HIV). Tuberculosis causes various health problems, namely decreased physiological abilities, limited social interaction, limitations in carrying out spiritual needs, decreased work productivity, and psychological changes. 1 According to the WHO Global Tuberculosis Report (2016), the high incidence of cases and deaths from pulmonary TB placed Indonesia 33rd in the world for tuberculosis in 2015. 1 The incidence of tuberculosis is reported to have increased drastically in the last decade throughout the world, including in Indonesia. Complaints felt by pulmonary TB patients can vary, and pulmonary TB may even be found without any complaints at all on health examination.
The pharmacological treatment for tuberculosis has been determined by the government, namely in accordance with the Directly Observed Treatment Short course (DOTS) strategy. Pulmonary TB treatment is completed within 6 months. However, under field conditions, many TB patients fail to complete treatment because of the uncomfortable side effects of anti-tuberculosis drugs (OAT) and boredom with the long treatment. 1 According to previous research, to reduce shortness of breath in pulmonary TB patients, apart from using medical drugs, peppermint aromatherapy can also be given with simple inhalation or evaporation methods. 2 Aromatherapy is a therapeutic action because it uses oils that are useful for improving physical and psychological states, for relaxation therapy, relieving stress, and calming the mind. 3 One of the aromatherapies often used is lavender aromatherapy, which is calming. According to Prima Dewi (2017), aromatherapy using lavender oil is believed to have a relaxing effect on nerves and muscles that are tense (carminative) after tiring activities, in addition to having a drowsy (sedative) effect. 4 Based on the description above, the researchers are interested in knowing the effect of relaxing lavender (Lavandula angustifolia) and peppermint (Mentha piperita) aromatherapy on reducing stress in pulmonary TB patients.
General Data of Respondents
Based on Table 1, almost half of the respondents are aged 46-60 years, namely 11 people (38%). Most of them are male, as many as 19 people (66%). The education level of the respondents is mostly high school, with as many as 20 people (69%). In terms of work, 10 people (35%) do not work and 10 people (35%) are self-employed. This is because most patients are no longer working due to declining health conditions and the need to undergo 6 months of treatment. Some respondents had even just got a job but had to stop working because of their illness.
Table 2 shows that of the 29 respondents before being given the intervention, it was found that most of the respondents were at moderate stress levels (19-25), namely 15 people (52%).
Table 3 shows that, of the 29 respondents after the intervention, most experienced a change to normal stress levels, as many as 17 people (59%), and a small portion, 1 person (3%), remained at a moderate stress level.
Table 4 shows that the statistical analysis with the Wilcoxon signed-rank test obtained a value of (α count) = 0.000, so H1 is accepted, which means that there is a significant effect of providing the intervention of relaxing Lavender (Lavandula angustifolia) and peppermint (Mentha piperita) aromatherapy on reducing stress in pulmonary TB patients; subjects were observed before the intervention and then observed again after the intervention. The study population was pulmonary TB patients at the Bhakti Dharma Husada Hospital, taken from the average monthly number of TB patients in the Sadewa Isolation Room from January to February 2022, as many as 31 patients. The respondents of this study were pulmonary TB patients with 6 days of treatment. The sampling technique used was simple random sampling, that is, taking sample members from the population randomly without regard to the existing strata in the population. The independent variable of the study is lavender and peppermint relaxation aromatherapy; the dependent variable is stress reduction. The instruments used in the study were secondary data from medical records containing demographic data covering the patient's identity, age, gender and education; the researcher copied these onto a recapitulation sheet, and only the researchers know the data. The instrument for the independent variable, lavender and peppermint relaxation aromatherapy, was an SOP. 5 For the dependent variable, observation sheets and the DASS 42 questionnaire were used.
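To illustrate the analysis described above, a minimal Python sketch is given below; the pre/post scores are hypothetical placeholders rather than the study data, and the categorization follows the DASS-42 stress cut-offs quoted in this paper (normal 0-14, mild 15-18, moderate 19-25, severe 26-33).

import numpy as np
from scipy.stats import wilcoxon

pre  = np.array([24, 21, 27, 19, 22, 25, 20, 23, 26, 18])   # hypothetical pre-test stress scores
post = np.array([13, 12, 20, 10, 14, 16, 11, 15, 17,  9])   # hypothetical post-test stress scores

def dass_stress_category(score):
    # cut-offs as quoted in the paper; anything above 25 is grouped as severe here
    if score <= 14:
        return "normal"
    if score <= 18:
        return "mild"
    if score <= 25:
        return "moderate"
    return "severe"

statistic, p_value = wilcoxon(pre, post)   # paired, non-parametric test of the pre/post shift
print(p_value, [dass_stress_category(s) for s in post])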
Relaxation aromatherapy was given twice daily (day and night) for 30 minutes, repeated independently by the patient, helped by the family, for 6 days. The lavender and peppermint aromatherapy concentration was 2-3 drops of each essential oil (lavender and peppermint) in 100 ml of water in a diffuser; the aromatherapy concentration was based on previous research. 5 After the diffuser was connected to electricity, it was placed close to the patient, at a distance of about 100 cm, on the bedside cabinet (table) in each patient's room.
The questionnaire sheet (post-test) was given again in the following week to be filled in by the patient through interviews with patients and their families. After filling out the questionnaire, the researcher observed vital signs. During the research process, patients were expected not to take sedatives. This aims to determine the effectiveness of lavender and peppermint aromatherapy in reducing stress without the presence of other influencing factors, for example, medicine from a doctor.
(69%) had a high school education level and, in terms of work, 10 people (35%) did not work and 10 people (35%) were self-employed.
Education level and work are among the predisposing factors for stress. According to a previous study, a person's level of education can affect their understanding of information obtained from various sources; the higher the level of education received, the better one's understanding and knowledge. 9 Meanwhile, another study said that work problems are a source of stress that many people experience. 10 Many people suffer from depression and anxiety because of work problems, for example, too much work, promotion, job loss (PHK), and so on.
Effect of relaxation aromatherapy lavender (Lavandula angustifolia) and peppermint (Mentha piperita) on stress reduction in pulmonary TB patients
Based on the results of the statistical analysis with the Wilcoxon signed-rank test, it was found that there was an effect of giving lavender and peppermint aromatherapy on reducing stress in pulmonary TB patients. This is based on the research results: before the intervention, most of the respondents were at moderate stress levels (19-25), namely 15 people (52%); the data after the intervention showed a decrease in stress, with most at normal stress levels (0-14), as many as 17 people (59%). The content of linalyl acetate and linalool in lavender and of pure menthol in peppermint acts as an anti-anxiety agent. According to a previous study, peppermint with menthol can reduce stress, anxiety, negative thoughts and fear. 11 This is because, after the respondent inhales peppermint, the molecules and aromatherapy particles enter through the respiratory tract (nose) and are forwarded by nerve receptors to be received as a good signal and perceived as a pleasant aroma; in the final stage, the odor stimulation enters and affects the limbic system, a person's emotional center, so that feelings become more relaxed.
A feeling of calm will allow a person to think calmly to overcome stressors so
DISCUSSION
Respondents' stress level before being given relaxation aromatherapy lavender (Lavandula angustifolia) and peppermint (Mentha piperita)
The results showed that, of the 29 respondents before being given relaxation, 3 people (10%) had severe stress levels (26-33), 15 people (52%) had moderate stress (19-25) and 11 people (38%) had mild stress (15-18). Stress is a variety of physical, chemical, or emotional factors that can cause physical or mental anxiety and can be one of the factors causing disease. 6 The results showed that most of the respondents were at moderate stress levels, and some were even found to be at severe stress levels. Based on the general data, 9 respondents (31%) were aged 31-45 years and 11 (38%) were aged 46-60 years, so almost all respondents, 20 people (69%), were aged under 60 years. This is in accordance with the theory put forward by Yusuf et al., 2015, that age is a stress predisposing factor. In the 2015 Mandaknalli study, it was found that stress in TB patients was experienced by patients aged 24-60 years. The older a person with tuberculosis is, the higher the stress level. 6 Sick conditions, especially in pulmonary TB patients who require very long healing therapy, can affect the psychological state of the patient. One consequence is that the patient's emotional status will be disturbed due to chronic illness conditions that can cause severe stress. 7 Pulmonary TB is a classic example of a disease that affects the patient not only physically and biologically but also psychologically, socially and spiritually. 8 Pulmonary TB as an infectious disease causes negative stigma in the community, so patients experience discrimination due to neglect and public reluctance to interact with pulmonary TB patients. This causes the patient to become more stressed and even depressed and afraid to interact with the community.
The stress level of respondents after being given relaxation aromatherapy lavender (Lavandula angustifolia) and peppermint (Mentha piperita)
The results showed that, of the 29 respondents after being given aromatherapy relaxation, 17 people (59%) did not experience stress, i.e., were normal (0-14), while 11 people (38%) had mild stress (15-18) and 1 person (3%) was at a moderate stress level (19-25). There was still 1 person at a moderate stress level after being given aromatherapy relaxation. Based on the general data, it was found that most of the respondents, namely 20 people
that adaptive coping will be created. This is reinforced by research: according to another study, the benefits of aromatherapy include helping to relieve stress. 12-15
CONCLUSION
Based on the results of the research and discussion that have been described, it can be concluded that there is an effect of giving relaxation aromatherapy Lavender (Lavandula angustifolia) and Peppermint (Mentha piperita) on reducing stress in pulmonary TB patients. | 2023-10-22T19:19:28.042Z | 2023-09-20T00:00:00.000 | {
"year": 2023,
"sha1": "22451e13d2ef9b8fe7e386a4781fd49e02227b2b",
"oa_license": null,
"oa_url": "https://doi.org/10.15562/bmj.v12i3.4376",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "22451e13d2ef9b8fe7e386a4781fd49e02227b2b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118696292 | pes2o/s2orc | v3-fos-license | The Phantom Term in Open String Field Theory
We show that given any two classical solutions in open string field theory and a singular gauge transformation relating them, it is possible to write the second solution as a gauge transformation of the first plus a singular, projector-like state which describes the shift in the open string background between the two solutions. This is the "phantom term." We give some applications in the computation of gauge invariant observables.
Introduction
Perhaps the most mysterious aspect of Schnabl's analytic solution for tachyon condensation [1] is the so-called phantom term-a singular and formally vanishing term in the solution which appears to be solely responsible for the disappearance of the D-brane. Much work has since shed light on the term, either specifically in the context of Schnabl's solution [1,2,3], or in some generalizations [4,5,6,7,8,9], but so far there has been limited understanding of why the phantom term should be present.
In this paper we show that the phantom term is a consequence of a particular and generic property of string field theory solutions: Given any two solutions Φ 1 and Φ 2 , it is always possible to find a ghost number zero string field U satisfying This is called a left gauge transformation from Φ 1 to Φ 2 [10]. The existence of U implies that Φ 2 can be expressed as a gauge transformation of Φ 1 plus a singular projector-like state which encapsulates the shift in the open string background between Φ 1 and Φ 2 . This is the phantom term. The phantom term is proportional to a star algebra projector called the boundary condition changing projector, which is conjectured to describe a surface of stretched string connecting two BCFTs [10]. One consequence of this description is that phantom terms, in general, do not vanish in the Fock space, as is the case for Schnabl's solution. This paper can be viewed as a companion to reference [10], to which we refer the reader for more detailed discussion of singular gauge transformations and boundary condition changing projectors. Our main goal is to show how the phantom term can be used to calculate physical observables, even for solutions where the existence of a phantom term was not previously suspected. We give three examples: The closed string tadpole amplitude [11] for identity-like marginal deformations [12]; the energy for Schnabl's solution [1]; and the shift in the closed string tadpole amplitude between two Schnabl-gauge marginal solutions [13,14]. The last two computations reproduce results which have been obtained in other ways [1,15], but our approach brings a different perspective and some simplifications. This description of the phantom term will be useful for the study of future solutions.
The Phantom Term
To start, let's review some concepts and terminology from [10]. Given a pair of classical solutions Φ 1 and Φ 2 , a ghost number zero state U satisfying is called a left gauge transformation from Φ 1 to Φ 2 . If U is invertible, then Φ 1 and Φ 2 are gauge equivalent solutions. U, however, does not need to be invertible. In this case, we say that the left gauge transformation is a singular gauge transformation.
It is always possible to relate any pair of solutions by a left gauge transformation. Given a ghost number −1 field b, we can construct a left gauge transformation from Φ 1 to Φ 2 explicitly with the formula: Here, Q Φ 1 Φ 2 is the shifted kinetic operator for a stretched string between the solutions Φ 1 and Φ 2 . Equation (2.2) is not necessarily the most general left gauge transformation from Φ 1 to Φ 2 . This depends on whether Q Φ 1 Φ 2 has cohomology at ghost number zero [10].
In the examples we have studied, the left gauge transformation U has an important property: If we add a small positive constant to U, the resulting gauge parameter U + ǫ is invertible. 3 This raises a question: If an infinitesimal modification of U can make it invertible, why are Φ 1 and Φ 2 not gauge equivalent? The answer to this question is contained in the following identity: which follows easily from the definition of U. The first term Φ 1 (ǫ) is a gauge transformation of Φ 1 , and the second term ψ 12 (ǫ) is a remainder. Apparently, if Φ 1 and Φ 2 are not gauge equivalent, the remainder must be nontrivial in the ǫ → 0 limit: This is the phantom term. The phantom term is proportional to a star algebra projector, called the boundary condition changing projector [10]. The boundary condition changing projector is a subtle object, and is responsible for some of the "mystery" of the phantom term. Based on formal arguments and examples, it was argued in [10] that the boundary condition changing projector represents a surface of stretched string connecting the BCFTs of Φ 2 and Φ 1 . The phantom term is useful because it gives an efficient method for computing gauge invariant observables from classical solutions. In the ǫ → 0 limit, the pure-gauge term Φ 1 (ǫ) effectively "absorbs" all of the gauge-trivial artifacts of the solution, leaving the phantom term to describe the shift in the open string background in a transparent manner.
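Since the argument turns entirely on these relations, it may help to write them out explicitly; the following is our reconstruction under the conventions standard for left gauge transformations (the signs, the use of Q_{Φ1Φ2}, and the labels Φ1(ǫ), ψ12(ǫ) are our assumptions and need not match the source's equation numbering):

\[
  Q_{\Phi_1\Phi_2} U \;\equiv\; QU + \Phi_1 U - U \Phi_2 \;=\; 0\,,
  \qquad
  U \;=\; Q_{\Phi_1\Phi_2}\, b \;=\; Q b + \Phi_1 b + b\, \Phi_2\,,
\]
so that, for any constant \(\epsilon\) such that \(U+\epsilon\) is invertible,
\[
  \Phi_2
  \;=\;
  \underbrace{\frac{1}{U+\epsilon}\,(Q+\Phi_1)\,(U+\epsilon)}_{\Phi_1(\epsilon)}
  \;+\;
  \underbrace{\frac{\epsilon}{U+\epsilon}\,\bigl(\Phi_2-\Phi_1\bigr)}_{\psi_{12}(\epsilon)}\,,
\]
and the phantom term is the \(\epsilon\to 0^{+}\) limit of \(\psi_{12}(\epsilon)\), with \(\epsilon/(U+\epsilon)\) tending to the projector onto the kernel of \(U\) (the boundary condition changing projector).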
In this paper we evaluate the on-shell action and the closed string tadpole amplitude [11]: Here we use the notation, In the subalgebra of wedge states with insertions, the 1-string vertex Tr[·] is equivalent to a correlation function on the cylinder [16] whose circumference is determined by the total wedge angle (cf. appendix A of [8]). The shift in the action and the tadpole between the solutions Φ 1 and Φ 2 can be conveniently expressed using the phantom term: and These equations are exact for any ǫ, though they are most useful in the ǫ → 0 limit. It would be interesting to see whether the phantom term can also be useful for computing the boundary state [17]. Note that the phantom term is a property of a pair of solutions and a singular gauge transformation relating them. In this sense, a solution by itself does not have a phantom term. That being said, some solutions-like Schnabl's solution-seem to be naturally defined as a limit of a pure gauge configuration subtracted against a phantom term. Other solutions, like the "simple" tachyon vacuum [8] and marginal solutions, can be defined directly without reference to a singular gauge transformation or its phantom term. It would be interesting to understand what distinguishes these two situations, and why.
Relation to Schnabl's Phantom Term
The phantom term defined by equations (2.3) and (2.4) is different from the phantom term as it conventionally appears in Schnabl's solution [1] or some of its extensions [4,5,9].
The standard phantom term can be derived from the identity where we define X: In the N → ∞ limit, the second term in (3.1) is the phantom term: This is different from (2.4), though both phantom terms are proportional to the boundary condition changing projector. The major difference between the identities (2.3) and (3.1) is that the first term in (3.1) is not gauge equivalent to Φ 1 , or even a solution, for any finite N. In this sense (2.3) is more natural, and this is the definition of the phantom term we will use in subsequent computations. However, the phantom term can be defined in many ways using many different identities similar to (2.3) and (3.1), and for certain purposes some definitions may prove to be more convenient than others.
To make the connection to earlier work, let us explain how the identity (3.1) leads to the standard definition of Schnabl's solution as a regularized sum subtracted against a phantom term. We can use Okawa's left gauge transformation 4 to map from the perturbative vacuum Φ 1 = 0 to Schnabl's solution Substituting these choices into (3.1) gives the expression To simplify further, expand inside the second term of (3.7), where B n are the Bernoulli numbers. The expansion in powers of K is equivalent to the L − level expansion [9,19], 5 which will play an important role in simplifying correlators involving the phantom term. The upshot in the current context is that the higher powers of K in (3.9) can usually be ignored in the N → ∞ limit [5], so we can effectively replace the sum by its first term. Then the N → ∞ limit of (3.1) reproduces the usual expression (3.11) for Schnabl's solution, where ψ N is the phantom term.
Example 1: Identity-like Marginals
We start with a simple example: computing the shift in the closed string tadpole amplitude between the identity-like solution for the tachyon vacuum [8,20,21], and the identity-like solution for a regular marginal deformation [12], where V is a weight 1 matter primary with regular OPE with itself. Both these solutions are singular. For example, we cannot evaluate the tadpole directly because doing so requires computing a correlator on a surface with vanishing area. However, with the phantom term, we can circumvent this problem with a few formal (but natural) assumptions.
We can relate the above solutions with a left gauge transformation The shift in the open string background is described by the phantom term: 6 where in the third line we defined the states These are wedge states whose open string boundary conditions have been deformed by the marginal current V [22]. In the ǫ → 0 limit the phantom term is Note that the phantom term corresponds to a nondegenerate surface with the boundary conditions of the marginally deformed BCFT, and so will naturally reproduce the expected coupling to closed strings. This is in spite of the fact that both solutions we started with were identity-like. Also note that the phantom term vanishes in the Fock space (since B kills the sliver state), but still it is nontrivial. Now we can use the phantom term to compute the shift in the closed string tadpole amplitude: (4.8) Since the amplitude vanishes around the tachyon vacuum, only the marginally deformed D-brane should contribute. Plugging (4.5) in, we find Bc∂c . (4.9) The first term in the trace formally vanishes. 7 Then (4.10) With the reparameterization we can scale the deformed wedge state inside the trace to unit width: Integrating over t gives Note that this is manifestly independent of ǫ. In the general situation, explicitly proving ǫ-independence requires much more work than is needed to compute the result, and it is easier to assume gauge invariance and take the ǫ → 0 limit. At any rate, further simplifying (4.12), we can replace the ghost factor Bc∂c in the trace with −c, 8 . Mapping from the cylinder to the unit disk gives (4.14) This is exactly the closed string tadpole amplitude for the marginally deformed D-brane, as defined in the conventions of [11].
Example 2: Energy for Schnabl's Solution
In this section we compute the energy for Schnabl's solution, The original computation of the energy, based on the expression (3.11), was given in [1] (see also [2,3]). Our computation will be quite different since we define the phantom term in a different way. We take the reference solution to be the perturbative vacuum, and map to Schnabl's solution using Okawa's left gauge transformation The regularized phantom term is In the ǫ → 0 limit the ratio ǫ/(1−ǭΩ) approaches the sliver state (see later), so we can replace the factor K/(1−Ω) with its leading term in the L − level expansion. Then (2.3) gives a regularized definition of Schnabl's solution: Note that Ψǭ here is precisely the pure gauge solution discovered by Schnabl [1]: Using (3.8) we can express this regularization in the form Clearly this is different from the standard definition of Schnabl's solution, (3.11).
To calculate the action we use (2.7): A quick calculation shows that the second term can be ignored in the ǫ → 0 limit, essentially because the phantom term vanishes when contracted with well-behaved states. Therefore S = (1/6) lim_{ǫ→0^+} Tr[ ψ_{0Ψ}(ǫ) Q ψ_{0Ψ}(ǫ) ].
To understand what happens in the ǫ → 0 limit, note that the factor ǫ 1−ǭΩ inside the trace (5.9) approaches the sliver state: To prove this, expand the geometric series n Ω n . (5.15) and expand the wedge state in the summand around n = ∞: where Ω (1) , Ω (2) , ... are the coefficients of the corrections in inverse powers of n+1 (actually Ω (2) is the first nonzero correction in the Fock space). Plugging (5.16) into the geometric series and performing the sums gives Only the sliver state survives the ǫ → 0 limit. This means that for small ǫ equation (5.9) is dominated by correlation functions on the cylinders with very large circumference. In this limit, it is useful to expand the fields C 1 , ...C 4 into a sum of states with definite scaling dimension (the L 0 level expansion). To leading order this expansion gives Now consider the following: If a cylinder of circumference L has insertions of total scaling dimension h separated parametrically with L, rescaling the cylinder down to unit circumference produces an overall factor of L −h , which vanishes in the large circumference limit if h is positive. Since the sum of the lowest scaling dimensions of C 1 and C 2 is positive, the corresponding term in (5.9) must vanish. The sum of the lowest scaling dimensions of C 3 and C 4 is zero, so the corresponding term in (5.9) is nonzero and receives contribution only from the leading L 0 level of C 3 and C 4 . Therefore the action simplifies to Expanding the geometric series gives Tr Kc∂cBΩ L−k c∂cΩ k . (5.23) Scaling the total wedge angle inside the trace to unity, Expanding the factor in parentheses around L = ∞, the sum turns into an integral: Tr The order 1/L terms and higher do not contribute in the ǫ → 0 limit, as explained in (5.17). Therefore A moment's inspection reveals that this integral is precisely the action evaluated on the "simple" solution for the tachyon vacuum [8], expressed in the form [23] in agreement with Sen's conjecture.
Example 3: Tadpole Shift Between Two Marginals
In this section we use the phantom term to compute the shift in the closed string tadpole amplitude between two Schnabl-gauge marginal solutions [13,14]: where V 1 and V 2 are weight 1 matter primaries with regular OPEs with themselves (but not necessarily with each other). 9 Our main interest in this example is to understand how the boundary condition changing projector works when connecting two distinct and nontrivial BCFTs; In this case the projector has a rather nontrivial structure and possible singularities from the collision of matter operators at the midpoint [10]. This is the first example of a phantom term which does not vanish in the Fock space (at least in the case where the V 1 -V 2 OPE is regular). This example also gives an independent derivation of the tadpole amplitude for Schnabl-gauge marginals, which previously proved difficult to compute [15]. Another computation of the tadpole for the closely related solutions of Kiermaier, Okawa, and Soler [24] appears in [25].
We will map between the marginal solutions Φ 1 and Φ 2 using the left gauge transformation. 9 For example, we could choose V 1 to be the rolling tachyon deformation, and V 2 = e^{−X^0/√α′} to be the "reverse" rolling tachyon; or we could choose V 1 and V 2 to be Wilson line deformations along two independent light-like directions. In both these examples the V 1 -V 2 OPE is singular.
This choice of U is natural to the structure of the marginal solutions, since it factorizes into a product of left gauge transformations through the Schnabl-gauge tachyon vacuum [10]. The regularized boundary condition changing projector is 3) The first two terms, P 1 and P 2 , are regularized boundary condition changing projectors from Φ 1 to the tachyon vacuum (Schnabl's solution), and from the tachyon vacuum to Φ 2 , respectively. These terms vanish in the Fock space in the ǫ → 0 limit. The third term P 12 is the nontrivial one: In the ǫ → 0 limit it approaches the sliver state in the Fock space, with the boundary conditions of Φ 2 on its left half and the boundary conditions of Φ 1 on its right half; it represents the open string connecting the BCFTs of Φ 2 and Φ 1 [10]. If V 1 and V 2 have regular OPE, P 12 is a nonvanishing projector in the Fock space. If the OPE is singular, P 12 may be vanishing or divergent because of an implicit singular conformal transformation of the boundary condition changing operator between the BCFTs of Φ 2 and Φ 1 at the midpoint. Part of our goal is to see how this singularity is resolved when we compute the overlap. The phantom term is First let us consider the contribution to the tadpole from P 12 : Plugging everything in gives In the second step we inserted a trivial factor of cB next to the commutator [c, Ω], which allows us to remove the c ghost from the difference between the solutions. Now let's look at the factor above the braces. Re-express it with a few manipulations Express the factors on either side of the underbrace in equation (6.6) in the form: Plugging everything into (6.6), the factors in parentheses above cancel against the factors in parentheses in (6.7). Thus the contribution to the tadpole from P 12 simplifies to In the ǫ → 0 limit, we claim that the factors above the braces approach the sliver state with boundary conditions deformed by the corresponding marginal current, multiplied by the factor K 1−Ω . If V 1 and V 2 have regular OPE, we can expand the factors outside the braces in the L − level expansion and pick off the leading term in the ǫ → 0 limit. This gives precisely the difference in the closed string tadpole amplitude between the two solutions. Unfortunately this argument does not work when V 1 and V 2 have singular OPE, since contractions between V 1 and V 2 produce operators of lower conformal dimension which make additional contributions. It is not an easy task to see what happens in this case in the ǫ → 0 limit, but there is no reason to believe that (6.9) should calculate the shift in the tadpole amplitude. This is a remnant of the midpoint singularity of the boundary condition changing projector when the boundary condition changing operator between the BCFTs of Φ 2 and Φ 1 has nonzero conformal weight. To fix this problem we need to account for the "tachyon vacuum" contributions to the phantom term. Let us focus on the contribution from P 1 : (6.10) Note that this precisely cancels the problematic contractions between V 1 and V 2 in the first term in (6.9). A similar thing happens for the second term when we consider the contribution of P 2 . Therefore, in total the overlap is (6.11) Note that we have not yet taken the ǫ → 0 limit, so this formula is valid for all ǫ. Now we simplify further by taking ǫ → 0, and a short calculation shows that the first three terms vanish. 
Therefore consider the contribution from the fourth term: (6.12) Expanding the summand perturbatively in V 1 , To derive the ǫ → 0 limit we expand this expression around L = ∞. Then each term in the perturbative expansion is a correlator on a very large cylinder, and we can pick out the leading L − level of every field in the trace whose total width is fixed in the L → ∞ limit. This gives: (6.14) Scaling the circumference of the cylinders with 1 L , the sums above turn into integrals which precisely reproduce the boundary interaction of the marginal current [24]: which is the expected shift in the closed string tadpole amplitude between the two marginal solutions. | 2012-02-13T14:04:23.000Z | 2012-01-24T00:00:00.000 | {
"year": 2012,
"sha1": "c6f8def577a2ac337983c5a9fe70bcca6a8a5593",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1201.5122",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c6f8def577a2ac337983c5a9fe70bcca6a8a5593",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
89972120 | pes2o/s2orc | v3-fos-license | A Stability Mathematical Model of Nasopharyngeal Carcinoma on Cellular Level
This paper discusses the stability of the "Tumorigenesis Models" linking EBV and nasopharyngeal carcinoma, from normal cells to invasive carcinoma. The analysis completes the previous theorem on the equilibrium point of the "Tumorigenesis Models".
INTRODUCTION
Nasopharyngeal carcinoma (NPC) is a malignancy derived from the epithelium or mucosa and crypts that coat the surface of the nasopharynx. According to Munir (2006), in Indonesia nasopharyngeal carcinoma is the cancer most frequently found in the ear, nose and throat domain, and most patients are aged 40 or older. Soetjipto (1989) reported that the prevalence of nasopharyngeal carcinoma in Indonesia is 4.7/100,000 inhabitants per year, whereas nasopharyngeal carcinoma cases at Sardjito Hospital in 2011 comprised 31 men and 11 women (Ratnawati, 2012).
In the previous paper we published a mathematical model of the "Tumorigenesis Models" for EBV-associated nasopharyngeal carcinoma and analyzed the equilibrium point of the cell development from normal cells to invasive carcinoma (Lo et al., 2012). This paper describes the stability of the mathematical model from normal cells to invasive carcinoma.
MATHEMATICAL MODELING
As discussed in the previous paper (Sugiyanto et al., 2016), with reference to the "Tumorigenesis Models" process for EBV-associated nasopharyngeal carcinoma (Lo et al., 2012), we can construct the diagram as follows. From the diagram above we obtain the system of differential equations that describes the "Tumorigenesis Models" for EBV-associated nasopharyngeal carcinoma as follows (Lo et al., 2012).
where N(t) is the density of normal nasopharynx epithelial cells, L(t) the density of lesion epithelial cells, D_L(t) the density of low-grade dysplastic lesion cells, I(t) the density of EBV latently infected cells, D_H(t) the density of high-grade dysplastic lesion cells, and C(t) the density of invasive carcinoma cells.
The equilibrium point of the system of differential equations (1) to (6) is,
STABILITY OF THE EQUILIBRIUM POINT
The Jacobian matrix of the system of differential equations (1) to (6) is J(N, L, D_L, I, D_H, C), evaluated at E, where E is the equilibrium condition.
The characteristic equation follows from this Jacobian; after some calculations we obtain the eigenvalues and the conditions stated in Lemma 1.
Lemma 1. The equilibrium point (N*, L*, D_L*, I*, D_H*, C*) is asymptotically stable provided the proliferation rate of each abnormal cell population is smaller than its combined transition and death rate (see the Conclusion). From a medical point of view, in order to avoid cancer, abnormal cells should die quickly, so that the amount of cell proliferation is smaller than apoptosis; stability is then achieved, which means that the cancer does not develop. Killing abnormal cells requires a highly immunogenic response (Janeway et al., 2001). At the stage of nasopharyngeal carcinoma, killing cancer cells becomes more difficult, because cancer cells have the characteristic of resisting cell death (Hanahan & Weinberg, 2011).
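To illustrate how such a stability check can be carried out numerically, the sketch below assumes a generic linear chain of compartments N → L → D_L → I → D_H → C with hypothetical proliferation rates a, onward transition rates b and death rates d; the parameter values are placeholders, not those of Tables 1-3, and the right-hand side only stands in for the paper's equations (1) to (6).

import numpy as np
from scipy.integrate import solve_ivp

# hypothetical proliferation (a), transition (b) and death (d) rates for the chain
a = np.array([0.02, 0.03, 0.03, 0.04, 0.05, 0.06])
b = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.00])   # the last compartment has no onward flow
d = np.array([0.01, 0.02, 0.02, 0.02, 0.02, 0.08])

def rhs(t, x):
    # inflow from the previous compartment + own proliferation - outflow - death
    inflow = np.concatenate(([0.0], b[:-1] * x[:-1]))
    return inflow + a * x - b * x - d * x

def jacobian(x, eps=1e-6):
    # numerical Jacobian by central differences
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (rhs(0.0, x + e) - rhs(0.0, x - e)) / (2.0 * eps)
    return J

x0 = np.array([1.0, 0.1, 0.05, 0.02, 0.01, 0.0])
sol = solve_ivp(rhs, (0.0, 200.0), x0)
eigenvalues = np.linalg.eigvals(jacobian(np.zeros(6)))   # the equilibrium of this toy chain is the origin
print(sol.y[:, -1])
print("asymptotically stable" if np.all(eigenvalues.real < 0) else "not asymptotically stable")

In this toy version the eigenvalues are simply a_i - b_i - d_i, so the printed verdict reproduces the criterion discussed above: each proliferation rate must stay below the combined transition and death rate.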
SIMULATION
The selected parameters are given in tabular form below. Case I: not infected with EBV. Case II: infected with EBV but nasopharyngeal carcinoma does not develop. Case III: infected with EBV and nasopharyngeal carcinoma develops.
For case I:
In the first case, the following sub-populations are zero: infected cells, high-grade dysplastic lesion cells, and invasive carcinoma cells. The system of differential equations (1) to (6) therefore reduces to equations (1) to (3). In Figure 2, EBV is not detected in the body. In this case, the normal cells decrease while the lesion cells increase. The lesion cells then become low-grade dysplastic cells because p16 is inactive. Because no EBV is detected, high-grade dysplastic cells do not arise and do not develop into invasive carcinoma. This is consistent with the description in Figure 2.
For case II:
In the second case, the following sub-populations are zero: high-grade dysplastic lesion cells and invasive carcinoma cells. The system of differential equations (1) to (6) therefore reduces to equations (1) to (4). In Figure 3, EBV is detected in the body. In this case, the normal cells decrease very rapidly in the early years and the lesion cells increase very quickly. When p16 is inactive, the lesion cells become low-grade dysplastic cells. The presence of lytic EBV means that abnormal cells (low-grade dysplastic cells) can easily be infected by lytic EBV, producing infected cells. The lytic EBV shows an increase, although not a significant one.
For case III:
The system of differential equations is the same as the full system (1) to (6). In Figure 4, EBV is detected in the body. In the third case, because of a weak immune system, the lytic virus evolves very rapidly. This makes the normal cells decline very quickly, while latent EBV-infected cells increase. This coincides with an increase in high-grade dysplastic cells, whose rapid growth fuels the growth of invasive carcinoma very quickly. These cases need special attention. Follow-up studies are needed so that invasive carcinoma can be anticipated and prevented. This should be a particular concern because it concerns the beginning of nasopharyngeal carcinoma.
CONCLUSION
This paper has completed the analysis of the equilibrium point of the system in (1). Asymptotic stability of the equilibrium point is fulfilled if
Figure 2. The process of cell development in the case not infected with EBV.
Figure 3. The process of cell development in the case infected with EBV but not developing nasopharyngeal carcinoma.
Figure 4. The process of cell development in the case infected with EBV and developing nasopharyngeal carcinoma.
Table 1. The parameter values for the first case, not infected by EBV.
Table 2. The parameter values for the case infected by EBV but not developing nasopharyngeal carcinoma.
Table 3. The parameter values for the case infected by EBV and developing nasopharyngeal carcinoma.
and the analogous condition for the invasive carcinoma compartment (involving a66 and d66). It means that, to achieve stability, the proliferation of lesion cells, low-grade dysplastic cells, infected cells, high-grade dysplastic cells, and invasive carcinoma cells must be smaller than the rate at which cells pass to the next compartment plus the rate of cell death. | 2018-12-07T07:44:51.062Z | 2016-10-24T00:00:00.000 | {
"year": 2016,
"sha1": "343c2a28f49f585e6e945a7256281f734ae9a467",
"oa_license": "CCBYNC",
"oa_url": "https://sciencebiology.org/index.php/BIOMEDICH/article/download/42/33",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "343c2a28f49f585e6e945a7256281f734ae9a467",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Mathematics"
],
"extfieldsofstudy": [
"Biology"
]
} |
248881182 | pes2o/s2orc | v3-fos-license | Achieving Groundwater Governance: Ostrom's Design Principles and Payments for Ecosystem Services Approaches
Abstract Groundwater is a largely unseen common pool resource. Yet, driven by strong economic incentives, whether or not encouraged by existing policies, and the difficulty to exclude others, groundwater users are competing with each other to extract as much as possible, with devastating consequences for its sustainability. The challenges faced for sustainably managing such common pool resources, on which people have established de facto individual rights, are manifold. However, creating a market for trades of some kind in ecosystem services associated with groundwater could actually enhance the protection of this critical resource on the basis that protection can benefit individual groundwater users economically as well as provide a broader public good. This article uses Elinor Ostrom's design principles as an analytical tool to examine how market-based approaches such as payments for ecosystem services (PES) fit with some of the governance models that could be used to protect and enhance groundwater as a common pool resource. It argues that while there are specific design challenges to be overcome, PES as an institutional tool can align with Ostrom's ideas for the governance of groundwater.
can be replenished naturally, with many of the major aquifers in arid and semi-arid parts of the world that rely most heavily on groundwater experiencing rapid rates of groundwater depletion. 10 The devastating mid-and long-term impacts of this behaviour on the environment and local populations, together with the unpredictable patterns of intensification on the global water cycle as a result of climate change, combine into an urgent need for effective governance. Much has been written about the reasons behind the depletion of global groundwater supplies, 11 and understanding these reasons is a necessary first step towards an effective governance system. Central to this is the lack of appreciation of the interconnections between surface water and groundwater in existing water law and policy. 12 These and other problems provide at least a prima facie case for questioning and rethinking our existing governance approaches to groundwater management. Like many CPRs, the effective governance of groundwater requires a set of rules, norms and values that underpin exploitation consistently with the ecosystem approach, while building on possible synergies.
The challenges faced in sustainably managing such common resources, over which people have established de facto individual rights, are manifold. Traditionally, the regulation of groundwater has been based mostly on setting limits on how much water can be abstracted by prohibiting over-exploitative behaviour. 13 This article focuses on market-based approaches, such as creating a market for trades of some kind in the ecosystem goods and services associated with groundwater. Even if we consider it preferable that groundwater should be under public stewardship and that the role of the private sector should be supplementary, 14 marketization promises to enhance the protection of such a critical resource on the basis that protection can benefit individual groundwater users economically as well as provide a broader public good (such as flood protection, or recreational or aesthetic value). This perspective can lead to the re-evaluation of wetlands and groundwater aquifers, as well as improved understanding of the linkages between groundwater and various ecosystems and ecosystem services, and the vulnerability and resilience of groundwater-dependent systems. 15 The question that this article investigates is: How do market-based approaches such as payments for ecosystem services (PES) fit with the governance models that could be used to protect and enhance groundwater as a CPR? To answer this question, the article draws on Elinor Ostrom's seminal work Governing the Commons, 16 which led to the development of eight design principles that summarize factors which have played a role in long-enduring CPR governance mechanisms. The article argues that PES can function in line with Ostrom's ideas for the governance of CPRs and fit the eight design principles. Section 2 sets out why we need a better governance structure for groundwater, and highlights some of the core areas of tension. Section 3 analyzes governance models for CPRs, including an assessment of Ostrom's work. Section 4 discusses the main features of PES approaches and the extent to which Ostrom's design principles inform the application of PES. Section 5 concludes.
Groundwater makes up 97% of global freshwater and is the most intensively exploited natural material in the world. 17 Its importance in meeting global water demands for humans and the environment and for spurring socio-economic growth across the world cannot be overstated. For example, estimates from 2010 show that three of the largest economies in the world (the United States (US), China, and India) account for over 50% of global groundwater abstraction. 18 This significance has led to increasing interaction between individuals and groundwater systems, with the cumulative effects of individual action and behaviour usually not resulting in socially optimal situations. 19 This practice has devastating consequences for the long-term sustainability of groundwater resources and the wider environment, consequences of which individuals may be unaware or may not seek to mitigate. Besides the individual actions, challenges such as population growth and climate change add to the problem by generating changes in patterns of water flow into aquifers that may not be known or understood by users. Given the wider community benefits of groundwater resources and the range of problems that beset the resources, there is a need for both individual and collective action to respond to these problems.
Historically, groundwater and surface water have been treated separately in water law and policy. Most significant is the possibility of private ownership of surface reservoirs, with private owners being allowed to sell water to other parties for uses such as irrigation works. In contrast, in some jurisdictions groundwater is perceived as a similarly 'private resource' (landowners consider that they have an absolute right to the water beneath their land, often irrespective of what the law may say). 20 It is legally considered a CPR in many other jurisdictions, especially in the global south, with little regulatory attention to its management by landowners. 21 This characterization encourages the unsustainable use of groundwater, with widespread consequences for the wider community.
Groundwater and surface water interlink in significant ways. For example, the abstraction of groundwater reduces surface water supplies, while the diversion of surface water may lead to the depletion of groundwater resources. 22 This interconnectedness historically has not informed water law and policy in many jurisdictions, although this is starting to change. 23 We see this, for example, in the European Union (EU) Water Framework Directive, 24 which provides a framework for water management, including groundwater, and in Australia's federal Water Act 2007. 25 Both instruments refer to the 'conjuctive use' of groundwater and surface water: a situation where 'both groundwater and surface water are developed (or co-exist and can be developed) to supply a given urban area … although not necessarily using both sources continuously over time nor providing each individual water user from both sources'. 26 The aim of conjunctive use and management is to 'maximise the benefits arising from the innate characteristics of surface and groundwater water use'. 27 This requires coordination of the operation and governance of groundwater and surface water resources in order to increase total water supplies and enhance water quality. While, scientifically, the concept has potential, legally and from a governance perspective it neither fully addresses the problem of separation between the notions of groundwater and surface water, nor does it assist in clarifying the nature of rights over groundwater and surface water resources. 28 The interconnectedness between groundwater and surface water and the problematic nature of developing adequate policies that cover both is further compounded by the transboundary nature of water resources. There are approximately 276 transboundary river basins covering almost half of the earth's surface and 60% of freshwater supplies; 145 countries have territory in this area and there are also approximately 300 transboundary aquifers, which serve the almost 3 billion people who depend on groundwater resources. 29 This means that they may transcend national, state, and local boundaries, thereby requiring collaboration between stakeholders across different jurisdictions to manage them. The EU Water Framework Directive is one of still relatively few examples of such cross-border collaboration, as is the Guarani Aquifer Agreement between Argentina, Brazil, Paraguay, and Uruguay. 30 In the context of increased use and dependence on groundwater, there is a need to improve the governance of this largely unseen but increasingly precious resource. There is a need for an approach that would involve, in part, greater cooperation and coordination between actors (that is, of both surface water and groundwater) and levels of governance. Such a governance system requires a holistic approach that involves potential trade-offs. 31 The Groundwater Governance Project 32 is an attempt to provide a framework for such a holistic approach. One of the main project outcomes was a Shared Global Vision for Groundwater Governance, which sets out targets to be achieved by 2030. 33 These targets include instituting 'legal, regulatory and institutional frameworks … that establish public guardianship and collective responsibility' and the development of 'incentive frameworks' that encourage sustainable use of groundwater. 34 The project also produced a Global Framework for Action to Achieve the Vision on Groundwater Governance (GWGF), which specifies ways in which these targets can be achieved. 
35 28 E.g., in the South African National Water Policy there is no explicit mention of conjunctive use and management of surface water and groundwater. Yet, s. 6.4 discusses the establishment of a water conservation and utilization policy in relation to optimum use of water for each of the main user sectors (agriculture, industry and mining) while s. 6.6.3 states that the development and use of all water resources should be undertaken in accordance with the principles of Integrated Environmental Management, thereby placing water use and management in a broader perspective. See The difficulty faced by any internationally focused instrument that seeks to encourage such actions is that individual groundwater resources need to be considered in their context: an approach that might work in one country will not necessarily work in another where the ecological, cultural, social, and political conditions are different. Additionally, the existing legal provisions (for example, where extractions permits and licences are recognized as private property rights in some jurisdictions but not in others, whereas in other jurisdictions no permits are required at all) and the positions taken by various local stakeholders may also make a uniform approach impossible, certainly in the short term. In response to these challenges, there is a need for a governance approach which strikes a balance between making specific suggestions that give states and stakeholders a 'push in the right direction', while avoiding taking a prescriptive stance that could render a framework unusable.
'
Governance is a context-specific, dynamic concept. There is no single approach to governancebe it community-based, private sector, or state-ledthat can deliver the desired outcomes. 36 Some interpretations of governance see it as the exercise of political, economic, and administrative authority in national affairs at all levels. On this view, it comprises the mechanisms, processes and institutions through which citizens articulate their interests, mediate their differences and fulfil their legal rights and obligations. 37 Specifically concerning groundwater, Foster and Garduño state: Groundwater governance comprises the promotion of responsible collective action to ensure socially sustainable utilisation and effective protection of groundwater resources for the benefit of humankind and dependent ecosystems. 38 Given the 'common pool resource' characteristic of groundwater, collective or collaborative action would seem to offer the best chance to achieve favourable outcomes. This is a key feature of Ostrom's vision on governance. 39 She argues that it is crucial to resolve questions about how to regulate CPR because, by their very nature, if everyone is allowed to use the resource freely, it will eventually be impossible to protect its existence. 40 Ostrom's work indicates that a polycentric approach is preferable to monocentric methods, even in attending to the issue of CPR governance as it provides a greater opportunity for experimentation, choice, and learning across levels of social organization, and also has the tendency to 'enhance innovation, learning, adaptation, trustworthiness, levels of cooperation of participants, and the achievement of more 21(2) Hydrogeology Journal, pp. 317-20. 38 Ibid., p. 317. 39 Ostrom, n. 16 above. 40 Ibid., p. 49. effective, equitable and sustainable outcomes at multiple scales'. 41 Using several case studies, she identified that 'ecological sustainability' was an outcome of successful CPR governance. 42 Ostrom reasoned that: [m]ost of the institutional arrangements used in the success stories were rich mixtures of public and private instrumentalities. If this study does nothing more than shatter the convictions of many policy analysts that the only way to solve CPR problems is for external authorities to impose full private property rights or centralized regulation, it will have accomplished one major purpose. At the same time, no claim is made that institutional arrangements supplied by appropriators, rather than by external authorities, will achieve optimal solutions. 43 Such a characterization reinforces the view that good governance is created by the combined efforts of the public and private sectors. Ostrom cautions against any regime being 'optimal' but nevertheless found that institutional arrangements that relied only on the CPR owners/appropriators could lead to a better outcome. While warning against the risks of applying normative criteria, 44 Ostrom's measure of success is the fact that the users have maintained these institutions over a long period. 
In exploring the conditions under which these successful institutions had operated and comparing them with examples where efforts to manage CPRs had failed, Ostrom was able to identify her eight design principles for long-enduring institutions: (1) clearly-defined boundaries; (2) congruence between appropriation and provision rules and local conditions; (3) collective-choice arrangements; (4) monitoring; (5) graduated sanctions; (6) conflict-resolution mechanisms; (7) minimal recognition of rights to organize; and for CPRs that are part of larger systems (8) nested enterprises. 45 Throughout, the focus of governance according to the design principles, therefore, involves a range of self-interested rational actors to create, implement and enforce systems that balance the rights and responsibilities of everyone who benefits from the CPR.
Ostrom's design principles have been very influential, 46 and often considered a counterweight to Hardin's 'tragedy of the commons' narrative. 47 Over the years, the design principles have developed further through empirical insights. 48 CPR institutions. 49 Also, the focus of the design principles on individual actors could lead to resource users being blamed for problems associated with wider socio-economic factors beyond the CPR institution. 50 Indeed, the wider socio-economic factors may be just as important for the likely success of the CPR institution as the actions of individual actors. 51 Accordingly, there is a need to understand the wider context if we are to unpack the barriers to successful CPR institutions. 52 Singleton has argued that this criticism is only partially valid as design principles 1, 2, 7 and 8 'all suggest an awareness that local institutions do not exist in isolation'. 53 Yet, the risk of the design principles being interpreted as ignoring wider socio-economic factors remains. These and other theoretical criticisms 54 of the design principles underscore the need to apply them with care. Indeed, they continue to be employed by researchers and have been applied to various contexts to help our understanding of CPR governance. 55 The next section uses the design principles as an analytical tool to examine the role of market-based instruments, such as PES, in achieving groundwater governance. It discusses the main features of PES approaches and assesses the extent to which Ostrom's design principles inform the application of PES.
'
The need to establish linkages with other water resources and other sectors in groundwater governance requires a better scientific understanding of these linkages and lessons from approaches in managing other resources, such as habitats. Such new approaches build on the 'ecosystem approach', which involves 'a strategy for the integrated management of land, water and living resources that promotes conservation and sustainable use in an equitable way'. 56 This more integrated approach requires a shift in both the mindset and practices of many of those who manage and use land and groundwater resources. For example, farmers who depend on groundwater ecosystem services are called on to see themselves as 'integrated land managers' who produce food and provide ecosystem services, rather than merely 'food producers'. In the context of groundwater, this framework requires that an ecosystem and resilience-based approach includes 'allowing certain level of (controlled, temporary) groundwater overdraft in 49 See, e.g., Singleton, n. order to make room for farmers to generate income and transition into other non-groundwater-dependent livelihoods'. 57 This means that a balance needs to be struck between groundwater use and income generation for farmers. The idea of PES is based on this increased recognition of the role of the 'natural' environment in providing a range of goods and services of great practical, economic and spiritual value to society, either directly or indirectly. A starting point in the shift in policies and practices to reflect the value of land (and groundwater) in providing ecosystem services is to calculate in economic terms the value of such services and to ensure that this is properly taken into account when decisions that affect the state of groundwater are being taken. Such an approach would address some of the groundwater governance challenges highlighted in Section 2 above by ensuring that the provision of groundwater ecosystem services is integrated with other land uses and there is coordination with other water sources. This is significant because available evidence shows that the spatial layout of ecosystems is important for the interactions that give rise to ecosystem services. For example, linkages between groundwater, surface water and rainfall within the river catchment area mean that impacts on any one of these can affect hydrological processes within the catchment and the ecosystem services linked to these processes, such as clean water provision. Equally, the social value of groundwater services (such as the thermal spas in Salto (Uruguay) 58 ) relates spatially to where they are consumed; hence the need for context-specific groundwater management.
PES approaches offer one potential method for addressing the need for collective action, and combining private and public instruments to resolve CPR problems. Accordingly, the remainder of this article explores the the role that PES approaches can play in groundwater governance, evaluating how PES (which rely heavily on outside structures for enforcement) could be structured to fit with Ostrom's design principles, which allow for governance within the relevant community.
Using PES to Address CPR Problems
There are different and evolving understandings of PES. 59 Generally speaking, PES is a system whereby there is recognition that land and natural resources, such as groundwater, provide benefits for landowners and the wider community such that payment is provided to ensure the maintenance of these services. This requires calculating the benefit of that service and its market value compared with any activity that will have to stop or start to protect that service. This allows for transactions between those seeking the benefits of the service and those who have to behave differently to ensure the continued provision of the service. 60 At the heart of PES approaches are heterogeneous players in market transactions for ecosystem services, without complex regulatory interventions. Buyers and sellers are separate entities and may be organized differently. For example, groups of farmers (providers) act together, allowing flooding on their lands to prevent flooding downstream. The downstream entity paying for such services (the buyer), however, is separate from that community; it could be another community, or a municipality, or even a single landowner. Similarly, the farmers might act in common, to allow flooding for the sake of a migratory bird habitat and the buyer could be another community, such as a birding group, a local or national association, or even a single philanthropist. 61 In both examples the sellers or buyer, or both, are engaging in their collective action, but they are separate entities connected through a market transaction (contract) or subsidy (government) scheme.
The heterogeneity of many groundwater aquifers and users means that in any given scenario, there may be several diverse groups of buyers and sellers getting together as separate entities to decide, on the one hand, who should pay how much and to whom and, on the other hand, what services (actions) should be provided in return for payments. While there are several major user types (municipalities, manufacturers, and so on), the greatest volume of groundwater is pumped for agriculture, and there may be a great number of agriculturists spread over a large territory, given the size of the aquifers that they all exploit. In addition, most ecosystem concerns revolve around leaving enough groundwater in place so that it comes to the surface in rivers or wetlands. However, riverine and wetland ecosystem services are also extremely diverse (for example, flood control, habitat conservation for a great variety of aquatic and terrestrial species, fishing, boating, hunting). This means that those who might provide or pay for ecosystem services related to groundwater are a large and/or very diverse set. The market for groundwater-related ecosystem services is therefore likely to be made up of large numbers of farmers on the one side, and groups of birders, ecologists, hunters, fishers, etc., on the other side, all of whom want enough groundwater back in the ground to sustain rivers and wetlands. The same would be true if the payment went in the opposite direction, as in irrigating farmers paying for upstream ecosystem conservation to keep groundwater in place. 62
Applying Ostrom's Design Principles to PES for CPR
The following assessment of the design principles focuses on these kinds of subgroup and their efforts to resolve their collective action issues. It does not attempt to capture all the subgroups in each groundwater aquifer, but aims to show on a broad conceptual level how PES fits with the design principles if each subgroup were to organize itself for collective action.
Design Principle 1: Clearly defined boundaries
According to Agrawal, 63 this principle requires a clear definition of the contents of the CPR that the community uses, boundaries around the community of users, and the effective exclusion of external unentitled parties. This is essential as a first step for achieving the sort of collective action to which Ostrom refers. 64 In setting clear boundaries, the aim is to help to 'internalize the positive and negative externalities produced by participants, so they bear the costs of appropriation and receive some of the benefits of resource provision'. 65 Clearly defined boundaries are necessary in order to achieve the expected level of benefits from any scheme. Concerning groundwater, this will mean having clearly defined boundaries of the spatial layout of the aquifer, defining who the community of users is, setting limits on abstraction rates and having a clear system in place to exclude unentitled parties.
The starting point for any PES scheme is to identify the limits of the ecosystem services that are being paid for 66 and who can sell the services. 67 It is about paying for benefits (services) from the land (or groundwater) and, more specifically, linking the suppliers of the benefits, such as upstream land managers, to the beneficiaries, such as the downstream communities who use water. In fact, the very categorization of PES schemes as a market-based approach to dealing with environmental challenges 68 means that in order for them to be successful, as with any economic instrument, clear boundaries within each community group need to be set. Given that PES schemes are very much driven by the benefits to be derived (hence their characterization as win-win solutions), the prospect of losing those benefits through a lack of clear limits as to who can participate in and benefit from the PES scheme would certainly reduce the incentive to participate. However, as explained by Ostrom, clear boundaries alone would be insufficient to ensure success as it may still be possible for a limited number of community users to take more than the allocated units. 69 This underscores the importance of clarifying land and tenure rights.
Those who receive payments under PES schemes usually have proprietary rights over the land that they are (not) using to further environmental protection. 70 This is problematic in some cases, as PES schemes that rely on proprietary rights have sometimes been designed so that payments are made only to people who have individual titles to land, rather than to communities who own an area of land in common, thereby disadvantaging Indigenous communities. 71 Similarly, water rights will need to be clarified to identify who has duties and who has rights under the PES scheme.
Therefore, in addition to setting clear boundaries, there must be a system of rules that limits how much everyone within the subgroup can take or sell as far as it is necessary to achieve the desired benefit. Such clear boundaries and rules are achieved in PES schemes through the use of contracts that are signed by everyone involved in the project, thereby defining who will directly contribute to the scheme and benefit from it. 72 Here, boundaries around the community of users are determined based on which activities might provide the necessary services (for example, groups of farmers acting together to allow flooding on their lands) and who might seek to benefit from these services (for example, the government, another community, municipality or a private entity). For groundwater, this will be based on a degree of proximity to the aquifer, but there is also the issue of the congruence with surface waters which may be more remote, further extending the scope of user groups that might be involved.
Related to the issue of setting boundaries around who can or cannot participate in PES schemes is the question of whether payment should be based on 'inputs' (work done to maintain or enhance groundwater levels) or 'outputs' (the actual benefits delivered, such as the quantity and quality of groundwater benefited). 73 While the advantage of an input-based structure is that it will be less difficult to determine the work that is required in order to receive payment, it is argued that an output-based mechanism would support environmental protection better because a real benefit will have to be established before payment is made. Conversely, if the latter structure is less appealing to potential participants than the input-based mechanism, there may be less participation (see Principle 3 below), which may weaken efforts to improve the protection of the resource. Irrespective of which approach is adopted, it will be up to the separate groups of heterogeneous players to decide on the boundaries to govern their collective actions for the best outcome.
Design Principle 2: Congruence between appropriation and provision rules and local conditions
Ostrom's second design principle refers to the appropriation and provision of common resources that are adapted to local conditions. 74 There are two separate conditions under this principle. The first condition is that both appropriation and provision rules have to match the local situation of the CPR, 75 such as its spatial and temporal heterogeneity. The second condition is that there must be congruence between the appropriation and provision rules. For groundwater, this means that the approach to managing abstraction/recharge has to match the resource conditions and must continue to match the local conditions even when the situation has moved on from its original form.
Successful PES approaches are designed on an adequate understanding of the local conditions. This involves having a well-defined ecosystem service (or a land use likely to secure that service) but will also involve establishing ecosystem service baselines and the scope for additionality, among other things. At the heart of identifying the local conditions is the issue of the costs of obtaining detailed knowledge about the local context, including identifying the source(s) of the problems that the groundwater is facing and what needs to be done about them. Indeed, the GWGF emphasizes the role that scientific knowledge should play in designing frameworks to protect groundwater. 76 Importantly, it also stresses the need to disseminate this knowledge in a way that non-scientists can understand. 77 Sharing the knowledge will further encourage action because the more people understand the effects of their actions, the more willing they may be to put effort into protecting the resource.
An adequate understanding of the local conditions will determine the kind of PES design that is best suited to deliver the desired outcomes: general subsidy, direct contracts, auctions or paying a third party (intermediary). 78 PES approaches that adopt a flat subsidy design cannot clearly distinguish between those parties who can provide high-value services and those who provide low-value services. 79 According to Salzman, this is because they often operate by allowing any landowner or occupier within a particular area to participate regardless of whether they provide valuable services. 80 The public goods character of groundwater and the diffuse nature of the users (farmers, municipalities, manufacturers, birding groups, ecologists, mushroom gatherers, duck hunters, duck lovers, fishers, etc.) may mean that a flat subsidy design is the most suitable. Allowing any user within a particular area to participate, regardless of whether they provide valuable services, may reduce the costs of identifying the landowners or occupiers with whom to enter into contracts. Yet other designs of water-related PES schemes have been shown to deliver the desired benefits to protect the aquifer by controlling the abstraction of groundwater and ensuring its sustainable use. The Vittel PES programme in France is an example of a scheme in which the direct contract design approach has been used. 81 The objective of the programme was to provide a high level of water quality by reducing nitrate rates in the aquifer. Therefore, depending on the local context, several PES approaches can be tailored within certain limits to deliver the desired outcome. Furthermore, once a scheme has been designed based on the initial information about the local context, the challenge will be to update this information continually so that the scheme continues to meet requirements.
The second condition under this principle, namely that there has to be congruence between appropriation and provision rules, is sometimes referred to as a balance between the costs incurred by users and the benefits they receive by participating in collective action. 82 According to Pomeroy, Katon and Harkes, in successful CPR systems 'individuals have an expectation that the benefits to be derived from participation in and compliance with community-based management will exceed the costs of investments in such activities'. 83 Central to the idea of PES is the need to link costs and benefits more directly: that is, ensuring that decisions about economic development take account of the loss of ecosystem services, and that there is a balance between the gain from development and the loss of ecosystem services. 84 While complications may arise in attempting to determine the costs and benefits of specific groundwater ecosystem services, irrespective of which PES approach is adopted, the underlying idea is for the users of the groundwater ecosystem services to weigh the costs and benefits of any potential land-use changes, thereby leading landowners to change the ways in which they think about the benefits that their land produces. 85 For example, groups of upstream farmers act together to allow flooding on their lands to prevent flooding downstream. These benefits include a positive impact on groundwater-dependent ecosystems, and such ecosystems generate revenue, which contributes to financial prosperity. 86 Indeed, PES schemes have been noted to overcome the barriers of narrow thinking that can prevent the development of integrated water management, 87 and to serve as a 'tool for addressing systems in which ecosystems are mismanaged because many of their benefits are externalities from the perspective of ecosystem managers'. 88 Viewed from this perspective, PES approaches as an institutional tool would seem to fit with Ostrom's second design principle.
Design Principle 3: Collective-choice arrangements
Ostrom proposes with this principle that 'most individuals affected by the operational rules can participate in modifying the operational rules'. 89 The principle is designed to allow most resource appropriators to participate in the decision-making process. It underscores the significance of local knowledge in natural resource management, and builds on the fact that 'local users have first-hand and low-cost access to information about their situation and thus a comparative advantage in devising effective rules and strategies for that location, particularly when local conditions change'. 90 This means that for CPRs such as groundwater, this participation will be more effective if the knowledge about changing situations is also shared so that participants can make informed contributions to the process of modifying the rules (for example, on abstraction, or the extent to which upstream farmers can allow their lands to be flooded).
PES approaches are certainly a form of collective-choice arrangement, with diverse groups of 'producers' of ecosystem services on one side and diverse groups of 'beneficiaries' on another, sometimes with an intermediary in between. This mechanism provides 'spaces for negotiation and bargaining' about the operational rules (including rules on who has duties and who has rights under the PES scheme), 91 drawing on the local knowledge of users, and characterized by existing power dynamics. 92 In this way opportunities for collective action are created throughout the process. In a case study in Nicaragua, for example, a foreign governmental aid agency outlined the idea of a PES scheme to the local authorities, who then discussed participation with a local factory; the three organizations then decided which of the local farms should be invited to engage in the project. 93 PES is thus a scheme which can engage both governmental and non-governmental actors. Naturally, there will not be the same initial design process for every PES scheme; however, the idea that one or more parties have to secure the engagement of other parties will be a constant feature.
Besides relying on 'first-hand and low-cost access to information about their situation' 94 from local appropriators, to ensure low-cost compliance with the rules, the involvement of local users in the decision-making process contributes to the ideas of shared responsibility 95 and stewardship of the natural world. 96 Indeed, the GWGF notes that 'public agencies alone cannot manage groundwater for the common good – institutions typically need to be inclusive of all stakeholders'. 97 By their very nature, PES approaches offer more by way of 'collective action' than the more traditional approach, which relies on public bodies to regulate groundwater exploitation where, for example, groundwater users have to follow the rules set out in legislation; this does not encourage their interaction with other users. The exchange of money for environmental services makes it highly likely that there will be a variety of actors involved in any given PES scenario, engaging as separate entities in respective collective action within their subgroup, but connected through the market transaction (contract) or subsidy (government). 98 Experience shows that local users are more likely to support decisions in which they feel vested. 99 In fact, concerning the participation of individual actors and local communities, it has been shown that farmers, for example, were inspired to participate in resource governance schemes because they knew that water scarcity has a negative impact on their lives, and the cash payments that they received for participating were of value to them. 100 Although, as Ostrom argues, the presence of effective rules does not guarantee compliance by local users, 101 the use of cash or in-kind payments as incentives to encourage collective action within each subgroup 102 in part shows that PES approaches as an institutional tool can support the governance of groundwater resources and fit with Ostrom's Design Principle 3.
However, while this principle emphasizes the importance of local knowledge in designing effective rules and strategies for each situation, it is worth noting that the available 'low-cost' information may be insufficient. Successful PES schemes also depend on the information held by those outside the users' subgroup, such as the government, a statutory body or a private entity. 103 On the one hand, if the main buyer of services is the government or a statutory body, key information will be held at a single point and may be available either through annual reports, accounts or other publications, or under freedom of information rules. 104 However, while legislation in some countries may provide for transparent decision making and access to information, other countries (especially in the developing world) do not yet have the adequate statutory reporting rules possessed by most developed countries. Even where such rules already exist, there is a discrepancy between legislative requirements and implementation in practice. As a result, once collected by the government, environmental information tends to be difficult to obtain. 105 On the other hand, if the services are bought by the private sector, such as a manufacturer, the transactions will not be subject to such disclosure rules. This can create mistrust within the subgroup and have a negative impact on efforts to resolve their collective issues. This may suggest a need for further measures to be put in place to ensure the collection of data, transparent decision making and access to the information, but these measures must be balanced against the needs of private entities to maintain some confidentiality of potentially market-sensitive information. 106 Where there is a discrepancy between legislative requirements and implementation, availability and transparency in decision making in PES schemes can be achieved in different ways, such as through workshops. 107 In any case, in order to be effective, it is important for the public or everyone within the same subgroup(s) to understand the data that is made available. In this way, PES schemes can help to drive up standards because local users are more accepting of the rules.
A potential challenge to this argument is that market-based institutional tools such as PES can reinforce social hierarchies and structures. 108 In this sense, participation in a PES project can seem a more attractive option for larger stakeholders, which could be a disincentive for smaller individual participants, thereby limiting the applicability of this principle. With regard to groundwater, though, there is recognition that PES schemes 'can be set up so that the most vulnerable are protected', 109 thereby dismantling some of the social hierarchies to encourage greater participation. However, there must be an intention to achieve this benefit and such intentions will often depend on who instigates the arrangements for PES. 110 This is demonstrated clearly in the context of Mexico's implementation of PES schemes; while the World Bank did not equate environmental services with poverty reduction schemes, the government's position highlighted social benefits as an important reason to fund PES schemes. 111 Ostrom's argument for the engagement of as many stakeholders as possible should mean that, even if PES approaches are not going to facilitate revolutions in the very structure of society, participants that operate within their subgroups, either as buyers or sellers, should be assured that they can make a difference, so as not to endanger the success of groundwater governance.
Design Principle 4: Monitoring
This principle requires, firstly, the presence of monitors, and, secondly, that the monitors are members of the community or accountable to the community. 112 As PES is an incentive-based scheme, monitoring not only ensures that participants are complying with the rules and that decision making about the resource is informed by levels of compliance, but also ensures that participants remain incentivized to continue to engage in the scheme. Under most PES approaches, monitoring provisions are a standard feature of the contracts signed by the parties. 113 Such contracts may be between a buyer (such as a government or public agency), 114 on the one side, and sellers (such as landowners or managers), on the other side, or any intermediaries for the management of the ecosystem goods and services on private land. Even in the absence of standard contracts, appropriate monitoring would be necessary to ensure cohesion between subgroups, whether buyers or sellers, when they come together to coordinate their joint efforts.
Under CPR institutions where responsibility falls on the whole community or subgroup of users, setting up a monitoring mechanism with monitors who are part of or accountable to the appropriators is straightforward. Under some PES approaches that are governed by public regulation (such as some of the general subsidy schemes 115 ), it may be possible to devise a monitoring mechanism that fits with the requirements for this principle, in which accountability is to the community as a whole, but it may not be so straightforward with PES approaches that are a matter of private law arrangements. Under such approaches, monitoring compliance is essentially a matter for the parties, and they must be in a position to carry out this task. They will still be members of the community or subgroup, but accountability may be to other non-community or subgroup parties to the agreement as part of the market transaction. For example, groups of farmers (providers) act together in allowing flooding on their lands for the purpose of preventing flooding downstream, and monitor each other. Together, the farmers are accountable to the downstream entity that is paying for such services (buyer), which is separate from that community and could be another community, municipality, or even a single landowner. This process may be repeated for the other diverse subgroups of birders, ecologists, hunters, fishers and so on that rely on the groundwater aquifer, with monitoring and accountability within and between the subgroups to ensure compliance.
This segregation in monitoring duties may be avoided if a joined-up approach is adopted to ensure that even within different subgroups the various individual agreements reinforce each other to deliver wider community goods and services. In fact, under most PES schemes, such an approach will be necessary to ensure the delivery of services with interlinkages between different resources. For example, the Bolivian Los Negros-Santa Rosa PES scheme has as its overall objective to conserve biodiversity and protect the Los Negros River watershed. To achieve this, the buyers of the services (downstream irrigators), acting in common, negotiated two types of contract with the service providers (upstream farmer-landowners), also acting collectively. One type of PES contract prohibited 'tree cutting, hunting and forest clearing on enrolled lands'; the other type focused on reforestation of deforested areas of land in the watershed. 116 Here, the different contracts reinforce each other to achieve the overall objective, with the monitoring duties falling on parties who are members of the community. While it is not clear if the parties to the contracts feel accountable to the community, the fact that they are all working towards the overall goal of protecting the watershed, which has wider community benefits, would suggest some level of accountability to the community. Therefore, to achieve the monitoring requirements under this principle with such privately negotiated PES agreements, consideration should be given to the right of access to relevant information and rights of entry to carry out inspections by monitors who are members of the wider community of users but may fall within a different subgroup. These rights can either be left as a matter to be negotiated individually or provided for more generally; in either case, the parties should provide legal certainty by ensuring that they have the same understanding of their own and the other parties' respective rights, responsibilities and obligations, and of where risk is allocated. Such legal clarity is especially important for groundwater ecosystem services that are diverse and may require periods beyond the lifetime of the original parties to the agreement before significant benefits are delivered. However, as the example above shows, even a simple rule that appears clear to everyone on paper may be misinterpreted by some users, thus raising the possibility of dispute and failure later.
Design Principle 6: Conflict-resolution mechanisms
As the securing of benefits through PES approaches lies in contract, it necessarily follows that relying on the external court system to resolve intra- and inter-community contractual conflicts may be easier. While there are issues of cost to contend with, the example of the acequia irrigation communities in northern New Mexico proves just how important such a conflict-resolution mechanism can be to a functioning sustainable groundwater governance arrangement. Here, for over 100 years, the communities have turned to the 'external court systems under different national regimes to resolve intercommunity conflicts'. 125 While the availability of low-cost and accessible conflict-resolution mechanisms does not guarantee that PES approaches will deliver groundwater benefits, it is also difficult to see how any such scheme will succeed without such mechanisms, given the complex set of rules across the different subgroups of providers, buyers or both that need to be maintained over the time necessary to deliver the groundwater ecosystem services.
Design Principle 7: Minimal recognition of rights to organize
This principle requires that local users be free to devise their institutions and rules free from challenge by external governmental authorities. 126 It is worth noting that this principle does not rule out a role for external governmental agencies in the governance of CPR institutions but that the role must be carried out in recognition of the rules set by local users for themselves and their ability to enforce the rules. In this way, the risk of imposing rules that do not match the local conditions is avoided. 127 For PES approaches to succeed as an institutional tool in the governance of CPRs such as groundwater, they must start with recognizing the rules in operation for local users and build on this. In general, PES approaches would rely on local users' knowledge of the local conditions. While external governmental support (perhaps in the form of payments and also in prescribing the management practices to be implemented) may be necessary, this is only additional and is often done with recognition of the local rules and management practices in place. In the Nicaraguan PES case study, for example, a foreign governmental aid agency worked with local authorities and a local factory to identify and support local farms under the PES project. 128 Similarly, under the EU agri-environment schemes, while some payments are made conditional on compliance with additional environmental standards imposed under EU regulations, this is done in recognition of the local rules in operation (including rules on tenure) among farmers. This means that for a groundwater ecosystem with its interconnected nature, there must be recognition of the local rules in operation within and between the diverse user groups of, for example, farmers, birders, fishers and grazers.
For PES approaches to accord with this principle, they must start by 'recognising local knowledge and existing institutions at an early stage', 129 and then build on them for the long-term sustainability of any scheme. Indeed, the exact nature of any new groundwater governance projects will depend on the broader governmental and legal context already in place. This is especially important in cases where the PES scheme is initiated with the help of funding from an external agency, which may bring with it a presumption that it has the authority to set the rules on certain financial models that may be incompatible with local rules. 130
Design Principle 8: Nested enterprises
This principle stipulates that for larger CPRs, governance activities are organized in the form of multiple layers of nested enterprises, with small local CPRs at the base level. 131 Cox, Arnold and Villamayor-Tomás highlight the importance of nesting smaller CPR systems into larger systems, especially 'given the high probability that the social systems have cross-scale physical relationships when they manage different parts of a larger resource system and thus may need mechanisms to facilitate cross-scale cooperation'. 132 They also note that institutional nesting is important in accomplishing the user and resource boundaries requirement in Principle 1 in many situations. 133 Nesting can occur between local user groups themselves (horizontal linkages) or between local user groups and various governmental jurisdictions (vertical linkages). In any case, nesting will be important in designing PES schemes to support groundwater governance. It highlights the linkages between the different operating levels represented by participants in PES projects, from the different subgroups of local users who are in charge of delivering the services to the buyers of those services. 134 As discussed under Principle 4 above, by their very nature PES approaches target specific local action, at either the individual or community level, but with all working towards an overall goal, such as protecting the recharge of the aquifer, which has wider community benefits. Nesting is therefore at the heart of PES approaches and will be especially important in transboundary catchments where there is a need not only for the kind of horizontal and vertical linkages described above, but also for horizontal linkages between various governmental jurisdictions across borders.
Such nesting across different jurisdictional boundaries can be seen in the US in the New York City Catskill Watershed PES scheme, which was designed to ensure that the city continues to enjoy high-quality, affordable drinking water. 135 The scheme involves agencies at the federal level (the US Environmental Protection Agency) and state level (New York City, eight upstate counties and more than 60 towns and villages crossing multiple jurisdictions, outside New York City). 136 Another example where nesting is seen to operate in PES approaches across large transboundary catchment areas is the Promoting Payments for Ecosystem Services in the Danube Basin project, which ran from 2010 to 2014, involving Bulgaria, Romania, Serbia, and Ukraine. With financial support from the United Nations Environment Programme (UNEP), the Global Environment Facility (GEF) and the European Commission, the project focused on developing and demonstrating national and local-level PES schemes to be integrated into the River Basin Management Plans for the River Danube and its sub-basins. 137 These examples show the importance of integrating nesting in PES approaches for the governance of larger enduring CPRs, such as large-scale transboundary aquifers, with several major user groups, each getting together to decide who should pay and then 'nesting' with other groups to coordinate their joint efforts to pay another group: for example, birding groups getting together and then connecting with rafting, fishing and/or hunting groups to coordinate joint efforts to pay farmers either directly or through an intermediary.
Design Principles: A summary
In many ways, PES approaches may be designed to fit with Ostrom's design principles, as demonstrated by the preceding analysis. However, PES schemes reward bad environmental behaviour in so far as they do not dispute, for example, the right to pollute, but rather attempt to 'construct avenues for reducing environmental impact and the degradation of nature as a result of offenders misbehavior'. 138 A central question is therefore which kinds of behaviour PES schemes should incentivize. One approach would be to treat PES schemes as additional to conventional regulation, and to use them to target activities that are permitted, providing another layer of incentives. 139 This highlights the importance of effective design and implementation of PES schemes, and their fit with local laws and politics. The Offsetting Scheme in Kumamoto (Japan) shows how the careful application of PES to groundwater in conformity with Ostrom's principles can achieve desired outcomes. 140
The challenges of groundwater resources management are considerable, ranging from the lack of appreciation of the interconnectedness between groundwater and surface water in law and policy to the transboundary nature of groundwater. It is clear that no single approach to governance can deliver the desired outcomes. Effective governance needs to include private sector and non-governmental actors, as well as governmental activity. This article considers the potential for PES-style models to provide a new and flexible framework for governing a CPR such as groundwater.
The analysis shows that PES approaches can be designed to fit with Ostrom's principles on governing the commons. PES approaches allow linkages to be established between groundwater and other resources, and are flexible enough to accommodate different contexts and attract different stakeholders. The groundwater governance framework envisages that 'the ideal institutional set-up would integrate linkages and functions of groundwater management vertically between the national level and the local level, and horizontally at each level with other sectors and agencies impacting on groundwater'. 141 This 'ideal' structure essentially demands that various stakeholders at different levels in society are allowed to play a role in groundwater management and that the management is executed in a way that makes it effective, given the effect that it can have on other aspects of the environment, and the effect that those other aspects can have on groundwater.
It is true that 'effective groundwater management and protection without stakeholder participation is hard to achieve – but equally stakeholders alone are unlikely to be able to manage an aquifer without some form of government support'. 142 PES schemes can help to achieve these two cornerstones of groundwater governance, especially in jurisdictions in the global south where groundwater currently receives little regulatory attention, and in other jurisdictions where direct regulation is not quite delivering on the sustainable management of the resource. Rather than operating against Ostrom's analysis of best governance practice for CPRs, PES schemes correspond with it, offering great potential for the future of groundwater management.
"year": 2022,
"sha1": "e7032ead29ab5f19184ea708ceb074ddabb25d6e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9DD7B9F53405CA885BB1CFFDD79A6FD5/S2047102522000164a.pdf/div-class-title-achieving-groundwater-governance-ostrom-s-design-principles-and-payments-for-ecosystem-services-approaches-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "4867572254c4a95ccbe6c4a97d6d057c7ba51c8b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Evidence-Based Strategies for the Treatment of Peritoneal Malignancies during Health Care Resource Restriction: The COVID-19 Pandemic
Background: The COVID-19 pandemic has put enormous pressure on hospital resources, and has affected all aspects of patient care. As operative volumes decrease, cancer surgeries must be triaged and prioritized with careful thought and attention to ensure maximal benefit for the maximum number of patients. Peritoneal malignancies present a unique challenge, as surgical management can be resource intensive, but patients have limited non-surgical treatment options. This review summarizes current data on outcomes and resource utilization to help inform decision-making and case prioritization in times of constrained health care resources. Methods: A rapid literature review was performed, examining surgical and non-surgical outcomes data for peritoneal malignancies. Narrative data synthesis was cross-referenced with relevant societal guidelines. Peritoneal malignancy surgeons and medical oncologists reviewed recommendations to establish a national perspective on case triage and mitigating treatment strategies. Results and Conclusions: Triage of peritoneal malignancies during this time of restricted health care resources is nuanced and requires multidisciplinary discussion with consideration of individual patient factors. Prioritization should be given to patients where delay may compromise resectability of disease, and where alternative treatment options are lacking. Mitigating strategies such as systemic chemotherapy and/or surgical deferral may be utilized with close surveillance for disease stability or progression, which may affect surgical urgency. Unique hospital capacity and the ability to manage the complex post-operative course for these patients must also be considered to ensure patient and system needs are aligned.
Treatment decisions must account for the capacity of the local health care system, individual patient factors and the availability of alternative treatment strategies. Such knowledge is essential for case prioritization, distribution of resources, and to enable evidence-informed conversations with patients.
This review aims to summarize data regarding resource utilization, as well as disease recurrence and progression for complex peritoneal diseases of non-gynecologic origin to help guide resource allocation and decision-making during a time of strained and restricted operating room (OR) and critical care resources.
Literature Sources
A targeted rapid literature review [15] using PubMed and MEDLINE was performed, identifying publications reporting on outcomes for peritoneal malignancies with surgical and/or non-surgical interventions. Evidence discussing short-term morbidity, resource utilization, and predictors of oncologic outcome was examined. Citation tracking and manual reference list examination were performed to screen for additional studies. Recommendations and guidelines from oncologic societies in North America as of April 11, 2020 were consulted to identify and compare current expert consensus with data extracted for this review.
Inclusion and Exclusion Criteria
English-language papers published between 2000 and 2020 that were available in full article form and discussed treatment outcomes for peritoneal malignancies were included. Feasibility-only studies of treatment strategies were excluded. Due to heterogeneity in levels of evidence and the relatively small volume of literature, all relevant randomized studies, cohort studies, case series, and retrospective data reviews were included.
Data Synthesis
F.S. performed the literature search and data abstraction, and D.B./A.M./A.G., peritoneal surgeons at Mount Sinai Hospital, agreed upon data inclusion and recommendations. Due to heterogeneity in study quality and treatment strategy, a narrative summary approach was selected, and data were not pooled for further analysis. Data were examined and summarized by disease process, and divided by treatment strategy (i.e., observation, surgical management, medical management, combination management) when relevant.
A review of data and recommendations was performed by two medical oncologists from the peritoneal disease program at Mount Sinai Hospital to ensure appropriate multidisciplinary perspectives. Finally, data synthesis and recommendations were reviewed and accepted by peritoneal surgeons from 8 complex peritoneal malignancy programs in Canada, achieving agreement and consensus on a national level. A summary of major conclusions can be found in Table 1.
Appendiceal Mucoceles
The term appendiceal mucocele refers to dilatation of the appendix with mucinous content, which may be the result of a variety of causes including chronic appendicitis/inflammatory mucoceles, appendiceal diverticula, and benign or malignant appendiceal neoplasms. While size alone is not predictive of underlying etiology, mucoceles less than 2 cm in diameter are less likely to be associated with neoplastic causes [16,17]. These have been included in this review, as rupture portends the potential for peritoneal spread when appendiceal neoplasms are the underlying etiology.
At present, there is no clear evidence regarding the likelihood of mucocele rupture over a particular time course. Within neoplastic causes, low-grade appendiceal mucinous neoplasms (LAMNs) are generally chronic and indolent in their progression [18]. In contrast, invasive lesions such as adenocarcinoma have the potential to progress more quickly both within the appendix itself as well as locoregionally and to distant organs [19]. The decision to pursue operative intervention for mucoceles under pandemic conditions should be guided by the specific stresses of an individual hospital or health care region. If OR access is limited, deferral is reasonable to consider given their relatively slow-growing nature. In contrast, if hospital beds and Intensive Care Unit (ICU) resources are the primary concern, resection of the mucocele may be reasonable, as these can often be resected laparoscopically with same-day discharge. Where these cases are postponed, close surveillance with cross-sectional imaging should be performed to ensure mucocele stability, as features such as progression in size or suspicious nodal findings may suggest more aggressive pathology and the need for expedited resection.
Pseudomyxoma Peritonei (PMP)
The term pseudomyxoma peritonei describes a clinical syndrome consisting of mucinous ascites secondary to perforation and peritoneal dissemination of mucinous neoplasms, primarily of the appendix (most commonly LAMNs) [19]. The current standard treatment for PMP originating from a LAMN is cytoreductive surgery (CRS) of peritoneal disease followed by intraperitoneal perfusion of hyperthermic chemotherapy (HIPEC). CRS/HIPEC is associated with significant potential peri-operative morbidity and increased length of stay [20,21], which must be considered when surgical and intensive post-operative resources may be limited.
Observation
Retrospective data from Zih et al. (2014) examined overall survival (OS) and progression-free survival (PFS) in PMP patients with limited, low-grade disease and minimal symptoms who were managed expectantly with clinical and imaging surveillance [22]. Five-year PFS in this population was 82%; in those who progressed, median time to disease progression was 50 months, with no compromise in the ability to achieve adequate cytoreduction [22]. These results speak to the indolent nature of this pathology, and support the safety of surgical delay with close observation in patients meeting the above criteria.
A theoretical risk of observation in patients with PMP from low-grade appendiceal neoplastic disease is one of malignant dedifferentiation and progression to more invasive disease. Tumour analysis after repeat CRS for recurrent PMP by Chua et al. [23] found that 9 of 58 (15%) patients underwent malignant dedifferentiation of the primary tumour: four into well-differentiated mucinous adenocarcinoma, and five into moderately differentiated mucinous adenocarcinoma. While 5-year survival rates in these patients were lower (75% vs. 89% from time of diagnosis), median time from diagnosis of PMP to disease progression was 41 months [23]. Despite the risk of tumour dedifferentiation, this speaks to the relative safety of expectant management if required in the context of a time-limited health resource crisis, as the likelihood of progression to "unsalvageable" disease within a short timeframe of 3-6 months is exceedingly low.
Cytoreductive Surgery (CRS) and Hyperthermic Intraperitoneal Chemotherapy (HIPEC)
CRS alone as a management approach results in a median OS of up to 10 years [24]. Median progression-free survival after CRS alone, however, is between 24 and 30 months [25], with a median time to recurrence of 24 months, even in patients with complete or near-complete cytoreduction (CC0, no visible peritoneal disease after cytoreduction, or CC1, nodules < 2.5 mm after cytoreduction) [26].
Over the past decade, an increasing number of large case series examining outcomes with the addition of HIPEC have been published. Although intraperitoneal chemotherapy regimens differed, 5-year disease-free survival rates from PMP following CRS/HIPEC of 31-74% have been reported, with better results and a median progression-free survival of 8 years in patients who had complete (CC0) or near-complete (CC1) cytoreduction [25,27-29].
At present, no randomized trials comparing CRS alone versus CRS/HIPEC for PMP exist [22]. Of note, multivariate analyses performed in a large retrospective study by Chua et al. (2012) demonstrated an association between HIPEC and improved progression-free survival [23]. Acknowledging the indolent nature of this disease, and the need in many cases for extensive cytoreduction, PMP cases should be triaged based on site-specific challenges. In cases where maintaining higher-level ICU and monitored care capacity is of concern, definitive CRS and HIPEC for asymptomatic or minimally symptomatic patients can be delayed. As with mucoceles, these patients should undergo close surveillance for more rapid progression, which may suggest a mixed or dedifferentiated tumour type requiring more urgent intervention. Given the low rates of response to systemic chemotherapy in PMP patients and the lack of alternative non-surgical treatment options [30], those with progressive disease on surveillance, borderline resectability, or large Krukenberg lesions causing symptoms should be prioritized for surgery. Additionally, patients with significant symptomatology related to intraabdominal disease burden, such as diaphragmatic compression, partial bowel obstruction, and renal failure, should be considered for temporizing debulking surgery, which may improve symptoms with minimal operative morbidity [31], and serve as a bridge to definitive surgical therapy if eligible.
Colorectal Cancer (CRC) with Peritoneal Metastases
Peritoneal metastatic disease from colon and rectal cancer (M1c disease), with or without visceral organ metastases, is estimated to occur in 8-10% of colorectal cancer patients [32]. Patients with peritoneal metastasis of colorectal origin differ from PMP patients due to the more aggressive and progressive nature of the disease. In contrast to pseudomyxoma, delayed treatment of these patients may carry a risk of disease progression and subsequent surgical unresectability [32]. As a consequence, this group of patients generally requires more urgent prioritization over more indolent peritoneal pathologies such as PMP.
Systemic Chemotherapy
A recent systematic review by Waite and Youssef (2017) [33] suggested no clear evidence that neoadjuvant chemotherapy improves overall survival in patients with peritoneal metastases from CRC going on to surgical intervention. This study was limited, however, by significant heterogeneity between studies, all of which were non-randomized and/or retrospective. At present, the current standard treatment in Canada consists of neoadjuvant 5-fluorouracil-based chemotherapy for most cases of colorectal cancer with peritoneal metastases [34], with the potential benefits of tumour downstaging, improvement in the completeness of cytoreduction, and treatment of micrometastases. Most importantly, response to chemotherapy may provide a useful marker of disease biology, which may help guide decisions about whether aggressive surgical intervention should be pursued [33]. In times of pandemic, patients demonstrating disease stability or improvement on systemic chemotherapy should continue treatment as tolerated until operating room and critical care resources are more readily available.
Cytoreductive Surgery and HIPEC
For patients with limited peritoneal metastases in the absence of visceral or systemic metastases in colorectal cancer, cytoreductive surgery with hyperthermic peritoneal chemotherapy perfusion has been shown in a randomized study by Verwaal et al. [35,36] to improve median survival from 12.6 months in patients receiving systemic chemotherapy alone to 22.4 months; these rates are comparable to those seen in more recent case-control studies [36,37]. Because the comparator arm of this trial predated modern regimens, these figures may underestimate the efficacy of oxaliplatin-based chemotherapy. Subgroup analyses demonstrated vast differences in survival depending on the volume of peritoneal disease [36]. In the context of limited operative resources, those with lower peritoneal cancer index (PCI) scores are more likely to obtain significant survival benefit from CRS/HIPEC, and should be considered over those with higher PCI scores. In order of most to least predicted benefit, patients may be stratified as having PCI < 10, PCI 10-20, and PCI > 20. Given poor outcomes in the more advanced group, PCI > 20 should be considered a contraindication for CRS/HIPEC [32,38].
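To make the stratification above concrete, the following minimal sketch (purely illustrative; the function name, wording of the tier labels, and use of Python are assumptions for illustration only and not part of any published triage tool) maps a PCI score onto the predicted-benefit tiers described in the text.

def pci_benefit_tier(pci: int) -> str:
    # Tiers mirror the text: PCI < 10 (greatest predicted benefit),
    # PCI 10-20 (intermediate), PCI > 20 (generally a contraindication to CRS/HIPEC).
    if pci < 10:
        return "highest predicted benefit - consider prioritizing for CRS/HIPEC"
    elif pci <= 20:
        return "intermediate predicted benefit"
    else:
        return "PCI > 20 - generally considered a contraindication to CRS/HIPEC"

For example, pci_benefit_tier(7) returns the highest-benefit tier, consistent with prioritizing low-PCI patients when operative resources are limited.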
Although the discussion regarding optimal regimen of HIPEC after CRS is ongoing [39], at present, the administration of HIPEC after cytoreductive surgery in the absence of limiting patient factors is standard management for patients with peritoneal metastases from CRC. Should CRS with HIPEC not be feasible in the context of individual OR and hospital resources, disposition for these patients and consideration of resumption of systemic chemotherapy as a bridge to surgery should be discussed in a multidisciplinary setting, whenever possible. While there are currently no studies examining such a treatment strategy specifically, one can extrapolate from randomized studies examining palliative chemotherapy treatment strategies. Specifically, those patients initially demonstrating chemosensitive tumours, who experience progression while on "chemo break," can often be managed successfully with resumption of the same treatment [40]. For those patients in whom neoadjuvant chemotherapy was poorly tolerated, decisions regarding optimal timing of CRS/HIPEC versus alternative systemic options should be made in a multidisciplinary team on a case-by-case basis.
In cases of unexpected incomplete cytoreduction, or a completeness of cytoreduction score greater than CC1 [26] (i.e., lesions > 2.5 mm in size or confluence remaining), the elimination of HIPEC should be strongly considered, weighing the potential morbidity against the minimal survival benefit in these patients when compared to systemic chemotherapy alone [41]. At present, given the absence of evidence supporting prophylactic HIPEC in patients without visible peritoneal metastatic disease, this should not be pursued during pandemic conditions [42].
Appendiceal Adenocarcinoma
The majority of the treatment for appendiceal adenocarcinomas is extrapolated from CRC data, given the rarity of this disease (0.12 per 1,000,000 people annually) [43]. As with M1c colorectal cancers, patients with intermediate- and high-grade lesions, including ex-goblet cell carcinoids, are treated upfront with 3-6 months of neoadjuvant 5-fluorouracil-based chemotherapy in an attempt to diminish tumour burden, increase the probability of complete cytoreduction, and ensure no development of visceral metastases in the interval preceding surgery [43-46]. In patients who are currently on systemic chemotherapy, we suggest continuation of this treatment as tolerated by the patient until surgical capacity is available.
The relationship between completeness of cytoreduction and outcome seen in colorectal cancers has been similarly demonstrated in retrospective studies examining appendiceal adenocarcinomas [18,46]. We suggest the principles for pursuing CRS with or without HIPEC as outlined for management of metastatic colorectal cancer (above) be applied to this pathologic group.
Peritoneal Mesothelioma
Malignant peritoneal mesothelioma (MPM) is a rare peritoneal malignancy arising from the peritoneum itself, with a high propensity for morbidity and mortality due to significant local abdominal progression and a predicted survival of less than 1 year without treatment [47,48]. Disease control rates ranging from stability to improvement in up to 71% of patients have been reported with systemic chemotherapy alone; however, this generally results in a median overall survival of only 10-26.8 months [48,49]. In comparison, a systematic review and meta-analysis performed by Helm et al. (2015) reported CRS/HIPEC to confer a median overall survival of 19-92 months in carefully selected patients [48]. Despite the significant heterogeneity in histologic profiles and treatment protocols, these results support the pursuit of CRS/HIPEC when possible. Of note, 75% of patients received pre-operative systemic chemotherapy. Given the rarity of this disease, minimal data exist regarding optimal chemotherapy options for MPM; systemic treatment protocols are extrapolated from trials in pleural mesothelioma [50]. Adjuvant treatment protocols have, however, demonstrated superior progression-free survival over surgery alone [51], and retrospective studies demonstrate a short-term survival benefit from chemotherapy in both neoadjuvant and adjuvant settings [52]. Accordingly, for those patients likely to derive benefit from CRS/HIPEC in times of OR strain and limitation, neoadjuvant chemotherapy is a reasonable mitigating strategy until operative intervention can be performed. Importantly, these patients should be monitored closely for disease progression, so that more urgent prioritization can occur if concerns regarding resectability arise.
Patient factors significantly associated with poorer prognosis and minimal benefit from CRS/HIPEC that may be used to stratify mesothelioma patients include: age >60 years, high-grade tumour biology, and incomplete cytoreduction (>CC1) [47]. In the context of limited resources, patients in whom CC2 cytoreduction (residual nodules 2.5 mm-2.5 cm after cytoreduction) is not possible should be preferentially treated non-operatively with systemic chemotherapeutic agents.
Chemotherapy and COVID-19 Infection
In triaging and selecting treatment plans for patients with peritoneal disease, it is important to consider the risks associated with the alternative treatment strategies utilized while surgical access is limited. Both chemotherapy and the primary malignancy itself can cause systemic immunosuppression and a consequent increased susceptibility to viral infection, resulting in increased hospital admission and risk of mortality [53,54]. Specific to COVID-19 infection, a recent review of prospectively collected data from China suggested more frequent and more severe COVID-related complications in those who had undergone chemotherapy within the past month [12].
To mitigate the risk of infection in these potentially immunocompromised patients, treatment protocols may be tailored by the medical oncology team to minimize exposure of patients to hospital settings where the likelihood of COVID-19 exposure is highest [55,56]. Practical considerations include telemedicine visits when appropriate, chemotherapy breaks, and substituting oral chemotherapy options where possible (e.g., capecitabine for 5-FU), which reduces time in the chemotherapy unit, negates the need for a central line, and minimizes steroid doses [56].
Resource Utilization
When triaging patients in the context of a pandemic and strained health care resources, it is critical to consider the risk of the disease to the individual patient, the areas of capacity and deficiency within the individual health care setting, and how these often competing priorities can be balanced to offer the most benefit to the largest number of patients [1,3,4].
Early reports of CRS/HIPEC describe ICU stays of 7-20 days in the post-operative period [25]. While the requirement for ICU care, especially immediately following surgery, has decreased as HIPEC has become more frequently utilized, some retrospective studies note that up to 67% of patients require ICU admission at some point during their hospital stay [20,59]. In cases of limited ICU resources, both upfront and delayed post-operative issues, such as anastomotic leak potentially requiring higher-level care, must be considered and mitigated when possible (e.g., a lower threshold for a diverting stoma in higher-risk lower gastrointestinal anastomoses). Finally, hospitalization duration for CRS/HIPEC patients can be significant, with reported lengths of stay of 8-36 days (mean 6-13 days), including patients managed with enhanced recovery after surgery protocols [20,60].
A final resource potentially affected by the COVID-19 pandemic has been blood product availability. Due to closures of blood donation centers and social distancing practices, health systems are facing critical blood product shortages [61,62]. Large retrospective reviews demonstrate transfusion rates of up to 74% in patients undergoing CRS/HIPEC [63,64] suggesting blood product availability is a critical resource that must be considered prior to proceeding with these surgical cases in pandemic conditions.
Conclusions
The COVID-19 pandemic has strained hospital systems, and has resulted in triage and prioritization of surgical cases in an unprecedented way. For cancer patients specifically, this has required health care providers to weigh a number of intertwining factors that include not only disease process, but also the availability of temporizing alternative treatment strategies as well as system capacity to manage post-operative care. This review summarizes the available data regarding treatment strategies and the risk of progression of peritoneal malignancies to help inform and guide prioritization of these cases. While discussed in the context of the current pandemic, the strategies described in this review can also be applied to broader situations of restricted health care resources.
Triage of operative interventions for peritoneal disease is highly nuanced and depends significantly on individual site resources and challenges. Within the peritoneal disease population, consideration should be given to the initial resource burden and predicted hospital stay, as well as the potential for additional critical care utilization should post-operative complications arise. Multidisciplinary discussion should be utilized frequently to aid in prioritization and in selecting mitigating treatment strategies for individual patients as this dynamic situation continues to evolve. As capacity returns within systems, cases will require re-prioritization to ensure those requiring the most urgent resection are given precedence, and hospitals will need to consider strategies to deal with the mounting backlog of delayed cancer cases. When resources are available, CRS and HIPEC should be pursued in line with pre-pandemic criteria and remain the standard of care for the management of peritoneal disease. | 2020-12-07T14:07:56.878Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "d289b7478228bbfbbfb77a40a30fe365ac751454",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7816179?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "820f48c26b834f137befdbcf659e4cfe4533f24c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237335442 | pes2o/s2orc | v3-fos-license | Exploring Court Culture and its Scale Development
We conducted three interlocking studies to explore the concept of court culture and to develop a court culture scale. In study 1, we conducted in-depth interviews of legal professionals and clients (n=51) from the Indian states of Uttar Pradesh (UP) and Uttarakhand (UK) to identify the indicators of court culture. In study 2, we generated the items and refined them. In study 3, we surveyed legal professionals to assess measurement dimensionality and reliability (n=517). The scale indicated seven independent court culture dimensions: outcome orientation, manipulation, discipline, individualism & collectivism, work orientation, pride, and professionalism.
Introduction
Courts are one of the crucial pillars of any civilized society, and the culture of an organization influences many of its processes and outcomes. Hence, understanding and measuring court culture is vital for efficient decision-making and better-performing courts.
Literature Review
The culture of any organization, institution, or place tends to affect its functioning and outcomes. Courts are important institutions for maintaining peace and harmony in society.
Understanding court culture can give insights into the courts' strengths and weaknesses and thereby improve their overall work. Measuring court culture, however, is a major challenge. A few studies have attempted to define and measure court culture. Legal culture is a similar term, used to denote the macro-level culture of society in the legal context. In recent years legal culture has been studied extensively, whereas the concept of court culture in the context of modern courts has rarely been studied [14;15;16;21;22;23;28].
The study of T. W. Church is the pioneering work on court culture. He used the term local legal culture and defined it as "the expectations, practices, and informal rules of judges and attorneys' behavior [2]." B. J. Ostrom et al. have classified four types of culture, i.e., clan, adhocracy, market, and hierarchy. In the clan type of culture, the organization is like an extended family. In adhocracy-type culture, an organization is a dynamic and entrepreneurial place. In a market-type culture, the organization is result-oriented, and in the hierarchy-type culture the organization is very controlled. They developed a competing values matrix using the factors of dominant characteristics, organizational leadership, management of employees, organizational glue, strategic emphasis, and criteria for success [20]. In some studies the term court culture is not used, but similar concepts are discussed. For example, J. Eisenstein and H. Jacob used the term 'courtroom work group' for the relationship among the defense lawyer, the prosecution lawyer, and court judges [3]. R. Goffee and G. Jones used a two-dimensional diagram based on sociability and solidarity. Sociability suggests the extent of friendliness among people, creating a family-type atmosphere, whereas solidarity is the extent to which people understand their goals and share commitment [5]. Like R. Goffee and G. Jones, B. J. Ostrom and R. A. Hanson classified court culture into the dimensions of solidarity and sociability, representing four types of culture, i.e., communal, networked, autonomous, and hierarchical [19]. B. J. Ostrom and R. A. Hanson (2009) describe court culture as the beliefs and behaviors of the people responsible for resolving cases [19]. S. Heath has done a detailed study using Leverick & Duff's (2002) passive and proactive court culture indicators. These indicators are broadly categorized into three parts, i.e., expectations and understandings, practices and incentives, and workgroup relationships. S. Heath interviewed court stakeholders and compared two courts to determine whether each was proactive or passive [9]. A. Hucklesby defined court culture as "a set of informal norms which are mediated through the working relationships of the various participants" [13]. In contrast, G. C. Diana relates it to "values developed in each jurisdiction as a result of the evolution of criminal justice ideology and guiding philosophies over time" [6]. Y. Huang explains that court culture includes the creation of the court system, the formation of legal thinking, the training of judges, etc. [12].
The study of G. Hofstede is considered a pioneer study in the area of culture. It identified four dimensions of culture: individualism versus collectivism, large versus small power distance, strong versus weak uncertainty avoidance, and masculinity versus femininity [10]. To measure these broader dimensions in the context of the culture of countries, G. Hofstede considered items related to the role of the family in work situations, the importance of harmony, the nature of the employer-employee relationship, the meaning of status difference, respect for old age, ways of grievance redressal, formalization of organizations, implicit models of organizations, the meaning of time, the appeal of precision and punctuality, tolerance of deviant behaviors and ideas, career expectations, etc. [10]. K. S. Cameron and R. E. Quinn relate culture to the ideology people carry inside their heads; in an organizational sense, culture is the unwritten rule that everyone follows in an organization [1]. M. Wu added a further dimension, Confucian work dynamism, to Hofstede's framework [27]. Later on, A. A. Moemeka discussed individualism, collectivism, and communalism as the broader dimensions of a society; communalism emphasizes community welfare instead of the interests of individuals [17]. F. Trompenaars and C. Hampden-Turner elaborated the concept of culture and identified seven dimensions: universalism versus pluralism, individualism versus communitarianism, specific versus diffuse, affectivity versus neutrality, inner-directed versus outer-directed, achieved status versus ascribed status, and sequential time versus synchronic time [24]. The renowned anthropologist E. T. Hall discussed eight dimensions: overtness of messages, locus of control and attribution for failure, non-verbal communication, expression of reaction, cohesion and separation of groups, people bonds, level of relationship commitment, and flexibility of time [8].
Previous studies have tried to differentiate culture based on different categories, but in the Indian context, courts have uniformity of court rules and hierarchy. Hence, the conceptualization of court culture in the Indian context has to be explored, keeping in mind the local conditions. For that, I have analyzed court culture using study 1.
Study 1: Exploring Court Culture
Study 1 was conducted to develop a detailed understanding of court culture from semi-structured interviews of lawyers, clients, court staff, and judges. The interview questions focused on the indicators of court culture.
Detailed interviews were conducted with lawyers (n=15), judges (n=4), court staff (n=14), and clients (n=15) from seven district courts of UP and UK using the non-probability purposive sampling method. Interviews with the 51 respondents were conducted either in Hindi or English and lasted from 10 to 20 minutes. The researcher referred to an interview schedule and also asked further questions based on the respondents' answers. Interviews were conducted until the point of conceptual saturation [4;6]. Notes were made while conducting the interviews.
Thematic analysis was used to analyze the interviews. Interviews were recorded, coded, and analyzed based on themes and patterns. Based on the observations and interviews, I identified court culture indicators such as justice sensitivity, work orientation, innovativeness, individualism-collectivism, manipulation, outcome orientation, openness, power distance, pride, fatalism, ambiguity avoidance, masculinity, and infrastructure. These dimensions can be broadly grouped under characteristics, beliefs, opinions, and practices. I conceptualized court culture as the system of broadly shared characteristics and practices of an organization together with the beliefs and views of its judges, lawyers, and court staff.
Study 2: Item Generation
Study 2 generated 106 scale items based on hints from the existing literature and the in-depth interviews of the 51 respondents. I considered items suggested by the existing literature and the field visits and referred to established methods for scale development [13]. The items were screened by three research scholars, one professor, and ten advocates fluent in Hindi and English, who rated each item on a scale of 1 to 5 for relevance and clarity and provided optional comments. Thirty-one items with a low mean value for relevance were removed, and two items with a low mean value for clarity were modified, leaving 75 items, as sketched below.
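As a rough illustration of this screening step, the sketch below computes each item's mean relevance and clarity rating and filters on a cut-off; it assumes the expert ratings sit in a pandas DataFrame, and the 3.0 threshold and column names are illustrative rather than taken from the study.

```python
# Minimal sketch of the expert-rating screen: each rater scores every candidate
# item from 1-5 on relevance and clarity; items with a low mean relevance are
# dropped and items with a low mean clarity are flagged for rewording.
import pandas as pd

def screen_items(ratings: pd.DataFrame, cutoff: float = 3.0):
    """ratings: one row per (item, rater) with 'item', 'relevance', 'clarity' columns."""
    means = ratings.groupby("item")[["relevance", "clarity"]].mean()
    keep = means[means["relevance"] >= cutoff]                 # retained items
    reword = keep[keep["clarity"] < cutoff].index.tolist()     # kept, but wording needs revision
    return keep.index.tolist(), reword

# Example use (ratings_df is a hypothetical table of 14 raters x 106 items):
# kept_items, items_to_reword = screen_items(ratings_df)
```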
A pilot study was conducted by taking responses from 105 respondents, including lawyers (n=100), judges (n=2), and court staff (n=3). The pilot data were analyzed by checking item means, communalities, and inter-item correlations, after which 51 items were retained for study 3.
Study 3: Scale Characteristics and Factor Structure
The objective of study 3 is to assess the newly developed scale's characteristics and factor structure. Legal professionals, including judges, lawyers, and court staff from UP (n=434) and UK (n=83), responded to the survey. Google Forms were used to collect the data either in face-to-face meetings or via electronic messages. A total of 517 valid responses were received. Of the respondents, 47.5% were graduates and 52.5% were postgraduates or above; 100 percent were male, with an average age of 35.09 years, average court experience of 8.65 years, and an average distance between home and court of 11.50 kilometres.
I used 75 items to measure court culture, administered in a Hindi translation of the scale because Hindi is the native language of UP and UK. Participants were asked to give their opinion on a scale of 1 to 5 (1 = Strongly disagree, 5 = Strongly agree). Items were translated into Hindi by a bilingual translator, and another bilingual translator translated them back to English to verify the accuracy of the translation.
Internal Consistency
Cronbach's alpha for the scale is 0.91, which is within acceptable limits as per the recommendations of J. F. Hair et al. and J. C. Nunnally [7;18]. Reliability statistics were computed again after assessing the factor structure; Cronbach's alpha for each of the factors is within or near acceptable limits.
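The reported coefficient can be reproduced from raw item responses with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below is a minimal implementation assuming the responses are held in a pandas DataFrame with one column per item; the `factors` mapping of factor names to item columns is hypothetical.

```python
# Minimal sketch of Cronbach's alpha for the full scale and for each factor.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: respondents x items matrix of Likert scores."""
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale score
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# alpha_total = cronbach_alpha(responses)          # reported as 0.91 for the full scale
# alpha_by_factor = {name: cronbach_alpha(responses[cols]) for name, cols in factors.items()}
```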
Factor Structure
SPSS statistics software was used to perform EFA on the data for the 51 items. The KMO test of sampling adequacy gave a score of .92, indicating an adequate sample (>.50), and Bartlett's test of sphericity was significant (χ2 = 7866.532, p < .001) [25].
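These two factorability checks can also be reproduced outside SPSS; the sketch below uses the open-source factor_analyzer package, assuming `responses` is the n x p DataFrame of item scores used in the EFA.

```python
# Sanity checks for factorability: Bartlett's test of sphericity and the KMO measure.
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

chi_square, p_value = calculate_bartlett_sphericity(responses)   # significant => correlations exist
kmo_per_item, kmo_total = calculate_kmo(responses)               # overall KMO > .50 => adequate sampling
print(f"Bartlett chi2={chi_square:.3f}, p={p_value:.4f}; overall KMO={kmo_total:.2f}")
```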
I performed EFA with principal axis factoring and direct oblimin rotation. After multiple iterations, 11 items were dropped due to low communality, low inter-item correlations, or cross-loading issues [26]. I had conceptualized 13 factors, but there was no solid theoretical background to support this conceptualization. EFA extracted seven factors. Based on the items' face value, I named the factors as per Table 1, and these seven factors explained 53.003% of the variance.
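A minimal sketch of this extraction step, using factor_analyzer rather than SPSS and assuming the same `responses` DataFrame, is shown below; the 0.40 loading cut-off used to flag weak or cross-loading items is illustrative, not the criterion applied in the study.

```python
# Exploratory factor analysis with principal axis factoring, direct oblimin rotation,
# and seven extracted factors, mirroring the settings described in the text.
import pandas as pd
from factor_analyzer import FactorAnalyzer

fa = FactorAnalyzer(n_factors=7, method="principal", rotation="oblimin")
fa.fit(responses)                                   # responses: items retained after screening

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
variance, proportional, cumulative = fa.get_factor_variance()
print(f"Cumulative variance explained: {cumulative[-1]:.1%}")   # ~53% reported in the paper

# Flag items with no loading above the (illustrative) 0.40 threshold as removal candidates
weak_items = loadings[(loadings.abs() < 0.40).all(axis=1)].index.tolist()
```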
Discussion
I took a grounded theory approach while conducting these three interlocking studies. This paper makes an essential theoretical contribution to the field of legal studies. I explored the concept of court culture through the literature and field visits. The court culture scale was developed using a qualitative and quantitative approach. The scale contains seven factors: outcome orientation, manipulation, discipline, individualism & collectivism, work orientation, pride, and professionalism. Overall, the scale appears to be conceptually strong and psychometrically valid. Study 1 explored the construct of court culture by conducting in-depth, open-ended interviews of legal professionals. In study 2, items were generated, and study 3 assessed the scale's characteristics and factor structure.
The study certainly has some limitations. The data were collected from two Hindi-speaking states of India. It would be helpful to understand the validity of this scale across different cultural settings and geographical locations. Moreover, the data were collected from males only, and in future studies data should also be collected from female legal professionals.
The court culture scale will be helpful for understanding the culture of courts. This paper will allow legal researchers and court policymakers to study court culture systematically. Data about court culture can help improve court systems. | 2021-08-27T17:01:22.043Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "8c990d0df4de4639fe9f0034c7bb09b4a4cacfde",
"oa_license": "CCBYNC",
"oa_url": "https://psyjournals.ru/files/121207/psylaw_2021_n2_Kumar.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a65e241d4c2911db48881aa3498274e1d89b3025",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
233652975 | pes2o/s2orc | v3-fos-license | Study on the physicochemical characteristics and dust suppression performance of new type chemical dust suppressant for copper mine pavement
Copper mine road dust is the major source of dust in mine operations, and the dust produced on the road surface is a serious hazard to workers. Focusing on the road dust of an open-pit mine, this paper presents the physical and chemical analysis of a new type of chemical dust suppressant prepared using sodium polyacrylate as a binder, sodium carbonate as a moisture absorbent, polyethylene glycol as a water-retaining agent, and alkyl glycoside as a surfactant. The physical and chemical characteristics and the dust suppression performance of the dust suppressant were tested. The results show that the dust suppressant has a pH of 11.03, a viscosity of 18.5 mPa·s, and a surface tension of 28.1 mN/m. The content of heavy metal ions is below the maximum concentrations defined by "The norms for the integrated treatment of copper mine acidic waste water." At the same temperature, the greater the humidity, the stronger the hygroscopicity; notably, at 30% humidity the dust suppressant still absorbs moisture while water-treated dust releases it. The dust suppressant also has good anti-evaporation properties and maintains a moisture content of 4% to 5% after being kept at room temperature for 10 days. Compared with water, the dust suppressant performs better against wind erosion and water erosion and under compression: under the same conditions, the loss rate with water is 2 times that with the dust suppressant, and the pressure withstood by the dust suppressant sample is about 3 times that of the water sample. The dust suppressant also has a much higher dust removal efficiency for total dust and respirable dust than water under the same conditions. Finally, the test results and the dust suppression mechanism of the dust suppressant are described and analyzed, showing that the dust suppressant studied in this paper has good performance and is suitable for road dust prevention.
Introduction
Dust is an environmental hazard. Pneumoconiosis caused by dust seriously threatens the health of workers and reduces the production efficiency of enterprises (Cui et al. 2011;Chen equipment have no obvious effect on the prevention and control of road flour dust (Reeks et al. 2014;James et al. 2003), and currently, the novel and effective method is to adopt chemical dust suppressant for prevention and control (Zhang et al. 2018).
Many experts have carried out research on chemical dust suppressants. Goodrich et al. (2008) studied the performance of synthetic dust suppressants such as magnesium chloride and glycerin and explored their harm to plants. Medeiros et al. (2012) studied dust suppressants produced by glycerol oligomerization under acid and alkaline catalysts, thus obtaining a dust suppressant solution with high viscosity and a good dust suppression effect. Ge et al. (2006) used microwave irradiation to graft acrylic acid onto chitosan, so as to increase the reaction rate and prepare an effective superabsorbent resin. Stabnikov et al. (2013) conducted performance research on a dust suppressant with calcium chloride as the main component and concluded that its dust control rate was 99.802%. Omori et al. (1993) studied the performance of dust suppressants synthesized with functional acrylic resins. Gulia et al. (2019) studied the dust suppression effect of calcium magnesium acetate, magnesium chloride, and calcium chloride acting alone or in combination on roads. Chu et al. (2012), Stabnikov et al. (2013), and others studied the performance of chemically synthesized biopolymer dust suppressants. Sanders (1997) and Edvardsson et al. (2012) studied dust suppressants made from lignin derivatives and concluded that their inhibitory effect is excellent. Liu et al. (2000) studied the performance of an MPS dust suppressant which has the two mechanisms of wetting and adhesion. Han et al. (2020a, b) studied a dust suppressant by measuring the pore structure and fractal characteristics of different metamorphic coals. Liang et al. (2016) focused on the curing performance of chemical dust suppressants. Tan et al. (2005) studied dust inhibitors using soluble starch, sodium silicate, and glycerin as raw materials, which have good wetting, anti-grinding, and anti-evaporation properties. Han et al. (2020b) studied a dust suppressant through the effect of a surfactant and ionic liquid on the physical and chemical properties of acidified coal. Zhao (2005) studied a dust suppressant made of carboxymethyl starch, glycerin, sodium silicate, and sodium dodecylbenzene sulfonate; after spraying on the road surface, it makes the road surface flat and smooth, obviously reduces fine dust and pumice, improves the road surface quality, and enhances the anti-rolling performance. Wang et al. (2019a) studied the effect of a multi-nozzle atomization interference dust reduction system between hydraulic supports on a fully mechanized mining face. Li et al. (2019) and Ma et al. (2018) studied the performance of fire extinguishing adhesives in coal mines. Li et al. (2019) studied the dust control of wall-mounted cyclone fans. Wang et al. (2019b, c) established a mathematical model of SMD multivariate nonlinear prediction for X-shaped spinning nozzles to investigate the effect of dust suppression.
In general, most dust suppressants currently studied have their own applicable conditions, so they have certain limitations. Inorganic salts can improve the ability of water to wet dust, but they corrode metal equipment and cause soil salinization-alkalization. Superabsorbent resins enter the interior of the dust to moisten the dust source, thereby inhibiting the diffusion of dust, but they provide only a short-term effect and have poor resistance to wind. Therefore, it is necessary to develop a novel compound mine dust suppressant with high comprehensive benefits.
Therefore, in response to these shortcomings, a new type of compound chemical dust suppressant was developed, the main components of which are a binder, a moisture absorbent, a water-retaining agent, and a surfactant. The binder is sodium polyacrylate, which has a higher viscosity than polyacrylamide, sodium carboxymethyl cellulose, and hydroxypropyl methyl cellulose. Sodium carbonate is selected as the hygroscopic agent; the hygroscopic agent determines the anti-evaporation and hygroscopic properties of the dust suppressant, and because the dust in this study is weakly acidic, the alkaline material sodium carbonate, which has strong water retention, was chosen. The water-retaining agent is polyethylene glycol; among the commonly used moisturizing agents glycerol, ethylene glycol, and polyethylene glycol, polyethylene glycol is non-toxic and non-irritating and has better moisture retention than ethylene glycol. Alkyl glycoside was chosen as the surfactant because of its good solubility, resistance to strong alkalis and electrolytes, good compatibility with the skin, non-toxicity, non-irritation, and easy biodegradability; it can be stored for a longer time over a larger temperature range and also has a humidifying function.
In this study, the road dust of a surface copper mine is taken as the research object. First, a dust suppressant was prepared, and then its physical and chemical characterization was performed, including determination of pH, viscosity, surface tension, and environmental protection. The performance tests of the dust suppressant include hygroscopicity, anti-evaporation, wind erosion resistance, compression resistance, and corrosion resistance tests, as well as a scanning electron microscope experiment. Finally, through analysis of the dust suppression mechanism, the dust suppression effect of the dust suppressant is comprehensively evaluated.
Preparation of dust suppressant
According to the best formula of the dust suppressant obtained through the orthogonal experiment (Huang et al. 2021), 500 ml of dust suppressant was prepared for the subsequent tests of physical and chemical characteristics and dust suppression performance. The formula of the dust suppressant is as follows: the mass concentration of sodium polyacrylate is 0.08%, the mass concentration of sodium carbonate is 15%, the volume concentration of polyethylene glycol is 2%, and the volume concentration of alkyl glycosides is 0.15%.
Determination of physical and chemical index characterization of dust suppressant
In this section, the pH value, viscosity value, surface tension, and environmental protection of the dust suppressant are measured. The pH value and environmental protection determine whether the dust suppressant can meet the national industrial wastewater discharge standards, and the viscosity value and surface tension have an influence on the dust suppression effect of the dust suppressant.
Determination of pH
The pH meter was calibrated with prepared standard buffer solutions of pH 4 and pH 9.18, and then the pH value of the dust suppressant solution was measured with the pH meter. The experiment was carried out three times, and the average value was taken as the pH value of the dust suppressant.
Determination of viscosity value
The DV2T EXTRA viscometer was used to test the viscosity of the prepared solution. At a unit concentration, the higher the viscosity of the bonding factor, the better the bonding effect. The viscosity was measured three times and we took the average value as the viscosity value.
Determination of surface tension
The BZY-201 type surface tension meter was used to test the surface tension of the dust suppressant solution, testing three times and taking the average value as the surface tension value of the dust suppressant solution.
Environmental protection test
To ensure that the heavy metal content of the dust suppressant solution would not pollute the environment, heavy metal detection of the solution was required. The contents of selenium, chromium, cadmium, lead, and arsenic in the solution were measured with an ICP-MS plasma mass spectrometer to determine the heavy metal ion content.
Performance test of dust suppressant
In this section, various performance indicators of the dust suppressant are tested, and the dust suppression effect is simulated. Each performance indicator influences the dust suppression effect, and the simulated dust suppression test shows the final effect of the dust suppressant most directly.
Hygroscopicity test
In order to test the hygroscopic performance of the dust suppressant, this study compared the hygroscopic effect of the dust suppressant and tap water. Three samples were prepared for each solution, and the final result was averaged. The same amount of dust was added to 6 watch glasses; the dust suppressant was then added to the first three, and tap water was added to the last three so that all had the same moisture content. The dust used in this experiment was taken from a copper mine. It contains copper, iron, and aluminum, has a high proportion of small particles, and is weakly acidic (Huang et al. 2021). After the dropwise addition was completed, the dishes were dried in an oven at 105°C for 12 hours and then taken out. The hygroscopic test was conducted in a temperature and humidity test box. The experiments were conducted at 20%, 30%, 40%, 50%, 60%, and 70% humidity, with the temperature uniformly set to 25°C. At each humidity setting, samples absorbed moisture for 6 hours and were weighed once every hour to track how the moisture absorption changed with time, as illustrated in the sketch below.
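A minimal sketch of how the hourly weighings translate into a moisture-absorption curve is given below; the oven-dry mass and the hourly readings are hypothetical values, not measurements from the study.

```python
# Moisture content relative to the oven-dry dust mass; a negative value indicates
# that the sample is releasing rather than absorbing moisture.
def moisture_content(mass_wet: float, mass_dry: float) -> float:
    """Return moisture content (%) for a sample weighed in the humidity box."""
    return (mass_wet - mass_dry) / mass_dry * 100.0

# Example: a 30 g oven-dry sample weighed every hour for 6 h (hypothetical readings)
hourly_masses = [30.010, 30.021, 30.030, 30.037, 30.042, 30.045]
absorption_curve = [moisture_content(m, 30.0) for m in hourly_masses]  # % moisture per hour
```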
Evaporation resistance test
The dust samples were handled in the same way as in the high-temperature anti-evaporation experiments. First, 30 g of dust was weighed into a watch glass, and then the prepared dust suppressant solution was sprayed evenly on the dust surface at a spraying amount of 2 L/m2. The dust sample was then placed in a normal temperature environment, and the temperature and humidity of the air were monitored. The mass of the dust samples and the environmental data were recorded every day to verify the anti-evaporation performance of the dust suppressant.
Wind erosion resistance test
The erosion of wind is one of the important external forces that destroy road dust. In this experiment, wind speeds of 3-15 m/s were simulated by a fan to verify the anti-wind-erosion effect of the dust at different wind speeds after spraying the dust suppressant. Glass sheets of equal size were chosen, the same amount of dust was spread on the surface of each glass sheet, and the dust suppressant and tap water were sprayed on the dust surface at 2 L/m2, respectively. These samples were put in a drying cabinet at 105°C for 12 hours and weighed. Finally, the dust samples were exposed to wind at the different speeds for 30 minutes to calculate the final loss rate. To reduce errors, three experimental samples were prepared for each experiment and the final results were averaged. The greater the loss rate, the weaker the wind erosion resistance. The loss rate is calculated by formula (1), E = (w1 - w2)/w1 × 100%, where E is the loss rate (%), w1 is the mass of the dust sample before blowing (g), and w2 is the mass of the dust sample after blowing (g).
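The loss-rate calculation of Eq. (1) is straightforward to script; in the sketch below the "after" masses are illustrative values chosen to reproduce the two 3 m/s loss rates reported later in the results, not measured data.

```python
# Wind-erosion loss rate from Eq. (1): E = (w1 - w2) / w1 * 100.
def loss_rate(w1: float, w2: float) -> float:
    """Loss rate E (%) for sample mass w1 (g) before and w2 (g) after 30 min of wind."""
    return (w1 - w2) / w1 * 100.0

print(loss_rate(30.00, 29.9988))   # ~0.004%: suppressant-treated sample at 3 m/s
print(loss_rate(30.00, 29.70))     # ~1.0%: water-treated sample at 3 m/s
```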
Water corrosion resistance test
Under the erosion of rainwater, road dust loses its soil structure, producing small-particle dust that is unconstrained and easily dispersed in the air. To test the water erosion resistance of the dust suppressant, 30 g of dust was first placed in a watch glass, the dust suppressant and tap water were sprayed uniformly on the dust surface at a spray amount of 2 L/m2, and the samples were then placed in a drying cabinet at 105°C for 12 hours. Each sample was weighed after drying and then soaked in water so that the water submerged it; after 10 minutes of soaking it was taken out, dried in the cabinet again until the water had evaporated, and weighed to calculate the loss rate after the first water erosion. The experiment was repeated by soaking the previously dried dust sample in water again, taking it out after 10 minutes, drying, and weighing. This was repeated 8 times to obtain the dust loss rate of the sample under repeated erosion by tap water.
Compression test
The rolling of vehicles is also one of the external factors that destroy road dust. The stronger the compression resistance of the dust, the less likely it is to be broken up by vehicles, and the less small-particle dust is generated and able to spread. The purpose of this experiment is to verify whether dust sprayed with the dust suppressant has sufficient compressive strength to resist the rolling of a vehicle without losing its dust suppression capacity. First, 30 g of dust was taken and sprayed evenly on the surface with the dust suppressant or tap water at 2 L/m2, and then placed in a drying cabinet at 60°C for 12 hours. A blade was used to remove the dust sample from the watch glass, a block of roughly equal area was cut out, and the pressure it could withstand was measured using a manometer.
Corrosion test
The corrosiveness of the dust suppressant toward metal materials was measured with a salt spray corrosion test box. Tap water and the dust suppressant solution were respectively introduced into the salt spray corrosion test box for corrosion detection. In this experiment, 6 carbon steel plates conforming to the ISO 3574 standard metal corrosion test piece were used; the sample size was 40 × 13 × 2 mm. The carbon steel sheets were immersed in alcohol for cleaning, then taken out, blow-dried, and weighed. Finally, the samples were put into the corrosion test box and the experiment was conducted for 10 hours. After the experiment, each sample was taken out and immersed in a 20% aqueous solution of diammonium citrate; after 10 minutes it was taken out, washed with ethanol, dried, and weighed. The corrosion rate was obtained from the exposed area and density of the sample using formula (2), v = (m0 - m1)/(S × T × ρ), where m0 is the initial mass of the sample (mg), m1 is the end mass of the sample (mg), S is the surface area of the sample (cm2), T is the experimental time (h), and ρ is the density of the sample. In this study, a comparative experiment between the dust suppressant and tap water was carried out. Each group of experiments measured three samples, and the average value was taken as the corrosion rate. In this study, the test piece S = 0.001252 m2, ρ = 7.84 g/cm3, and T = 10 h.
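Equation (2) can be scripted as below. Note that the functional form v = (m0 - m1)/(S × T × ρ) is reconstructed from the variable definitions in the text, and the example mass loss is hypothetical; the constants S, T, and ρ are the values given above.

```python
# Corrosion rate from the mass loss of a carbon steel coupon, per the symbols defined in the text.
def corrosion_rate(m0_mg: float, m1_mg: float, area_cm2: float, hours: float, rho_g_cm3: float) -> float:
    """Mass loss normalized by surface area, exposure time and material density."""
    return (m0_mg - m1_mg) / (area_cm2 * hours * rho_g_cm3)

S = 12.52     # 0.001252 m^2 expressed in cm^2
T = 10.0      # exposure time, h
rho = 7.84    # carbon steel density, g/cm^3
print(corrosion_rate(8153.0, 8151.6, S, T, rho))   # hypothetical 1.4 mg loss over 10 h
```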
Test results and analysis of physical and chemical indicators
pH test results
The pH of the solution was 11.03, which meets the requirements of pH between 7 and 12 in the China National Industrial Wastewater Discharge Standard. Therefore, the pH of the dust suppressant solution complies with national standards and will not cause a serious impact on machinery or the environment, etc.
Measurement result of viscosity value
The average of the three measurements of the solution's viscosity was 18.5 mPa·s. At this viscosity, the solution can play a cohesive role, gathering dust particles and increasing the particle size, without the penetration rate of the solution being slowed by excessive viscosity. Thus, it can exert a good dust suppression effect.
Surface tension measurement results
The experimental results show that the average surface tension of the solution was 28.1 mN/m. The smaller the surface tension, the stronger the permeability of the solution, which accelerates the wetting speed of the dust. Compared with the orthogonal experiment results, the surface tension of the developed dust suppression solution achieves the effect of quickly wetting the dust, enabling the dust to be captured faster and suppressing its flight.
Environmental test results
The environmental protection test results were as follows: Cr < 1.5 mg/L, As < 0.63 mg/L, Se < 0.01 mg/L, Cd < 0.1 mg/L, and Pb < 1 mg/L, which complies with the China National Industrial Wastewater Discharge Standard Requirements. Therefore, the dust suppressant can be discharged normally.
Performance test results and analysis
Test results of moisture absorption
The experimental results are shown in Fig. 1. It can be seen from Fig. 1 that when the humidity is 20%, both the dust suppressant dust sample and the tap water dust sample are in a moisture-releasing state, indicating that 20% humidity is too low and promotes evaporation of moisture under both the dust suppressant and tap water conditions. When the humidity is 30%, after 6 hours the dust sample sprayed with the dust suppressant maintains a certain moisture absorption rate, while the dust sample sprayed with tap water is still releasing moisture; under this humidity condition the dust suppressant therefore exerts its own hygroscopic function. With increasing humidity, both the dust suppressant dust sample and the tap water dust sample show better and better moisture absorption. The higher the humidity, the greater the moisture absorption rate, and the absorption is first fast and then slow. When the humidity is 40%, the final moisture content of the dust sample sprayed with the dust suppressant is 0.06%, and that of the sample sprayed with tap water is 0.03%; when the humidity is 50%, the final moisture content with the dust suppressant is 0.08% and with tap water is 0.04%; when the humidity is 60%, the final moisture content with the dust suppressant is 0.15% and with tap water is 0.06%; when the humidity is 70%, the final moisture content with the dust suppressant is 0.3% and with tap water is 0.1%. Therefore, the hygroscopic effect of the dust suppressant is significantly better than that of tap water. The dust suppressant solution can effectively absorb moisture from the air, increase the moisture content of the dust, and achieve a good dust suppression effect.
Evaporation test results
The experimental results are shown in Figs. 2 and 3.
It can be seen from Figs. 2 and 3 that the ambient temperature is basically maintained at about 25°C, and the rate of decrease of the dust sample's moisture content changes with humidity: when the humidity is high, the moisture content decreases slowly, and when the humidity is low, it decreases rapidly. In contrast, the moisture content of the tap water dust sample is not noticeably affected by changes in humidity; after it falls to almost zero on the second day, it no longer changes significantly, and essentially no dust suppression effect is achieved. The dust sample under the dust suppressant still maintains a moisture content of about 5% on the 10th day and retains its dust suppression effect, indicating that the dust suppressant has played a significant role.
Dust samples processed in the same way were put into the drying cabinet at 30°C, 40°C, 50°C, 60°C, and 70°C, evaporating continuously for 6 hours, and weighed; the results are shown in Fig. 4. It can be seen from Fig. 4 that the dust sample under the dust suppressant has a significant anti-evaporation effect compared with tap water. As the temperature increased, the moisture content of the dust samples showed a downward trend, but the moisture content of the dust sample under tap water dropped rapidly. At 30°C, the moisture content of the tap water sample was basically 0 after 4-5 hours; at 40°C, it stopped changing after 4 hours; at 50°C, it was basically zero after 3 hours; and at 60°C and 70°C, it did not change further once the moisture content reached zero after 2 hours. For the dust samples under the dust suppressant, after 6 hours the final moisture content decreased with rising temperature, i.e., from 30°C to 70°C, the moisture contents were 9%, 7%, 6%, 5%, and 3%, respectively. It can be seen that even at higher temperatures, the dust suppressant can still keep the dust at a certain humidity.
Wind erosion resistance test results
The experimental results are shown in Table 1. From Table 1, the dust sample sprayed with the dust suppressant has a loss rate of only 0.004% at a wind speed of 3 m/s, while the loss rate with tap water at this wind speed is 1%. With increasing wind speed, the loss rate of the dust sprayed with the suppressant increases slowly, while the loss rate of the dust sprayed with tap water increases markedly. When the wind speed increases to 12 m/s, the loss rate with tap water reaches 38.9%, while the loss rate with the dust suppressant is only 0.01%; thus, the loss rate of dust samples sprayed with tap water is much greater than that of samples sprayed with the dust suppressant. When the wind speed is 15 m/s, the loss rate of the dust sample sprayed with the dust suppressant becomes larger because the wind is fast enough to separate some small particles from the dust body and blow them away, so the loss rate increases. Even so, the loss rate of dust samples sprayed with the dust suppressant remains far lower than that of samples sprayed with tap water. Therefore, the dust suppressant exhibits very good resistance to wind erosion.
Water corrosion resistance test results
The experimental results are shown in Fig. 5.
It can be seen from Fig. 5 that the loss rate of the dust sample sprayed with the dust suppressant increased relatively slowly, whereas the dust sample sprayed with tap water began to rise rapidly after the third experiment. After 8 experiments, the loss rate of the dust sample sprayed with the dust suppressant was less than 5%, while the loss rate of the tap water sample showed rapid growth that accelerated in the later period, reaching a final loss rate as high as 27%. Therefore, the dust suppressant has very good water corrosion resistance.
Compression resistance test results
The result of the experiment is that the dust sample sprayed with the dust suppressant withstands a pressure of 275 kPa, while the dust sample sprayed with tap water withstands 73.6 kPa. Therefore, compared with dust samples under tap water, the dust suppressant provides good compression resistance, and the treated dust can withstand the rolling of heavily loaded transport vehicles without the consolidated surface being damaged.
Corrosion test results
The experimental results are shown in Table 2.
Research on dust suppression mechanism of dust suppressant
The results of the scanning electron microscopy are shown in Fig. 6. The experimental results of the dust suppression effect of the dust suppressant and the scanning electron microscope results show that the dust suppressant has a good dust removal and dust reduction effect (Huang et al. 2021). The results of the experiment are as follows: the dust removal efficiency of the dust suppressant for total dust is 97.62%, the dust removal efficiency of water for total dust is 42.06%, the dust removal efficiency of the dust suppressant for respirable dust is 88.97%, and the dust removal efficiency of water for respirable dust is 48.53%. For both kinds of dust, the dust removal efficiency of the dust suppressant solution is much higher than that of water. The scanning electron microscope results show that spraying the dust suppressant has a good consolidation effect, forming a compact structure on the dust surface that can effectively resist wind erosion and does not easily generate secondary dust. In contrast, the dust surface after spraying tap water contains many dispersed small-size dust particles, which are easily separated from the dust body under external force and dispersed into the air, generating dust. The dust suppressant studied in this paper is composed of a binder, a moisture absorbent, a water-retaining agent, and a surfactant; their respective dust suppression mechanisms have an important influence on the choice of the dust suppressant formulation and on its effect.
Dust suppression mechanism of binder sodium polyacrylate
The dust suppression mechanism of the binder sodium polyacrylate is mainly reflected in its thickening effect. There are two main thickening mechanisms: neutralization thickening and hydrogen bond thickening. In neutralization thickening, the electrostatic repulsion between like-charged carboxylate ions stretches the molecular chain from a coil into a rod shape, thereby increasing the viscosity of the water phase. In hydrogen bond thickening, polyacrylic acid combines with water molecules to form hydrated molecules, and hydroxyl groups form hydrogen bonds with polyacrylic acid; these hydrogen bonds cause the molecular chains of polyacrylic acid to unwind in water and form a network structure, so the viscosity is increased, as shown in Fig. 7. The greater the viscosity, the more easily the solution sticks dust particles together, reducing the amount of small-particle dust and the possibility of its being dispersed into the air under external forces.
Dust suppression mechanism of hygroscopic agent sodium carbonate
When sodium carbonate is exposed to the air for a long time and absorbs moisture from the air, it reacts with carbon dioxide to form sodium bicarbonate, as shown in Fig. 8. The sodium bicarbonate generated by the reaction forms hard lumps that prevent the evaporation of water and cover the surface of the dust, providing a good water retention effect, so that the moisture absorption rate of the dust suppressant is increased. Finally, the dust can maintain a certain moisture content and is not easily damaged by external forces.
Dust suppression mechanism of water retaining agent polyethylene glycol
The molecular structure of polyethylene glycol contains a large number of hydroxyl groups. The hydroxyl group is hydrophilic, and the hydrogen bond it forms with water molecules is a strong intermolecular force. In addition, the hydroxyl group has a large polarity and readily combines with water, which has a large dielectric constant, and therefore retains water. When the external humidity is low, it further absorbs water to achieve the water retention effect, as shown in Fig. 9.
Dust suppression mechanism of surfactant alkyl glycosides
Alkyl glycoside molecules have a hydrophilic group and a lipophilic group. Due to the repulsion between the lipophilic group and water molecules, and in order to reach the lowest-energy state, the surfactant molecules orient themselves with the hydrophilic group downwards and the lipophilic group upwards, arranging in this form on the surface of the aqueous solution, as shown in Fig. 10. The hydrophilic groups are attracted downward by water molecules, but this attraction is weaker than the attraction between water molecules because the polarity of the hydrophilic groups is weaker than that of water molecules. The lipophilic groups are attracted upward by air molecules, and this attraction is stronger than that between air molecules and water molecules; the relatively large volume of the lipophilic group allows it to contact more air molecules, while its weaker polarity lets it integrate better with air molecules and attract them. Therefore, the downward force decreases and the upward force increases, the imbalance of forces is reduced, and the surface tension is lowered. At the same time, with surfactant molecules at the surface of the aqueous solution, the attraction between water molecules there is weakened, which also allows the surfactant to reduce the surface tension. The smaller the surface tension, the smaller the contraction force, the more easily the solution spreads over a surface, and the more easily the dust suppressant solution wets the dust.
Conclusions
(1) Through the analysis of the physical and chemical properties, the viscosity value of the dust suppressant is 18.5 mPa·s, which is 6.2 times that of water, and the surface tension is 28.1 mN/m, which is only 1/2.6 of water. The pH value and the content of heavy metal ions contained in the dust suppressant both meet the requirements of the National Industrial Wastewater Discharge Standard. Therefore, the dust suppressant can be discharged normally.
(2) The dust suppressant has good hygroscopicity and antievaporation performance. At the same temperature, the greater the humidity, the stronger the hygroscopicity. When the humidity is 30%, it shows an obviously better hygroscopic effect than water. This further proves the hygroscopic performance of the dust suppressant. The dust suppressant also has good anti-evaporation properties and maintains a moisture content of 4% to 5% after 10 days at room temperature and has the ability to suppress dust.
(3) The dust suppressant has good wind erosion resistance, water erosion resistance, compression resistance, and corrosion resistance. Under the same conditions, the loss rate with water is much greater than that with the dust suppressant, and the pressure withstood by the dust sample sprayed with the dust suppressant is about 3 times that of the tap water sample. In the water corrosion resistance test, compared with the 27% loss rate of tap water, the loss rate with the dust suppressant is only 5%.
(4) Both the SEM experiment and the dust suppression effect experiment show that the dust suppressant has a good dust suppression effect. Research on the dust suppression mechanism of each component shows that the binder increases the viscosity of the dust, the moisture absorbent and the water-retaining agent increase the water absorption rate so that the moistened dust is not easily damaged by external forces, and the surfactant alkyl glycoside reduces the surface tension, making it easier for the dust suppressant to wet the dust.
Authors' contributions The authors' individual contributions to the paper are as follows: Zhian Huang was responsible for mechanism analysis and preparation of the manuscript. Yang Huang was responsible for the performance tests and analysis of the dust results. Zhijun Yang was responsible for collection and processing of the experimental dust samples. Jun Zhang and Yinghua Zhang were responsible for testing the physical and chemical indicators. Yukun Gao and Zhenlu Shao were responsible for analysis of the physical and chemical indicator results. Linghua Zhang was responsible for the corrosion test and analysis of its results. | 2021-05-05T00:08:11.208Z | 2021-03-25T00:00:00.000 | {
"year": 2021,
"sha1": "4b9c465f201ec2227dbae112cd74adc73f5415a7",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-162783/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "370e3e0db50b5c64ca8841f3b9322af8eb1b484a",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
73727155 | pes2o/s2orc | v3-fos-license | Analysis of cell-free circulating tumor DNA in 419 patients with glioblastoma and other primary brain tumors
Aim: Genomically matched trials in primary brain tumors (PBTs) require recent tumor sequencing. We evaluated whether circulating tumor DNA (ctDNA) could facilitate genomic interrogation in these patients. Methods: Data from 419 PBT patients tested clinically with a ctDNA NGS panel at a CLIA-certified laboratory were analyzed. Results: A total of 211 patients (50%) had ≥1 somatic alteration detected. Detection was highest in meningioma (59%) and glioblastoma (55%). Single nucleotide variants were detected in 61 genes, with amplifications detected in ERBB2, MET, EGFR and others. Conclusion: Contrary to previous studies with very low yields, we found that half of PBT patients had detectable ctDNA, with genomically targetable off-label or clinical trial options for almost 50%. For those PBT patients with detectable ctDNA, plasma cfDNA genomic analysis is a clinically viable option for identifying genomically driven therapy options.
Glioblastoma multiforme (GBM), a type of glioma, is the most aggressive type of primary brain tumor (PBT), with limited therapy options and a median survival of 12-15 months [1]. Comprehensive molecular profiling of PBTs can inform more detailed biological classification beyond traditional histopathology [2,3]. Development of therapies directed at molecular targets in gliomas and other PBTs is underway and holds promise as an improvement over current standard therapies [3][4][5]. However, trials of genomically matched therapies for brain tumors require next-generation sequencing (NGS) of a recent tissue sample, thus limiting progress; tissue requirements also limit the ability to identify and track mutation clonality and clonal evolution of tumors [6][7][8] and may miss important heterogeneous genomic events [9]. Additionally, recurrent glioblastomas are rapidly growing tumors, and obtaining a biopsy in order to complete molecular profiling is a time-consuming step.
Genomic profiling utilizing tissue samples obtained from invasive biopsy may not always be clinically feasible and is not without risk of morbidity or mortality [10,11]. Additionally, tissue biopsies may be found to have insufficient quantity or quality of material for NGS profiling. Even when tissue sampling is feasible and sufficient for genomic analysis, tissue-based NGS may fail to capture a complete picture of the cancer's genetic profile due to intra-and inter-tumor heterogeneity [8,[12][13][14][15].
Recently, assays analyzing cell-free DNA (cfDNA) have become commercially available. These tests present an opportunity to genomically profile patients' tumors through a plasma sample without the need for an invasive tissue biopsy. cfDNA contains fragments of circulating tumor DNA (ctDNA) released into circulation through apoptosis and/or active DNA release [16,17]. Given the short 2-h half-life of plasma cfDNA fragments in circulation and the ability to capture heterogeneity across multiple areas of a tumor, this technology provides an opportunity to assess cancer genomic signatures in real time [18][19][20].
A prior study of plasma ctDNA yield across a variety of solid tumor types identified ctDNA alterations in less than 10% of patients with glioma [21]. The authors hypothesized that the blood-brain barrier is a physical obstacle preventing ctDNA from reaching peripheral circulation, suggesting limited clinical utility of such technology in this cancer type. A recent study utilizing a comprehensive ctDNA analysis yielded a 51% cfDNA detection rate in patients with advanced primary glioblastoma [22] suggesting that ctDNA detection rate in primary brain tumors may vary by assay performance and/or histopathology and grade. We sought to evaluate the ability of a highly sensitive and specific cfDNA NGS assay to identify genomic alterations in patients with GBM and other PBTs, to further characterize ctDNA yield by histopathologic features, and to begin to explore the spectrum of genomic alterations identified in cfDNA in this clinically tested patient population.
Patients & methods
From October 2014 through to December 2017, 665 samples from 419 consecutive patients with PBTs had clinical samples tested in real time with the Guardant360 R cfDNA digital sequencing (NGS) assay (Guardant Health, CA, USA); whole blood was collected in Streck tubes, sent to the laboratory and processed as previously described [22][23][24]. Cases were retrospectively identified via query of the Guardant360 de-identified database of clinical orders for patients with a diagnosis of GBM or other PBTs as indicated on the test request form completed by the ordering provider. 93 patients had more than one cfDNA test result available, as multiple blood draws were performed for tests ordered clinically at multiple timepoints. Analysis was completed under a Quorum Review Institutional Review Board protocol for deidentified and limited datasets which waived the need for individual patient informed consent.
The Guardant360 assay is a laboratory test commercially available for all advanced solid tumors; therefore, the genes interrogated by this assay were not specifically selected with primary brain cancers in mind but rather encompass genomic alterations commonly observed across the spectrum of advanced cancer. The assay composition was expanded over the course of the study. Sixty-five samples were analyzed with the original 54-gene version, including comprehensive sequencing analysis of all exons in 18 genes, critical exon (those known to harbor somatic mutations) sequencing analysis of 36 genes and copy number amplification (CNA) analysis of three genes (EGFR, ERBB2, MET). An additional 199 samples were evaluated with an expanded 68-gene panel, 219 with a further expanded 70-gene panel and 182 with a 73-gene panel, each including additional exons sequenced, CNAs and select fusion events assessed (Supplementary Figures 1-4). Of note, reported alterations include only those which can be assessed through NGS of fragmented cfDNA; for example, the EGFR vIII variant, large deletions (including the 1p/19q codeletion) and epigenetic alterations (including MGMT methylation) were not detectable in any of these assay versions.
Results
The average patient age at the time of first blood collection was 52 years (range 3-88) and 62% were male. Histopathological subtypes included GBM, astrocytoma, oligoastrocytoma, oligodendroglioma, glioma (not otherwise specified [NOS]), medulloblastoma, meningioma and ependymoma (Table 1), with GBM being the most commonly reported diagnosis in the cohort (53%). Tumor types were classified for this study in accordance with the 2016 World Health Organization Classification of Tumors of the Central Nervous System [25].
Overall, somatic alterations were detected in 302 samples (45.4%). When accounting for serial testing, somatic alterations were detected in at least one sample per unique patient in 211 patients (50.4%). Of samples with at least one alteration, the median VAF was 0.33% (range 0.05-41.01%) with an average of 2.14 (range 1-29) alterations identified per sample (Table 2) [22].
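The per-sample and per-patient detection rates quoted here differ because some patients had serial draws. A minimal sketch of how such figures can be computed from a de-identified results table is shown below; the record layout and values are hypothetical illustrations, not the study's actual data.

```python
# Minimal sketch (hypothetical records): per-sample vs per-patient cfDNA
# detection rates and the median VAF when some patients have serial draws.
from statistics import median

# Each record: (patient_id, list of variant allele fractions detected in that sample)
samples = [
    ("pt01", [0.33, 1.20]),  # two alterations detected
    ("pt01", []),            # serial draw, nothing detected
    ("pt02", [0.05]),
    ("pt03", []),
]

sample_rate = sum(1 for _, vafs in samples if vafs) / len(samples)

detected_by_patient = {}
for pid, vafs in samples:
    detected_by_patient[pid] = detected_by_patient.get(pid, False) or bool(vafs)
patient_rate = sum(detected_by_patient.values()) / len(detected_by_patient)

all_vafs = [v for _, vafs in samples for v in vafs]
print(f"per-sample detection rate:  {sample_rate:.1%}")
print(f"per-patient detection rate: {patient_rate:.1%}")
print(f"median VAF of detected alterations: {median(all_vafs):.2f}%")
```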
Multiple alterations potentially relevant to therapeutic targets were identified in several of these commonly altered genes. Recurrent characterized point mutations were detected in IDH1 (R132H/C/S/G), all identified within the AOT subgroup. The identified characterized BRAF point mutations included V600E, an activating mutation common in melanoma and other cancer types (observed in two patients); N581S, an activating mutation in the protein kinase domain (observed in two patients); Q257R, an activating mutation in the cysteine-rich domain of conserved region 1; and R354*, an inactivating mutation predicted to result in loss of the protein kinase domain. All BRAF alterations were observed within the AOT subgroup as well.
Characterized BRCA1 mutations identified were Q380* in a patient with GBM and R1835* in a patient with glioma NOS, expected to result in loss of both BRCT domains and a portion of the C-terminal BRCT domain, respectively. Both of these alterations were observed at VAFs <1%, consistent with somatic, rather than germline, origin. EGFR characterized point mutations detected were E142*, S177*, A289V (observed in two patients), R309* and R831H; these mutations occur in multiple domains including extracellular and protein kinase domains, and include both activating and inactivating mutations. These characterized EGFR mutations were observed primarily, but not exclusively, in the AOT subgroup (one inactivating mutation in a patient with meningioma NOS). Characterized point mutations in ATM in this cohort included K342* in a patient with meningioma NOS, R3008H (observed in two patients with GBM), and R3012* in a patient with meningioma NOS, all inactivating mutations. Common activating mutations in NRAS were observed in the AOT subgroup only, including G12D, G13R, Q61K and Q61R. Point mutations were identified throughout the TP53 gene across the cohort, including patients with AOT, meningioma and medulloblastoma; inactivating mutations were detected in nonrecurrent locations in the NF1 gene, primarily in AOT but also in meningioma (Figure 4) [26]. Characterized TP53 mutations were most commonly observed in patients with Grade 4 tumors (n = 56, 5, 2, 2 and 18 in Grades 4, 3, 2, 1 and unknown, respectively); this may be related to inherent biology or the increased detection rate observed in higher grade tumors, or some combination of the two.
Among patients with alterations detected, almost 50% (n = 101) had a potentially therapeutically targetable genomic alteration identified; 53 (25%) had an off-label treatment option and 98 (46%) had clinical trial options identified based on the genomic alterations observed, annotated in accordance with published guidelines [27].
Discussion
Contrary to other cfDNA studies which postulated that ctDNA would not cross the blood-brain barrier to reach systemic circulation, we found that half of the patients with primary brain tumors had detectable cfDNA alterations, with 48.9% of these having a potentially genomically targetable alteration identified. Among patients with GBM, who comprised just over half of this cohort, ctDNA alterations were detected 55% of the time. This suggests that cfDNA analysis for GBM genomic profiling may be appropriate to consider prior to an invasive biopsy (performed solely to obtain tissue for genomic testing) and in patients for whom an invasive biopsy is not feasible or who decline. Alterations were detected even more frequently in patients with meningioma, which is consistent with the fact that meningiomas, unlike other subtypes of primary brain cancer, are not shielded by the blood-brain barrier [28].
With a median VAF of 0.33% and a minimum VAF of 0.05% in this cohort, this study underscores the importance of utilizing a cfDNA assay with high sensitivity for detection of low-level alterations. As seen in Table 2, the number of alterations and cfDNA VAF were both lower in this primary brain tumor cohort compared with a cohort of all solid tumors undergoing this cfDNA assay. The mechanisms that influence the release of tumor DNA into the bloodstream are not entirely understood, and it is possible that the blood-brain barrier may limit the amount of ctDNA able to enter peripheral circulation from a primary brain tumor. The low VAFs observed in this study suggest that technical assay performance is of particular importance when selecting a commercial cfDNA platform for clinical use in this patient population in order to increase the likelihood of identifying these low-level alterations.
This study demonstrates a higher ctDNA alteration yield in patients with primary brain tumors than previously reported. Additionally, one quarter of samples had a ctDNA alteration detected that suggested eligibility for an off-label targeted therapy regimen. Almost half of patients had a ctDNA alteration detected that suggested eligibility for a targeted therapy clinical trial. This study suggests that the identification of genomic alterations in the cfDNA of patients with primary brain tumors is feasible. This is promising for the continued development and execution of clinical trials of targeted therapies in this patient population, as the ease, convenience and safety of plasma cfDNA sampling has the potential to make genomic profiling a possibility when tissue is unavailable or unobtainable in the setting of advanced PBT. Some of the alterations identified in this patient cohort do show potential for molecular targeted therapeutics, including BRAF/IDH1/IDH2 mutations, ERBB2/MET/EGFR/PDGFRA amplifications and mutations in DNA damage repair genes. For example, at the time of submission, trials using targeted therapies related to genes and pathways described in detail above (e.g., inhibition of RAF/MEK, EGFR and PARP, among others) were available in PBTs. The option to detect these and other genomic alterations through cfDNA analysis may improve access to clinical trials investigating the use of these agents in the setting of primary brain tumors.
As described above, the exploratory analysis presented here utilizes data from an assay commercially available across solid tumor types. Therefore, the yield of clinically relevant genomic alterations using a liquid biopsy approach could plausibly be even higher with an assay specifically designed with PBTs in mind. However, this may introduce practical challenges, for example, the difficulty of implementing parallel epigenomic and RNA-based methodologies to assess methylation and splice variants, respectively. Additionally, the evolution of personalized medicine has seen multiple pan-cancer approvals for drugs targeting specific biomarkers (e.g., pembrolizumab for MSI-high tumors, larotrectinib for tumors with NTRK fusions) and continued success applying targeted therapies from one cancer type to another (e.g., anti-HER2 therapy common in breast cancer showing efficacy in colorectal cancer, BRAF/MEK inhibition common in melanoma showing efficacy in lung adenocarcinoma). Trends such as these may support a broader, less PBT-specific approach to include identification of potential basket or umbrella drug trial targets. There has also been promising work done assessing cfDNA from cerebrospinal fluid [29,30], though this sample collection is still more invasive compared with a peripheral blood draw. Future studies investigating ideal liquid biopsy assay composition and sample type may be warranted to further explore these questions [20].
It is important to note an underlying limitation of this study. As the cohort was based on samples submitted to a commercial laboratory, clinical information (including pathologic confirmation of diagnosis, or timing of cfDNA collection in relation to therapy regimen) was not available for all patients. Sample collection may have occurred at various clinical time points (e.g., baseline vs stable disease vs progression) which may have affected ctDNA alteration detection rates and VAF. The likelihood of identifying genomic alterations shed by the tumor in plasma cfDNA is highest prior to treatment and at times of progressive disease, rather than when patients are clinically stable or in active treatment when ctDNA release into the blood is suppressed. However, these clinical details are not available for this cohort from a commercial laboratory, as this information is not required for clinical testing.
This preliminary analysis was intended to focus on the overall detection rate of ctDNA in patients with PBTs using an available retrospective dataset, and a breakdown by specific molecular alterations would result in numbers too small to draw meaningful formal correlative conclusions in this preliminary descriptive analysis. An in-depth exploration of the specific alteration landscape would be best conducted in a cohort with samples collected at consistent and clinically appropriate timepoints (baseline active disease and/or progression) to maximize the likelihood of capturing the tumors' genomic signatures through cfDNA. However, the preliminary spectrum of mutated genes in this cfDNA cohort is similar to that of published data from The Cancer Genome Atlas (TCGA) genomic analysis of tissue, including TP53, NF1, IDH1 and EGFR [31,32].
As this data is from clinical cfDNA analysis performed by a commercial laboratory that does not require detailed clinical data to order testing, genomic profiling results of corresponding tumor tissue for patients who may have had this analysis were not available for comparison in this patient cohort. Any potential discordance may be due to the disease stage, treatment history and clinical status of the patients in the current cfDNA cohort. TCGA recruited patients without any prior therapies, while the current cohort enrolled patients who may have been treatment-naive or previously treated. It is known that the spectrum of mutations observed in treatment-naive versus previously treated tumors differs due to tumor evolution following treatment. Other discrepancies in the results of the two cohorts may be related to sequencing coverage of the cfDNA assay (Supplementary Figures 1-4). For example, the cfDNA assay cannot assess for large deletions, including EGFR vIII, and the detection of amplifications in cfDNA analysis is dependent on the level of ctDNA shed being high enough to distinguish CNAs from the vast quantities of germline cfDNA with normal copy number. Additionally, due to the ability of the cfDNA test to capture genomic heterogeneity across disease burden, discordance may be due to detection of alterations that were not observed in tumor tissue testing from a single site biopsy.
A future study of tissue-plasma alteration concordance in which paired samples are collected contemporaneously at clinically relevant timepoints per published concordance study criteria [33] would be valuable, though it would perhaps be limited by the clinical feasibility of collecting tumor tissue at the time of advanced stage disease when plasma cfDNA analysis is clinically indicated. The cfDNA assay utilized in this study attempts to report only alterations of somatic origin. However, discrimination between alterations of germline versus somatic origin becomes challenging in cases with high tumor burden and/or chromosomal instability [34]. It is also not possible to rule out hematopoietic origin of alterations through sequencing of cfDNA alone [35], and some alterations, like JAK2 V617F, occur more frequently in myeloproliferative neoplasms than in solid tumors. Therefore, similar to tissue-only testing [36,37], tumor-derived origin of alterations identified by NGS of cfDNA cannot be confirmed with certainty.
Conclusion
We believe this is the first analysis to interrogate and present plasma ctDNA yield in a cohort of patients with primary brain cancers by histopathologic subtype. Our findings demonstrate a higher ctDNA detection rate than previously reported, particularly among some specific subtypes of primary brain tumors, and will hopefully reinvigorate future clinical research in this area to more deeply explore the role and potential of cfDNA analysis in PBTs. Additionally, cfDNA analysis results identified either a genomically targetable off-label or clinical trial option for almost 50% of samples with cfDNA alterations detected. These results demonstrate that while not all patients with primary brain cancers have detectable alterations by such testing, plasma cfDNA analysis is a viable and safe clinical option to obtain actionable somatic genomic information for some patients with primary brain cancers which may potentially guide clinical therapeutic decision-making.
Summary points
• Glioblastoma and other primary brain tumors (PBTs) can be aggressive with limited therapeutic options.
• They can be difficult to biopsy, limiting the ability to interrogate genomic alterations in the tumor.
• This has challenged the development of and enrollment into genomically matched clinical trials in PBT oncology.
• Cell-free circulating tumor DNA (ctDNA) has shown utility as a biopsy-free alternative for comprehensive genomic profiling in advanced solid tumors, though published small PBT cohorts have suggested low detection rates.
• To investigate ctDNA yield in PBTs, we analyzed the genomic results from over 400 patients with PBTs undergoing ctDNA NGS analysis with a highly sensitive and specific clinical assay.
• Genomic alterations in ctDNA, including single nucleotide variants and gene amplifications, were identified in half of these patients, a much higher yield than previously reported.
• Genomic alterations identified had matched off-label and clinical trial options for almost 50% of patients with detectable ctDNA.
• This study suggests promise in a biopsy-free option to interrogate genomic signatures and evolution in PBTs, which may provide an avenue to further progress in genomically matched clinical trials.
Supplementary data
To view the supplementary data that accompany this paper please visit the journal website at: www.futuremedicine.com/doi/full/10.2217/cns-2018-0015
CCG parsing with one syntactic structure per n-gram
There is an inherent redundancy in natural languages whereby certain common phrases (or n-grams) appear frequently in general sentences, each time with the same syntactic analysis. We explore the idea of exploiting this redundancy by pre-constructing the parse structures for these frequent n-grams. When parsing sentences in the future, the parser does not have to re-derive the parse structure for these n-grams when they occur. Instead, their pre-constructed analysis can be reused. By generating these pre-constructed databases over WSJ sections 02 to 21 and evaluating on section 00, a preliminary result of no significant change in either F-score or parse time was observed.
Introduction
Natural language parsing is the task of assigning syntactic structure to text. Initial parsing research mostly relied on manually constructed grammars. Statistical parsers have been able to achieve high accuracy since the creation of the Penn Treebank (Marcus et al., 1993); a corpus of Wall Street Journal text used for training. Statistical parsers are typically inefficient, parsing only a few sentences per second on standard hardware (Kaplan et al., 2004). There has been substantial progress on addressing this issue over the last few years. Clark and Curran (2004) presented a statistical CCG parser, C&C, which was an order of magnitude faster than those analysed in Kaplan et al. (2004). However the C&C parser is still limited to around 25 sentences per second. This paper investigates whether the speed of statistical parsers can be improved using a novel form of caching. Currently, parsers treat each sentence independently, despite the fact that some phrases are constantly reused. We propose to store analyses for common phrases, instead of re-computing their syntactic structure each time the parser encounters them.
Our first idea was to store a single, spanning analysis for frequent n-grams. However, the most frequent n-grams often did not form constituents. Given that n-gram distributions are very long-tailed, this meant that the constituent n-grams covered only a small percentage of n-grams in the corpus.
We then turned our attention to the n-grams that were not forming constituents. First, we found that some actually should form constituents. However, the structure of noun-phrases in the Penn Treebank is underspecified, leading to incorrect derivations in the C&C parser's training corpus (Hockenmaier, 2003). Secondly, we investigated whether the spurious ambiguity of CCG derivations could be exploited to force frequent n-grams to compose into constituents, while still producing a semantically equivalent derivation. Here we encountered problems using the composition rule to create new constituents. Some of these problems were due to further issues with the analyses in the corpus, while others were due to the ambiguity of the n-grams.
The sparsity of n-grams in a corpus of this size meant that we had very few caching candidates to work with. Our approach may be more successful when the caching process is performed using a larger data set.
Background
We are interested in storing the parse structure for common n-grams, so that the analysis can be reused across multiple sentences. In a way, this is an extension of an important innovation in parsing: the CKY chart parsing algorithm (Younger, 1967). Our proposal is an attempt to memoise sections of the chart across multiple sentences.
Most constituency parsers use some form of chart for constructing a derivation, so our investigation could have begun with a number of different parsers. We decided to use the C&C parser (Clark and Curran, 2007) for the following reasons. First, the aim of the caching we are proposing is to improve the speed of a parser. It makes sense to look at a parser that has already been optimised, to ensure that we do not demonstrate an improvement that could have been achieved using a much simpler solution. Secondly, there are aspects of the parser's grammar formalism, Combinatory Categorial Grammar, that are relevant to the issues we want to consider.
Chart Parsing
The chart is a triangular hierarchical structure used for storing the nodes in a parse tree, as seen in Figure 1. A chart for a sentence consisting of n tokens contains n(n+1)/2 cells, represented as squares in the figure. Each cell in the chart contains the parse of a contiguous span or sequence of tokens of the sentence. As such, a cell stores the root nodes of all possible parse trees for the tokens which the cell covers. This coverage is called the yield of that node. This is illustrated as the linked-list style data structure highlighted as being the contents of cell (1, 3) in Figure 1. The cell (p, s) in the chart contains all possible parses for all of the tokens in the range [p, p+s) for a given sentence. The chart is built from the bottom up, starting with constituents spanning a single token, and then increasing the span to cover more tokens, until the whole sentence is covered.
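The following is a minimal sketch of such a triangular chart, with cell (p, s) holding the analyses for the token range [p, p+s); it is an illustration of the data structure described above, not the C&C parser's actual implementation.

```python
# Minimal sketch of the triangular chart: cell (p, s) stores the analyses that
# span the s tokens starting at position p, i.e. the range [p, p + s).
class Chart:
    def __init__(self, n_tokens):
        self.n = n_tokens
        self.cells = {(p, s): []                       # n(n+1)/2 cells in total
                      for s in range(1, n_tokens + 1)
                      for p in range(0, n_tokens - s + 1)}

    def add(self, p, s, analysis):
        self.cells[(p, s)].append(analysis)

    def get(self, p, s):
        return self.cells[(p, s)]

chart = Chart(5)
chart.add(1, 3, "NP")                                  # an analysis covering tokens 1-3
assert len(chart.cells) == 5 * 6 // 2
print(chart.get(1, 3))
```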
Combinatory Categorial Grammar (CCG) (Steedman, 2000) is a lexicalised grammar formalism. This means that each word in a sentence is assigned a composite object that reflects its function in the derivation. In CCG, these objects are called lexical categories.
Categories can be built recursively from atomic categories. Recursive construction of categories means that very few atomic units need to be used. For instance, there is no atomic category for a determiner in CCG. Instead, a determiner is a function which maps from a noun to a noun phrase. Similarly, verbs are functions from some set of arguments to a complete sentence. For example, the transitive verb like would be assigned the category (S\NP)/NP. Here, the slashes indicate the directionality of arguments, stating that an NP object is expected to the right, and an NP subject is expected to the left. An example CCG derivation containing the transitive verb like uses the rules of forward and backward application to build the representation of the sentence; most of the information is contained in the lexical categories.
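A toy encoding of these categories and of the forward and backward application rules is sketched below; it ignores features and the other combinators, and is only meant to make the category notation concrete.

```python
# Toy CCG categories: atomic categories are strings, complex categories are
# (result, slash, argument) triples, e.g. (S\NP)/NP for a transitive verb.
def forward_apply(left, right):
    # X/Y  Y  =>  X
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_apply(left, right):
    # Y  X\Y  =>  X
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

NP = "NP"
like = (("S", "\\", NP), "/", NP)       # (S\NP)/NP
vp = forward_apply(like, NP)            # verb + object  ->  S\NP
s = backward_apply(NP, vp)              # subject + verb phrase  ->  S
print(vp, s)
```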
The C&C Parser
The C&C parser makes use of this property of CCG by dividing the parsing problem into two phases, following (Bangalore and Joshi, 1999). First, a supertagger proposes a set of likely categories for each token in the sentence. The parser then attempts to build a spanning analysis from the proposed categories, using the modified CKY algorithm described in Steedman (2000). The supertagging phase dramatically reduces the search space the parser must explore, making the C&C parser very efficient.
Motivation
It is important to note that the concepts motivating this paper could be applied to any grammar formalism. However, our experiments were conducted using CCG and the C&C parser for a number of reasons, which are outlined throughout the paper.
In order for our "one structure per n-gram" idea to work in practice, the parsed data must possess two properties. Firstly, there must be a small number of n-grams which account for a large percentage of the total n-grams in the corpus. If this property were not present, it would imply that most of the n-grams within the text appear very infrequently. As a result, the database containing the memoised analyses would grow very large, as there would be no n-grams which are clearly more useful to memoise than others. The result would be that the time taken to load the analyses from the database would exceed the time taken to let the parser construct a derivation from scratch.
The second property is that the most frequent n-grams in the corpus must, on average, occur with very few distinct analyses. If the most frequent n-grams in the corpus all occurred with a large number of different analyses, then every time we see these frequent n-grams in the future, all of these analyses would have to be loaded from the database. This again would result in the database loading taking more time than letting the parser construct the derivation from scratch. If, on the other hand, the most frequent n-grams in the corpus occur with only a very small number of analyses, then the time taken to load the pre-constructed structures should be less than the time the parser would take to construct the derivation from scratch.
Analysis
By analysing all of the n-grams within sections 02 to 21 of CCGbank for varying n, we were able to show that, under a very basic analysis, CCGbank satisfies both the properties discussed in Section 3. The results of this analysis can be seen in Table 1.
Table 1: n-gram statistics over CCGbank sections 02 to 21.

n-gram size                  2      3      4
Avg number derivations       1.19   1.09   1.04
Always form constituents     23%    10%    5%
Never form a constituent     73%    89%    93%

One interesting result here is the average number of derivations that the various n-grams occur with. On its first attempt at parsing a sentence, the C&C parser already introduces a certain amount of lexical ambiguity per token through its supertagger (Clark and Curran, 2007). Since the average number of derivations for the varying sized n-grams is less than the ambiguity introduced during this first attempt at the parsing process, inserting pre-built chart structures can potentially decrease the overall parsing time, as the pre-built structures would introduce less ambiguity to the overall parse than what the parser would provide normally.
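A sketch of this kind of corpus analysis is shown below: for each n-gram it counts occurrences and the number of distinct analyses received across parses. The derivation records are toy stand-ins for serialised chart structures, not CCGbank itself.

```python
# Sketch of the n-gram analysis: count occurrences and distinct analyses per
# n-gram over a set of parsed sentences (toy records, not CCGbank itself).
from collections import defaultdict

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Each record: (tokens, {ngram: serialised analysis assigned in that parse})
parses = [
    (["head", "of", "the", "state"], {("of", "the"): "of/the:forward-composed"}),
    (["part", "of", "the", "company"], {("of", "the"): "of/the:forward-composed"}),
]

seen = defaultdict(int)
distinct = defaultdict(set)
for tokens, analysed in parses:
    for ng in ngrams(tokens, 2):
        seen[ng] += 1
        if ng in analysed:
            distinct[ng].add(analysed[ng])

for ng in sorted(seen, key=seen.get, reverse=True):
    print(ng, "occurrences:", seen[ng], "distinct analyses:", len(distinct[ng]))
```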
Constituents
The first idea explored is how well we can do by storing only n-grams which primarily form constituents. Table 2 shows the 10 most frequent bigrams in CCGbank sections 02 to 21 which primarily form constituents. The columns show the number of times the n-gram was seen forming and not forming a constituent, as well as the number of unique constituent-forming analyses formed. A number of interesting observations can be made here. Firstly, the number of times these bigrams occur drops off very quickly, with the 4th most frequent bigram appearing just under half the number of times the most frequent bigram occurs. This drop-off contradicts our first desirable property for the corpus, that there should be a large number of frequent n-grams.
Observing the numbers in the last column of Table 2, it is easily seen that only 3 of the top 10 bigrams occur with less than 5 unique derivations, which goes against our second desirable property, that the most frequent n-grams occur with very few unique derivations.
These two factors indicate that an approach which persists only constituent-forming n-grams in these databases will not perform well, as neither of the two properties discussed in Section 3 are fulfilled.
Non-constituents
Section 4.1 showed that an approach to this problem which utilises only constituent-forming n-grams most likely will not produce the desired speed boost due to the properties mentioned in Section 3 not being upheld.
The next natural direction is to store analyses for the non-constituent-forming n-grams. CCG type raising and composition allow us to store non-constituent-forming analyses for these n-grams in the databases, yet still use the stored derivations later to form semantically correct spanning analyses. For example, one frequent non-constituent-forming occurrence of the phrase of the in CCGbank is in of the company, where the is forward applied to company before of can be joined with the. Instead, we can use CCG forward composition to combine of and the directly into a constituent-forming analysis and insert that derivation into the pre-constructed database. This chart structure can then be reused with CCG forward application to construct a spanning analysis of the original phrase. Using the forward-composed version of the bigram of the, an analysis for the whole phrase can still be constructed, even though in the original derivation of and the did not form a constituent. This technique of utilising CCG forward composition and type raising allows us to add n-grams which primarily do not form constituents into the database.
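A sketch of the forward composition rule (X/Y Y/Z ⇒ X/Z) used here is given below, reusing the toy category encoding from the earlier sketch; the categories chosen for of and the are illustrative.

```python
# Sketch of CCG forward composition: X/Y  Y/Z  =>  X/Z. This is the rule used
# above to turn the non-constituent bigram "of the" into a reusable constituent.
def forward_compose(left, right):
    if (isinstance(left, tuple) and left[1] == "/" and
            isinstance(right, tuple) and right[1] == "/" and
            left[2] == right[0]):
        return (left[0], "/", right[2])
    return None

of = (("NP", "\\", "NP"), "/", "NP")    # (NP\NP)/NP
the = ("NP", "/", "N")                  # NP/N
of_the = forward_compose(of, the)       # (NP\NP)/N, ready to apply to a noun
print(of_the)
```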
Prepositional Phrase Attachment
This technique does not work all of the time, although it does work in many cases. One situation where it fails is prepositional phrase attachment, as in the phrase on the king of England. If we were to apply to the bigram on the the same forward-composed treatment described earlier for of the, the wrong analysis of this phrase would be constructed.
While an NP was still the resultant overall category assigned to the phrase, the internal noun phrases are incorrect; the named entity the king of England is not represented within this incorrect derivation.
From an implementation point of view, being able to construct and use this forward composed parse structure for of the involves violating one of the normal-form constraints proposed in Eisner (1996) to eliminate CCG's "spurious ambiguity". The constraint which was violated states that the left child of forward application cannot be the result of forward composition, as is the case in our previous example. The C&C parser implements these Eisner constraints, and as such a special rule was added to the parser to allow any chart structures which were loaded from a pre-constructed database to violate the Eisner constraints.
Coordination
In CCG parsing, commas can be parsed in one of two ways depending on their semantic role in the sentence. They are either used for coordination or they are absorbed. Consider the CCG derivation for the sentence shown in Figure 2. The second comma, between England and owned, is absorbed, as shown in the second last line of the derivation. The first comma, however, between George and the king of England, is used to express apposition. Apposition in CCG is represented using the same coordination structure which and uses: the conjoining combinator. This combinator is denoted as conj or Φ in CCG. The type signature of this combinator is

X conj X ⇒Φ X

stating that the CCG category has to be the same on both sides of a conj, and when the functor is invoked, the resultant category is the same. Our n-gram pre-construction attempts to memoise analyses based purely on the tokens of n-grams. Because the comma appears as a conj, we are unable to use any n-grams which contain commas in our database, as at the token level it is not possible to determine if the comma will be absorbed or will be used in apposition.

Table 3 shows the 15 most frequent bigrams in CCGbank sections 02 to 21. The first thing to note about this table is that only two of the top 15 most frequent bigrams primarily form a constituent, again leading to the conclusion that using only constituent-forming bigrams is not the correct approach to the problem. The second point to observe is that seven out of these 15 bigrams contain a comma, which, as described in Section 4.2.2, implies these cannot be used in our database. The Σ column shows an accumulative sum of the number of tokens covered in sections 00 to 21 just by using the bigrams in the table. The coverage figures shown in the neighbouring column show this sum as a percentage of the total number of tokens in sections 00 to 21. This shows that by considering just the 15 most frequent bigrams, a coverage of 6.5% of the total number of tokens has been achieved. If a trend like this continues linearly down this list of frequency-sorted bigrams and a pre-constructed analysis for the first 1000 bigrams could be memoised, for example, there is great potential for the parse time to be improved.
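A sketch of the cumulative-coverage calculation reported in the Σ column is given below; it simply counts two covered tokens per occurrence of a memoised bigram and ignores overlaps, which is enough to illustrate the trend.

```python
# Sketch of cumulative token coverage for the k most frequent bigrams.
# Counts two tokens per bigram occurrence and ignores overlapping bigrams.
from collections import Counter

def cumulative_coverage(sentences, k):
    bigram_counts = Counter()
    total_tokens = 0
    for tokens in sentences:
        total_tokens += len(tokens)
        bigram_counts.update(zip(tokens, tokens[1:]))
    covered = sum(2 * count for _, count in bigram_counts.most_common(k))
    return covered / total_tokens

corpus = [["of", "the", "company"], ["in", "the", "house"], ["of", "the", "state"]]
print(f"{cumulative_coverage(corpus, 2):.1%}")
```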
Evaluation
The effect of these n-gram databases on the parsing process is evaluated in terms of the overall parsing time, as well as the accuracy of the resultant derivations. The accuracy is measured in terms of F-score values for both labelled and unlabelled dependencies when evaluated against the predicate-argument dependencies in CCGbank (Clark and Hockenmaier, 2002). The parsing times reported do not include the time to load the grammar, statistical models, or our database.
Data
The models used by the C&C parser for our experiments were trained using two different corpora. The WSJ models were trained using the CCG version of the Penn Treebank, CCGbank (Hockenmaier, 2003;Hockenmaier and Steedman, 2007), which is available from the Linguistic Data Consortium 1 . The second corpus is a version of CCGbank where the noun phrase bracketing has been corrected (Vadas and Curran, 2008;Vadas, 2009).
Tokyo Cabinet
Tokyo Cabinet 2 is an open source, lightweight database API which provides a number of different database implementations, including a hash database, B+ tree, and a fixed-length key database. Our experiments used Tokyo Cabinet to store the pre-constructed n-grams because of its ease of use, speed, and maximum database size (8 EB). The maximum database size is important because more data is better for the database construction phase.

Table 3: Constituent statistics about the 15 most frequent bigrams in CCGbank 02 to 21. The columns show the number of times the bigram was seen forming a non-constituent, forming a constituent, and then the number of unique constituent-forming chart structures. The next two columns show accumulatively what percentage of sections 02 to 21 these bigrams alone cover. The last column shows the ambiguity the C&C supertagger associates with each n-gram.

Figure 3: When creating the trigram database, if a trigram forms a constituent in the chart, it is added to the database.
Constructing the n-gram Databases
The construction of the final database is a multi-stage process, with intermediate databases being generated and then refined. The first stage in this process is to parse all of the training data, which in our case is WSJ sections 02 to 21. The parse tree for every sentence is then analysed for constituent-forming n-grams. If a constituent-forming n-gram is found and its size (number of tokens) is one for which we would like to construct a database, then the n-gram and its corresponding chart structure are written out to a database. These first-stage databases are implemented using a simple key-value Tokyo Cabinet hash database. The structure of the keys and values in this database is

Key = (n-gram, hash of chart)
Value = (chart, occurrence counter)

The chart attribute in the value is a serialised version of the chart which can be unserialised at some later point for reuse. The occurrence counter is incremented each time an occurrence of a key is seen in the parsed training data. A record is also kept in the database of the number of times a particular n-gram was seen forming a non-constituent, for use in the filtering stage discussed later in Section 6.4. This process of n-gram chart serialisation is illustrated in Figure 3. When parsing the sentence A B C D E, the trigram B C D formed a constituent in the spanning analysis for the sentence. Because it formed a constituent, the trigram is added to the first-stage trigram database.
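A sketch of these first-stage records is shown below, with an ordinary dictionary standing in for the Tokyo Cabinet hash database and pickle standing in for the chart serialisation; the chart object itself is a toy placeholder.

```python
# Sketch of the first-stage database: key = (n-gram, hash of serialised chart),
# value = (serialised chart, occurrence counter). A dict stands in for the
# Tokyo Cabinet hash database and pickle for the chart serialisation.
import hashlib
import pickle

db = {}

def record_constituent(ngram, chart):
    blob = pickle.dumps(chart)
    key = (ngram, hashlib.sha1(blob).hexdigest())
    serialised, count = db.get(key, (blob, 0))
    db[key] = (serialised, count + 1)

# Toy chart structure for the constituent-forming trigram B C D.
record_constituent(("B", "C", "D"), {"span": (1, 3), "category": "NP"})
record_constituent(("B", "C", "D"), {"span": (1, 3), "category": "NP"})

for (ngram, chart_hash), (_, count) in db.items():
    print(ngram, chart_hash[:8], "count:", count)
```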
Frequency Reduction
When constructing the initial set of databases over a body of text, a large number of the n-grams which were memoised should not be kept in the final databases because they occur too infrequently, or because the number of times they are seen forming a non-constituent outweighs the number of times they are seen forming a constituent. As such, a frequency based filtering stage is performed on the initial set of databases to produce the final database.
An n-gram was chosen to be filtered out differently depending on whether or not it was seen forming a non-constituent during the database development phase. Equations 1 and 3 describe the predicates which need to be fulfilled in order for a particular n-gram not to be filtered out. In these inequalities, C is a mapping from chart structure to frequency count for the current n-gram, the 0th index into C is the non-constituent-forming frequency count, and X and Y are parameters to the filtering process.
If an n-gram was seen forming a non-constituent during the initial database development phase, then Equation 1 is used. If an n-gram was never seen forming a non-constituent during the development phase, then Equation 3 is used.

Figure 4: Illustration of using the n-gram databases. The trigram B C D is loaded from the pre-constructed database, and blocks out the corresponding cells.
The values given to the X and Y parameters in the filtering process were determined through a trial and error process, training on sections 02 to 21 and testing on section 00 of the noun phrase corrected CCGbank. For all of our results, X was set to 0.05 and Y was set to 15.
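Since Equations 1 and 3 are not reproduced in the text above, the sketch below is only an illustrative guess at the filtering predicates: an n-gram is kept if it is seen as a constituent often enough (Y) and, when it is also seen as a non-constituent, only if that happens for a small enough fraction (X) of its occurrences.

```python
# Hedged sketch of the frequency filter; the exact Equations 1 and 3 are not
# given in the text, so these predicates are an illustrative reconstruction.
X, Y = 0.05, 15

def keep(counts):
    """counts[0]: non-constituent frequency; counts[1:]: per-analysis frequencies."""
    constituent = sum(counts[1:])
    if counts[0] > 0:  # n-gram was seen forming a non-constituent (cf. Equation 1)
        return constituent >= Y and counts[0] / (counts[0] + constituent) <= X
    return constituent >= Y  # never seen as a non-constituent (cf. Equation 3)

print(keep([3, 80]))    # True: frequent constituent, rarely a non-constituent
print(keep([40, 20]))   # False: mostly a non-constituent
print(keep([0, 4]))     # False: too infrequent to be worth memoising
```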
Using the n-gram Databases
Once the n-gram database has been constructed, it is used when parsing sentences in the future. For every sentence that is parsed, the parser checks to see if any n-gram contained within the current sentence exists within the database, and if so, uses the memoised analysis for the n-gram. This process is illustrated in Figure 4. This n-gram check is performed by iterating top to bottom, left to right through the chart for the current sentence. A consequence of this is that if two n-grams overlap and both exist in the database, then only the first n-gram encountered will have its analyses loaded in from the database. Once the analyses are loaded into the current chart for the n-gram, the corresponding cells in the current chart are blocked off from further use in the parse tree creation process (CKY), as illustrated in Figure 4. It is due to this cell blocking that the pre-constructed charts for the 2nd overlapping n-gram are not also loaded.
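The lookup pass can be sketched as follows: spans are scanned from the top of the chart downwards and left to right, any memoised n-gram whose token positions are still free is loaded, and those positions are then marked so that overlapping n-grams are skipped. This is an illustration of the procedure described above, not the parser's actual code.

```python
# Sketch of the lookup pass: scan larger spans first, load memoised n-grams,
# and mark their token positions so overlapping n-grams are not also loaded.
def preload(tokens, database):
    n = len(tokens)
    taken = set()       # token positions already covered by a loaded n-gram
    loaded = []
    for span in range(n, 1, -1):                # top of the chart first
        for start in range(0, n - span + 1):    # left to right
            ngram = tuple(tokens[start:start + span])
            positions = set(range(start, start + span))
            if ngram in database and not (positions & taken):
                loaded.append((start, span, database[ngram]))
                taken |= positions
    return loaded

db = {("B", "C", "D"): "memoised chart for B C D"}
print(preload(["A", "B", "C", "D", "E"], db))   # [(1, 3, 'memoised chart for B C D')]
```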
Results
A set of experiments were conducted using CCGbank sections 02 to 21 as the corpus for developing our database. This corpus was parsed using a variety of statistical parsing models. Section 00 was then used for evaluation. Table 4 shows our preliminary results. The first two parsing models used were trained on the original CCGbank (WSJ derivs and hybrid), and the second two models were trained on the noun phrase corrected CCGbank corpus described in Vadas and Curran (2008) (NP derivs and hybrid). The databases used to obtain these results contained only constituent-forming n-grams.
These results show no significant change in either speed or F-score. One positive aspect of this non-significant change is that performance did not decrease even though additional computation is needed to perform our database lookups and chart insertion. The C&C parser is already very fast, and the amount of time taken to perform the chart loading and insertion from the databases happens to be very similar to the time taken to construct the derivations from scratch.
Table 5: Memoised structures were constructed for the most frequent derivations of various non-constituent-forming bigrams, which were then used and evaluated against section 00 of the noun-phrase corrected CCGbank.

Another experiment was then performed in order to assess the potential of using non-constituent-forming n-grams for memoisation. The bigrams of the and in the are the two most frequently occurring non-constituent-forming bigrams in CCGbank sections 02 to 21. In order to assess the viability of using non-constituents in our database, our experiments here used only the most frequently occurring analyses for these two bigrams. If no improvement in performance is observed using the most frequently occurring bigrams, then the idea is not worth pursuing further. The results of these experiments can be seen in Table 5. As was the case in our constituent-forming experiment, no significant change in performance was achieved, positive or negative.
Conclusion
Through the analysis of this one structure per n-gram idea using CCG, combined with a preliminary set of empirical results, we have shown that memoising parse structures based on frequently occurring n-grams does not result in any form of performance improvement.
"year": 2009,
"sha1": "38e9d98efec2303f49134ffad92cbd5f33289216",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "38e9d98efec2303f49134ffad92cbd5f33289216",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Reducing Unnecessary Brain Computed Tomography Scan in a Tertiary Center
INTRODUCTION
Acute bacterial meningitis is a rapidly progressive disease that causes substantial morbidity and mortality [1]. The rationale behind diagnosis is the identification of the causative organism via Lumbar Puncture (LP) [2]. This is an invasive procedure used to remove Cerebrospinal Fluid (CSF) from the subarachnoid space [3]. In rare cases, however, the withdrawal process can hasten brain herniation because of Increased Intracranial Pressure (ICP) as well as the continuous leak of CSF through the needle opening into the subarachnoid membrane [3].
Clinicians routinely use brain Computed Tomography (CT) prior to LP to identify ICP. The indiscriminate use of imaging unnecessarily exposes patients to radiation and delays antibiotic initiation [4]. Furthermore, brain herniation has been reported in normal CT scans because of edema that cannot be detected radiologically [5].
A number of studies, in accordance with guidelines, have demonstrated that abnormal CT scans are rare in children without clinical signs of ICP [6,7]. These signs include disturbed consciousness, hemodynamic instability, fixed dilated pupils, focal neurological signs, or seizures. There is limited research in Saudi Arabia regarding the proportion of abnormal brain CT scans in children with suspected meningitis.
The present study thus seeks to evaluate the frequency of CT scan abnormalities in children with suspected meningitis prior to LP as well as clinical predictors for such abnormalities. The secondary aim of this study is to reduce the number of unnecessary brain CT scans and, by extension, radiation exposure. This project, along with further research, will help us improve pediatric care.
PATIENTS AND METHODS
The present study was conducted in the pediatrics department at Arryan Hospital/Dr Sulaiman Al-Habib Medical Group (HMG) in Riyadh, Saudi Arabia. HMG is a private tertiary center and a Joint Commission International-accredited hospital. The study was approved by the HMG Institutional Review Board (Approval No: RC19.04.31).
This retrospective study aims to evaluate the frequency of CT abnormalities in children with suspected meningitis prior to LP and to identify clinical predictors of such CT scan abnormalities, with a secondary aim of reducing unnecessary brain CT scans and radiation exposure. Our inclusion criteria were patients of both sexes aged 2-15 years with suspected meningitis prior to LP. The diagnosis of meningitis is based on the patient's clinical presentation, which includes fever, headache, lethargy, neck stiffness, altered mental status, and nonblanchable rashes [8]. Our exclusion criteria were patients younger than 2 years, those with comorbidities (e.g., immunodeficiency), and those who underwent CT scans on suspicion of other neurological disorders.
Article History: Received 22 June 2020; Accepted 28 August 2020

Keywords: brain CT scan; meningitis; lumbar puncture; brain herniation; pediatric

ABSTRACT
Brain Computed Tomography (CT) is routinely requested before Lumbar Puncture (LP) to rule out increased intracranial pressure. However, normal brain radiography does not abate the risk of herniation and unnecessarily delays the course of treatment. Thus, the primary aims of this study were to evaluate the frequency of brain CT scan abnormalities in children with suspected meningitis and to assess under what circumstances such changes are expected, with a secondary aim of reducing unnecessary brain CT scans. A retrospective study was conducted on 86 children with suspected meningitis before LP. Patients who underwent CT scans on suspicion of other neurological disorders were excluded. CT scans were reported normal in 94.2% (n = 81) of cases. Altered sensorium was present in 40% (n = 2) of cases with an abnormal CT scan, compared with 7.4% (n = 6) of those with a normal CT scan (p = 0.01). Furthermore, 46.5% of neuroimaging requests were not indicated. Herniation was not reported in our study. We conclude that indiscriminate brain CT scans have a limited role without clinical indications. In agreement with previous research, we recommend that requests for brain CT prior to LP be made on particular indications, as per clinical guidelines. Future prospective studies and re-auditing of practice continue to expand this body of evidence.
The data were obtained from an electronic hospital record system (Volunteers for Intercultural and Definitive Adventures) and classified on a Microsoft Office Excel 2010 spreadsheet (Microsoft Corporation, Redmond, WA, USA). Our parameters included age, sex, CT scan results, and clinical indications from the imaging request. The results of CT scans according to the interpretation of the radiologist were divided into two groups: Group I, representing normal CT scans, and Group II, representing abnormal CT scans (e.g., focal abnormalities or brain edema). Data were displayed as descriptive statistics, with mean ± Standard Deviation (SD) for continuous data and number/percentage for categorical data. Statistical analysis was performed using SPSS version 19 (IBM Corporation, Armonk, NY, USA). Student t-test and Chi-square test were used to measure continuous and categorical data, respectively. Logistic regression analysis was incorporated to estimate the impact of risk factors on abnormal CT scan result. Statistical significance was measured with a p-value <0.05.
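A synthetic illustration of the analyses named here (chi-square test, Student t-test and logistic regression on an abnormal-CT outcome) is sketched below; the data are simulated and the variable names are hypothetical, so this is not the study's actual analysis code.

```python
# Illustrative sketch with simulated data: chi-square, t-test and logistic
# regression for predictors of an abnormal CT scan (not the study's real data).
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 86
altered = rng.binomial(1, 0.10, n)                     # altered mental status
age = rng.normal(7.3, 3.6, n)                          # age in years
abnormal = rng.binomial(1, np.where(altered == 1, 0.40, 0.05))  # abnormal CT

# Chi-square test on the 2x2 table of altered sensorium vs CT result.
table = [[np.sum((altered == a) & (abnormal == b)) for b in (0, 1)] for a in (0, 1)]
chi2, p_chi, _, _ = stats.chi2_contingency(table)

# Student t-test comparing age between the abnormal and normal CT groups.
t_stat, p_t = stats.ttest_ind(age[abnormal == 1], age[abnormal == 0])

# Logistic regression: adjusted effect of age and altered sensorium.
exog = sm.add_constant(np.column_stack([age, altered]))
fit = sm.Logit(abnormal, exog).fit(disp=0)

print(f"chi-square p = {p_chi:.3f}, t-test p = {p_t:.3f}")
print("odds ratios (const, age, altered):", np.exp(fit.params))
```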
Lastly, clinical indications from imaging were audited based on recommendations provided by the American Academy of Pediatrics (AAP) and National Institute of Health and Care Excellence (NICE). Clinical indications include disturbed consciousness, hemodynamic instability, fixed dilated pupils, focal neurological signs, and seizures.
Description of the Study Population
The records of a total of 93 patients with suspected meningitis were initially considered during the study period. Among these, seven patients were excluded. A total of 86 patients met the necessary inclusion criteria and were recruited for the study (Figure 1).
The age of participants ranged from 2 to 15 years, with a mean of 7.31 ± 3.58 years and a sex frequency of 65.1% (56/86) male and 34.9% (30/86) female. Most patients presented with seizures (43%) and lethargy (23.1%). Baseline clinical characteristics and CT scan findings are shown in Table 1.
Classification of Brain CT Findings
The CT scans have been split up according to radiological findings. Normal brain imaging includes non-pathological brain changes or those with brain atrophies.
Abnormal CT scans have brain edema, meningeal enhancement, or focal lesions such as abscess, hemorrhage, or mass effect. The distribution of the patients according to brain CT scan findings is illustrated in Table 2.
Clinical Predictors of CT Scan Abnormalities
Logistic regression was incorporated to adjust for different confounders. Altered mental status was the only significant predictor of brain CT scan abnormalities in our study (Table 3).
Indications of Brain CT Scans Prior to LP
The study revealed that 46 (53.5%) of the brain CT scans performed were indicated according to AAP/NICE guidelines, with a mean of 6.77 ± 3.25 years. Forty (46.5%) scans, which constitute the rest of the brain CT scans, were not indicated. Of the patients who underwent LP, none had a brain herniation. The frequency of CT scan abnormalities based on clinical indication is illustrated in Figure 2.
DISCUSSION
The trend of performing brain imaging prior to LP is on the rise, even in patients with no clinical signs. The information available in pediatric care is limited. Thus, this study intends to provide an analytic overview of brain CT scans performed prior to LP in children with suspected meningitis. Demographic variables among the study group were statistically nonsignificant.
The indiscriminate use of CT scans in children provides limited information. Our study demonstrated that the vast majority of scans performed were normal or unchanged (94.2%) and that the observed frequency of abnormal scans prior to LP was low (5.8%). Several studies have concordant results. For instance, the Vancouver study described normal brain CT scans for 41 patients screened prospectively [9]. Similarly, Seleem et al. [10] reported a small number of abnormalities (0.06%) for 101 patients assessed.
Meningitis is associated with a variable degree of brain edema that is either mild in the absence of ICP signs or is not detected radiologically [5]. Clinicians are thus advised to consider clinical assessment as a baseline prior to LP and to reserve imaging for those indicated. Accordingly, our study found two patients with brain edema who underwent LP without complications in the absence of ICP signs. Hasbun et al. [11] similarly recommended clinical features to identify those who are unlikely to develop brain herniation.
We analyzed the following clinical variables to determine their relationship to brain CT scan abnormalities: age, sex, new-onset seizures, focal neurological deficits, and altered consciousness. No significant demographic differences were seen in our data. Seizures were the most commonly observed findings in our patients; however, they did not signify abnormal brain CT scans. In contrast, Chen et al. [12] reported notable neuroimaging abnormalities in 61 (26%) patients with seizures. A transient increase in ICP within 30 min of a seizure is a well-documented phenomenon; LP is therefore contraindicated during this time [12]. Altered consciousness was found to be significantly associated with brain CT scan abnormalities in our study. Correspondingly, Vafaie et al. [13] demonstrated a vital relationship between CT changes and decreased levels of consciousness. In a prospective study of 113 patients, Gopal et al. [14] likewise concluded that altered mental status strongly predicted CT scan abnormalities.
Finally, as part of a quality improvement project, we reviewed the practice of requesting CT scans and whether it met the clinical guidelines issued by the AAP and NICE. The study revealed that 46.5% of CT scans were not indicated. Of these, two displayed image abnormalities; however, none had brain herniation after puncture. Studies have revealed a significant association between the estimated radiation doses delivered by cerebral CT scans and the subsequent incidence of leukemia and brain tumors and the rate of cataract formation [15]. Therefore, it is advised that LP safety be assessed on clinical rather than radiological grounds. Blood culture and antibiotics should not be delayed if a CT scan is required [7].
Nonetheless, these findings have to be seen in light of several limitations. The study was dependent on the availability of well-documented and detailed notes. Inaccuracy of information could have led to the underestimation of CT scan indications. Future prospective studies with larger sample sizes are thus recommended.
CONCLUSION
The majority of brain CT scans in children with suspected meningitis prior to LP were normal, and altered sensorium was significantly correlated with CT brain abnormalities. Indiscriminate brain imaging is thus not advisable. The study recommends guideline adherence to select those most likely to benefit from brain imaging. Further prospective studies are advisable to expand this body of evidence.
CONFLICTS OF INTEREST
The authors declare they have no conflicts of interest.
"year": 2020,
"sha1": "03981c11d3182c798595a7e382e3d800876ebe87",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125944787.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e361f05f216ad8a5135ea3ce022373fbd735ce4c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Is the diffuse gamma background radiation generated by galactic cosmic rays?
We explore the possibility that the diffuse gamma-ray background radiation (GBR) at high galactic latitudes could be dominated by inverse Compton scattering of cosmic ray (CR) electrons on the cosmic microwave background radiation and on starlight from our own galaxy. Assuming that the mechanisms accelerating galactic CR hadrons and electrons are the same, we derive simple and successful relations between the spectral indices of the GBR above a few MeV, and of the CR electrons and CR nuclei above a few GeV. We reproduce the observed intensity and angular dependence of the GBR, in directions away from the galactic disk and centre, without recourse to hypothetical extragalactic sources.
Introduction
The existence of an isotropic, diffuse gamma background radiation (GBR) was first suggested by data from the SAS 2 satellite (Thompson & Fichtel 1982). The EGRET instrument on the Compton Gamma Ray Observatory confirmed this finding: by removal of point sources and of the galactic-disk and galactic-centre emission, and after an extrapolation to zero local column density, a uniformly distributed GBR was found, of alleged extragalactic origin (Sreekumar et al. 1998). Above an energy of ∼ 10 MeV, this radiation (to which we shall refer throughout simply as "the GBR") has a featureless spectrum, shown in Fig. 1, which is very well described by a simple power-law form, dF/dE ∝ E^-β, with β ≈ 2.10 ± 0.03 (Sreekumar et al. 1998).
The origin of the GBR is still unknown. The published candidate sources range from the quite conventional to the decidedly speculative. Perhaps the most conservative hypothesis for the origin of an isotropic GBR is that it is extragalactic, and originates from active galaxies (Bignami et al. 1979; Kazanas & Protheroe 1983; Stecker & Salamon 1996). The fact that blazars have a γ-ray spectrum with an average index 2.15 ± 0.04, compatible with that of the GBR, supports this hypothesis (Chiang & Mukerjee 1998). The possibility has also been discussed that Geminga-type pulsars, expelled into the galactic halo by asymmetric supernova explosions, may be abundant enough to explain the GBR (Dixon et al. 1998; Hartmann 1995). More exotic hypotheses include a baryon-symmetric universe (Stecker et al. 1971), now excluded (Cohen et al. 1998), primordial black hole evaporation (Page & Hawking 1976; Hawking 1977), supermassive black holes formed at very high redshift (Gnedin & Ostriker 1992), annihilation of weakly interacting big-bang remnants (Silk & Srednicki 1984; Rudaz & Stecker 1991), among others.
However, the EGRET GBR data in directions above the galactic disk and centre show a significant deviation from isotropy, correlated with the structure of our galaxy and our position relative to its centre. This advocates a local (as opposed to cosmological) origin for the GBR. Indications of a large galactic contribution to the GBR at large latitudes were independently found by Dixon et al. (1998) by means of a wavelet-based "non-parametric" approach that makes no reference to a particular model. Moskalenko & Strong (2000) also found that the contribution of inverse Compton scattering of galactic cosmic ray electrons to the diffuse γ-ray background is presumably much larger than previously thought. In this paper we go one step further and explore in detail the possibility that the diffuse gamma-ray background radiation at high galactic latitudes could be dominated by inverse Compton scattering of cosmic ray (CR) electrons on the cosmic microwave background radiation and on starlight from our own galaxy. In Section 2 we briefly review the GBR data and the evidence for its correlation with our position in the Galaxy.
The CR-proton and CR-electron spectra are briefly reviewed in Section 3. The origin, spectrum and composition of non-solar cosmic ray protons and nuclei have been debated for almost a century. The measurements now extend over some 30 orders of magnitude in flux and some 15 orders of magnitude in energy, up to an astonishing E ∼ 3 × 10^11 GeV (Bird et al. 1995; Takeda et al. 1998; Berezinskii et al. 1990, and references therein). Above ∼ 5 GeV, this spectrum also has a power-law form E^−β, with two small variations in the "index" β at the so-called "CR knee" and "CR ankle". The local spectrum of CR electrons, shown in Fig. 2, is much harder to measure; it is only known up to ∼ 10^3 GeV and, above ∼ 5 GeV, it is also well described by a simple power law.
In Sections 4 and 5 we discuss relations between the indices of the GBR and the CR electron and proton spectra. In so doing, we make few and very simple assumptions: that the mechanism accelerating CR hadrons and CR electrons is the same (a moving magnetic "mirror"), that the locally-measured electron spectrum is representative of its average form throughout the Galaxy, that above a certain energy, inevitably, the electron spectrum is modulated by inverse Compton scattering on starlight and on the microwave background radiation, and that the GBR is dominated by the resulting Compton up-scattered photons. This allows one to derive, successfully, the GBR index from the electron index and the electron index from the proton index. The GBR index, as observed by EGRET, is uncannily directionally uniform. We interpret this fact as strong support for our simple assumptions.
In Section 6 we tackle a more difficult and potentially controversial subject: the origin and magnitude of the GBR. In a sense, our proposed explanation -that the GBR originates from inverse Compton scattering in our own galaxy -is more conservative than any of the previously suggested origins.
The non-conventional aspect of our hypothesis is that, in order to reproduce the observed intensity of the GBR, we must assume the scale height of our galaxy's CR-electron distribution to be almost twice the traditionally-accepted upper limit. Because of this, in Section 6, we briefly review the basis of the conventional wisdom and our critical view of it, whose main points are the following. Moskalenko, Strong and their collaborators have developed a very detailed understanding of the CR, radio and γ observations of our galaxy. To fit the data, their models require a freely parametrized reacceleration of electrons, presumably by the motion of turbulent magnetic fields (e.g., Seo & Ptuskin 1994). Their models introduce a cutoff z_h for the height above the galactic plane above which cosmic rays freely escape. They find an upper limit z_h < 12 kpc, on the basis of a fit to the 10Be/9Be ratio observed by Ulysses (Connell 1998). This result is "soft": twice the upper limit would still be compatible with the ensemble of data (Lukasiak et al. 1994).
Moreover, the galactic CR proton distribution extracted from a fit to EGRET γ-ray data actually favours an ad hoc distribution of CR sources that is not as well localized in the disk as the conventional supernova-remnant sources are (Webber 1997), even if z_h = 20 kpc or more. This point, and the necessity to invoke CR reacceleration, indicate that scale heights of the CR electron distribution in excess of the 12 kpc "upper limit" may not be out of the question. Our results are optimized by a scale height of roughly 20 kpc. Such a large scale height is not in contradiction with radio synchrotron emission from our galaxy if the galactic disk and its magnetic field are embedded in a larger magnetic halo with a much weaker field.
In studying the possibility that the diffuse GBR is not extragalactic, one has two choices. The first is to extend to high galactic latitudes the elaborate models (with many parameters, reacceleration, and ad hoc modifications of the CR-proton and CR-electron energy and source distributions) that have been developed to describe the intricate nature of the observations at low galactic latitudes (Moskalenko & Strong 2000). The second is to adopt our very naive set of hypotheses and employ a simple cosmic-ray model with, by conventional standards, a large scale height for CR-electrons. Models of this type (Dar & Plaga 1999), wherein cosmic ray sources are directly injected at high galactic latitudes, have actually been proposed.
In Section 7 we discuss the magnitude and angular-dependence of the two dominant contributions to the GBR within our model: inverse Compton scattering of galactic CR-electrons off the cosmic background radiation and starlight. In Section 8 we compute the small additive effect of sunlight, and in Section 9 we estimate the contribution from external galaxies, which is also sub-dominant. In Section 10 we compare our predictions with the data on the intensity and the angular dependence of the GBR. The results are very satisfactory and, within our model, lead to the conclusion that the GBR can be dominated by the emission from our own galaxy. We summarize our conclusions and predictions in Section 11.
The GBR data
We call "the GBR" the diffuse emission observed by EGRET by masking the galactic plane at latitudes |b| ≤ 10 o , as well as the galactic centre at |b| ≤ 30 o for longitudes |l| ≤ 40 o , and by extrapolating to zero column density, to eliminate the π 0 and bremsstrahlung contributions to the observed radiation and to tame the modeldependence of the results. Outside the mask, the GBR flux integrated over all directions in the observed energy range of 30 MeV to 120 GeV, shown in Fig. 1, is well described by a power law: (1) The overall magnitude in Eq. (1) is sensitive to the model used to subtract the foreground (Sreekumar et al. 1998;), but the spectral index is not. The EGRET data are given in Sreekumar et al. (1998) for 36 (b, l) domains, 9 values for each half-hemisphere. The spectral index is, within errors, extremely directionally uniform, as shown in Fig. 3, where we have plotted the EGRET results as functions of θ, the observation angle relative to the direction to the galactic centre (cos θ = cos [b] cos [l]). The normalization is less homogeneous, but in directions well above the galactic disk and away from the galactic-centre region it has been found to be consistent with a normal distribution around the mean value: thus the claim of a possible extragalactic origin (Sreekumar et al. 1998).
In Fig. 4 we have plotted, as a function of θ, the EGRET GBR counting rate above 100 MeV. This figure clearly shows, in three out of the four quarters of the celestial sphere, an increase of the counting rate towards the galactic centre. How significant is this effect? Let χ̄² ≡ χ²/d.o.f. be the "reduced" χ² per degree of freedom. The χ̄² value for constant flux is 2.6: very unsatisfactory. A best fit of the form F = F_0 + F_1 (1 − cos θ) yields χ̄² = 1.3, a very large amelioration (for higher polynomials in cos θ the higher-order coefficients are compatible with zero: the fit does not significantly improve). Note also that at angles with cos θ larger than its mean value ⟨cos θ⟩ = 0.0246 (θ < 88.6°), 10 out of the 12 data points are above the average flux, while at angles with θ > 88.6°, 18 out of the 24 data points are below the average. The probability for a uniform distribution to produce this large or larger a fluctuation is 1.5 × 10^−4.
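The quoted significance can be roughly cross-checked with elementary counting statistics. The Python sketch below is only an illustrative sign-test-style estimate built from the counts quoted above; the published figure of 1.5 × 10^−4 may correspond to a somewhat different statistic, so exact agreement is not expected.

    from math import comb

    def binom_tail(k, n, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Counts quoted in the text: 10 of 12 points above the mean flux towards
    # the galactic centre, 18 of 24 below the mean away from it.
    p_near = binom_tail(10, 12)           # ~0.019
    p_far = binom_tail(18, 24)            # ~0.011
    print(p_near, p_far, p_near * p_far)  # joint chance of order 1e-4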
Even in directions pointing to the galactic disk and the galactic centre, EGRET data on γ-rays above 1 GeV show an excess over the expectation from galactic cosmic-ray production of π⁰'s (Pohl & Esposito 1998). Electron bremsstrahlung in gas is not the source of the 1-30 MeV inner-Galaxy γ-rays observed by COMPTEL (Strong et al. 1997), since their galactic latitude distribution is broader than that of the gas. These findings also imply that inverse Compton scattering may be much more important than previously believed (Moskalenko & Strong 2000).
The CR data
The cosmic ray nuclei have a power-law spectral flux dF/dE ∝ E^−β with an index β that changes at two break-point energies. In the interval 10^10 eV < E < E_knee ∼ 3 × 10^15 eV, protons constitute ∼ 96% of the CRs at fixed energy per nucleon, and their flux is given by Eq. (2) (Berezinskii et al. 1990, and references therein). In the interval E_knee < E < E_ankle ∼ 3 × 10^18 eV, the spectrum steepens from β_1 ∼ 2.7 to β_2 ∼ 3.0, flattening again to β_3 ∼ 2.5 above E_ankle.
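As a worked illustration of this broken power-law shape, the snippet below encodes the quoted indices; the absolute normalization of Eq. (2) is not reproduced in the text, so the flux is left in arbitrary units.

    import numpy as np

    # Piecewise power-law shape of the CR nucleon spectrum described above
    # (indices ~2.7 / ~3.0 / ~2.5); continuous at the knee and the ankle by
    # construction, with an arbitrary overall normalization.
    E_KNEE, E_ANKLE = 3e15, 3e18  # eV

    def cr_flux_shape(E_eV, beta1=2.7, beta2=3.0, beta3=2.5):
        E = np.asarray(E_eV, dtype=float)
        below_knee = (E / E_KNEE) ** -beta1
        knee_to_ankle = (E / E_KNEE) ** -beta2
        above_ankle = (E_ANKLE / E_KNEE) ** -beta2 * (E / E_ANKLE) ** -beta3
        return np.where(E < E_KNEE, below_knee,
                        np.where(E < E_ANKLE, knee_to_ankle, above_ankle))

    print(cr_flux_shape([1e12, 1e16, 1e19]))  # relative flux at three energies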
The CR flux of electrons (Prince 1979; Nishimura et al. 1980; Tang 1984; Golden et al. 1984; Evenson & Meyers 1984; Golden et al. 1994; Ferrando et al. 1996; Barwick et al. 1998; Wiebel-Sooth & Biermann 1998), shown in Fig. 2, is well fitted, from E ∼ 10 GeV to ∼ 2 TeV, by the power law of Eq. (3). The terrestrial and solar magnetic fields and the solar wind modify the electron spectrum below E ∼ 10 GeV, so that the direct observations at those energies may deviate from the local interstellar spectral shape.
Cosmic ray electrons undergo inverse Compton scattering (ICS) off the ambient photon baths: starlight and the cosmic background radiation. The spectral indices of the GBR and electron spectra can be very simply and successfully related, if the GBR dominantly consists of photons whose energy has been uplifted by ICS, as we proceed to show.
The index of the GBR spectrum
The current temperature, number density and mean energy of the CMB are T_0 = 2.728 K, n_0 ≈ 411 cm^−3, and ε_0 ≈ 2.7 kT_0 ≈ 6.36 × 10^−10 MeV (Mather et al. 1993; Fixsen et al. 1996). The galactic starlight (SL) distribution is highly non-uniform; its average energy is ε_⋆ ∼ 1 eV. Consider the ICS of high-energy electrons on these radiations. Assume the shape of the electron flux, Eq. (3), observed at E > 10 GeV, to be representative of the average galactic spectrum. For the energy range of EGRET the Thomson limit is accurate even for ICS on SL, and the eγ cross section is σ_T ≈ 0.65 × 10^−24 cm². The mean energy E_γ of the upscattered photons -or ΔE_e, the mean energy loss per collision- is given by Eqs. (4). The ICS photon spectrum originating in our galaxy is the sum of the CMB and SL contributions, Eq. (5), and is a function of the galactic latitude (b) and longitude (l) coordinates. The ICS final-photon spectrum -a cumbersome convolution (Felten & Morrison 1966) of a CR power spectrum with a photon thermal distribution- can be approximated very simply, as in Eqs. (6), where the index "i" labels the CMB and SL fluxes and E_e^i is obtained from Eqs. (4) by inverting E_γ(ε_i). We postpone to Section 6 the discussion of the model-dependent normalization factors N_⋆(b, l) and N_0(b, l): effective column densities resulting from the convolution of the space distribution of CR electrons with those of starlight and of the CMB. Introducing the CR-electron flux of Eq. (3), of the form dF_e/dE = A [E/MeV]^−β_e, into Eqs. (6), we obtain Eqs. (7). In the energy range of EGRET, the CMB and SL contributions have the same spectral index, as do the small sunlight and external-galaxy contributions discussed in Sections 8 and 9.
The photon spectral index of Eqs. (7), which is related to that of the CR-electrons through β_γ = (β_e + 1)/2, coincides with the measured one, Eq. (1). The electron spectrum of Eq. (3) describes the data in the range E_e > 5 GeV, so that Eq. (7) should be valid above E_γ ∼ 100 keV, the typical energy of photons up-scattered from the CMB. At E_γ > 50 GeV, at the upper end of the EGRET data, σ_T in the SL contribution should be replaced by the complete Klein-Nishina cross section, implying a steepening of the spectrum. The corresponding effect for the CMB contribution is at energy above the EGRET energy range.
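To make the index relation explicit, a compact version of the standard Thomson-limit argument is sketched below in LaTeX; the 4γ²ε_i/3 mean-energy formula is the textbook result that we take to be the content of Eqs. (4), and all normalizations are suppressed.

    \[
    E_\gamma \simeq \tfrac{4}{3}\,\gamma^{2}\epsilon_i
    \;\Longrightarrow\;
    E_e \propto E_\gamma^{1/2},
    \qquad
    \frac{dF_\gamma}{dE_\gamma}
    \propto \frac{dF_e}{dE_e}\,\frac{dE_e}{dE_\gamma}
    \propto E_\gamma^{-\beta_e/2}\,E_\gamma^{-1/2}
    = E_\gamma^{-(\beta_e+1)/2} ,
    \]

so that β_γ = (β_e + 1)/2; with β_e ≈ 3.2 this gives β_γ ≈ 2.1, consistent with the EGRET index.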
In deriving Eqs. (7), we have assumed that the locally-measured slope of Eq. (3) is representative of the index of the spectrum of the electrons suffering ICS to produce the GBR, wherever they may be. The spectral index of the diffuse GBR observed by EGRET is independent of direction, as shown in Fig. 3. The statistical test for a flat distribution is surprisingly good: χ̄² ∼ 0.5. This is encouraging support for our working hypothesis of an electron spectrum with a universal shape, and of a simple and dominant mechanism -ICS- to generate the GBR.
The index of the electron spectrum
To relate the spectra of CR electrons and protons, we need an estimate of the protons' spectrum at their source. A source spectrum dF_s/dE with index β_s ∼ 2.2 is obtained from collisionless shock simulations (Bednarz & Ostrowski 1998) or analytical estimates of acceleration by relativistic jets (Dar 1998). The CR spectrum of nuclei is modulated by their residence time in the Galaxy, τ_gal(E). For a steady source of CRs the energy dependence of the observed flux is roughly that of τ_gal dF_s/dE. Observations of astrophysical and solar plasmas and of nuclear abundances as functions of energy (e.g. Swordy et al. 1990) indicate that τ_gal(E) ∝ E^−0.5±0.1, explaining β_1 ∼ β_s + 0.5 ∼ 2.7, as in Eq. (2).
Practically all CR acceleration mechanisms invoke an ionized medium that is swept by a moving magnetic field, such as would be carried by the rarefied plasma in a supernova shell (Bhattacharjee & Sigl 2000) or by a 'plasmoid' of jetted ejecta (Dar & Plaga 1999). The magnetic field acts as a moving 'mirror' that imparts the same distribution in velocity, or Lorentz factor γ = E/mc², to all charged particles. To the extent that particle-specific losses (such as synchrotron radiation) can be neglected at the acceleration stage, all source fluxes have the same energy dependence. For electrons below the anticipated 'electron's knee' at E_e = (m_e/m_p) E_knee ∼ 2 TeV, we expect dF_e^s/dE ∝ E^−β_s, with β_s ∼ 2.2. Confinement effects preserve this equality for ultrarelativistic electrons and protons: their behaviour in a magnetic maze is the same. But, unlike for hadrons, the 'cooling' time of electrons -that are significantly affected by the ambient radiation and magnetic fields- is shorter than their galactic confinement time, τ_gal(E), above a relatively low energy. This implies that the CR electron spectrum is modulated mainly by the ICS, and not by the confinement time.
Electrons lose energy not only by ICS on starlight and the CMB, but also by synchrotron radiation on magnetic fields. All of these processes are essentially the same: scattering off photons, either real or virtual. The energy loss is governed by the rate at which a single electron interacts with the ambient electromagnetic fields, weighted by the corresponding average energy density: P = σ_T c [n_⋆ ε_⋆ + n_0 ε_0 + B²/(8π)]. Let R_p (an inverse time) be the production rate of CR electrons, assumed to be constant (Berezinskii et al. 1990), and let dn_e^s/dE be their source number-density spectrum. The actual density dn_e/dE in an interval dE about E is continuously replenished and depleted by electrons whose energy is being degraded by interactions. This leads to a steady-state situation in which production and losses are in balance; using Eqs. (4) we obtain Eq. (8). For a relatively uniform galactic CR electron density, Eq. (8) also applies to the local electron flux dF_e ≃ (c/4π) dn_e. Substituting the spectrum dn_e^s/dE ∼ E^−β_s into the flux version of Eq. (8) gives Eq. (9). For electrons with E_e < (m_e/m_p) E_knee we deduced that β_s ∼ 2.2. Thus β_s + 1 = 3.2, in agreement with the data: Eq. (3) and Fig. 2. Above the 'electron's knee' at E_e ∼ 2 TeV the spectrum should steepen by Δβ ≃ 0.25, like that of CR hadrons (Dar 1998). The available spectral measurements extend only to E_e ≤ 1.5 TeV.
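A minimal sketch of the steady-state balance behind this one-power steepening (assuming, as in the text, losses dominated by ICS and synchrotron, for which dE/dt ∝ −E², and suppressing all constants) reads:

    \[
    \frac{\partial}{\partial E}\!\left[\dot E\,\frac{dn_e}{dE}\right]
    = R_p\,\frac{dn_e^{\,s}}{dE},
    \qquad
    \dot E \propto -E^{2}
    \;\Longrightarrow\;
    \frac{dn_e}{dE}
    \propto \frac{1}{E^{2}}\int_E^{\infty} E'^{-\beta_s}\,dE'
    \propto E^{-(\beta_s+1)} ,
    \]

i.e. the cooled spectrum is one power steeper than the source spectrum, β_e = β_s + 1 ≈ 3.2 for β_s ≈ 2.2.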
The energy density in the CMB is n_0 ε_0 = 0.24 eV cm^−3, coincidentally similar to that in starlight at our location: n_⋆ ε_⋆ ∼ 0.22 eV cm^−3. If the local CR and magnetic energy densities are in equipartition, B²/(8π) ∼ 1 eV cm^−3, again in the same ballpark. The cooling time of electrons in the ensemble of these fields is given, in Gyr, by Eq. (10).
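A back-of-the-envelope check of the order of magnitude implied by Eq. (10) can be made with the standard Thomson-limit loss rate and the energy densities quoted above; the short Python sketch below is our own rough estimate, not the paper's exact expression.

    # Electron cooling time against ICS + synchrotron, tau = E / (dE/dt),
    # with dE/dt = (4/3) sigma_T c (E / m_e c^2)^2 U and U the sum of the
    # quoted radiation and magnetic-field energy densities.
    SIGMA_T = 6.652e-25          # Thomson cross section, cm^2
    C = 2.998e10                 # speed of light, cm/s
    ME_C2 = 0.511e6              # electron rest energy, eV
    EV2ERG = 1.602e-12
    GYR = 3.156e16               # seconds in a Gyr

    U = (0.24 + 0.22 + 1.0) * EV2ERG   # CMB + starlight + B^2/8pi, erg/cm^3

    def tau_cool_gyr(E_GeV):
        E_eV = E_GeV * 1e9
        dEdt = (4.0 / 3.0) * SIGMA_T * C * (E_eV / ME_C2) ** 2 * U  # erg/s
        return (E_eV * EV2ERG / dEdt) / GYR

    for E in (1, 10, 100):
        print(E, "GeV ->", round(tau_cool_gyr(E), 3), "Gyr")
    # roughly 0.2 Gyr at 1 GeV, falling as 1/E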
The galactic escape time of GeV electrons, which should be similar to that of CR protons, τ_gal(E) ∝ E^−0.5±0.1 (Swordy et al. 1990), has a weaker energy dependence than that of τ_cool. At sufficiently low energy, then, τ_gal < τ_cool, and processes other than Compton or synchrotron cooling (such as Coulomb scattering, ionization losses and bremsstrahlung) become relevant. The slope of Eq. (9) should change as the energy is lowered. The spectrum of Fig. 2 shows such a change, but it occurs at E < 10 GeV, a range in which local modulations would mask the effect.
The scale height of CR electrons
The radio emission of galaxies seen edge-on -interpreted as synchrotron radiation by electrons on their local magnetic field- offers direct observational evidence for CR electrons well above galactic disks (e.g. Duric et al. 1998). For the particularly well observed case of NGC 5755, the exponential scale height of the synchrotron radiation is O(4) kpc. If the CRs and the magnetic field energy are in equilibrium, they should have similar distributions, and the exponential scale height h_e of the electrons ought to be roughly twice that of the synchrotron intensity, which reflects the convolution of the electron and magnetic-field distributions. The inferred value h_e ∼ 8 kpc for NGC 5755 may not be universal for spirals, since h_e is very sensitive to the density and distribution of CR sources, gas and plasma in each particular galaxy. Moreover, the magnetic field may be in equipartition with cosmic rays only where the interstellar plasma is dense enough. It is quite possible for the CR electrons to be confined in a large magnetic halo with a field much smaller than that in the disk. For these reasons we must discuss the observations of our own particular galaxy.
Traditionally, CR electrons and nuclei were assumed to have a distribution that snugly fit that of the visible part of the Galaxy -where their conventional sources lie- implying a scale height above the plane of the disk of O(1) kpc (Broadbend et al. 1989). As the data and their analysis became more elaborate, scale heights more than one order of magnitude larger were discussed. Since electrons lose energy to the ambient radiation close to their sources, which have traditionally been located in the disk, not very well understood CR-reacceleration phenomena have had to be invoked (e.g. Seo & Ptuskin 1994). Even with reacceleration, a conventional distribution of cosmic-ray sources fails to describe the observed GBR.
Over the years, Moskalenko, Strong and their collaborators have developed what is presumably the most elaborate and detailed understanding of the CR, radio and γ observations of our galaxy (Moskalenko & Strong 2000; Strong et al. 1997). A crucial parameter in their models is the scale z_h of the CR distribution orthogonal to the galactic plane, defined as the height above which CRs freely escape, as in a leaky-box model. They conclude that z_h lies between 4 and 12 kpc. The limits are based on the comparison of the 10Be/9Be ratio observed by Ulysses (Connell 1998) with model predictions as a function of z_h, with all other parameters fixed at their adopted values. The dependence of the 10Be/9Be ratio on z_h, shown in Fig. 9 of that work and reproduced here as Fig. 5, is very weak for z_h > 10 kpc. At z_h = 20 kpc, the prediction would be only some 1.3 standard deviations below the Ulysses central value, and even z_h = 40 kpc would be viable: the average of all previous and somewhat less precise observations, compiled in Lukasiak et al. (1994) and shown in Fig. 5a, would be in agreement with z_h = 20 or 40 kpc. For all these reasons and the ones stated in the introduction, we shall not refrain from considering scale heights above the quoted 12 kpc upper limit.
The CMB and SL contributions to the GBR
The spectral index of the GBR, derived in Section 4, is independent of the details of the spatial distribution of starlight. We have argued that the EGRET GBR data support the simple hypothesis of an electron spectral index that is independent of location. The predicted GBR index is then also independent of the magnitude of the electron spectrum as a function of position. In this section we use a simplified model of the electron and starlight distributions to compute the magnitude and angular dependence of the CMB and SL contributions to the GBR.
We adopt h_e = 20 kpc (a value obtained from a rough fit of our results to the angularly-averaged fluence of the GBR) for the Gaussian scale height of the CR electron distribution of our galaxy in the direction perpendicular to the galactic plane. For the distribution in ρ -the radial coordinate orthogonal to the galactic axis- we adopt a Gaussian scale height ρ_e = 35 kpc; the results are quite insensitive to this parameter. The EGRET GBR data are not precise enough to be "invertible", that is, for the actual high-latitude CR-electron distribution (Gaussian, exponential or otherwise) to be disentangled; a fact to be rediscussed anon, in view of our results. The distance of the solar system to the galactic centre is d_⊙ ≃ 8.5 kpc. The factor N_0(θ, φ) in Eq. (6), which describes the angular dependence of the GBR photons due to ICS on the (uniformly distributed) CMB, is given by Eq. (11), where r is the distance along the line of sight.
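The kind of line-of-sight integral involved in N_0 can be illustrated numerically. The sketch below assumes the relative electron density falls off as exp[−(z/h_e)² − (ρ/ρ_e)²]; since the exact form of Eq. (11) is not reproduced in the text, this is only an illustration of how the column grows towards the galactic centre, not a substitute for the paper's expression.

    import numpy as np

    # Relative line-of-sight column through an assumed Gaussian CR-electron
    # halo, for an observer at the solar position (distances in kpc).
    H_E, RHO_E, D_SUN = 20.0, 35.0, 8.5

    def n_e(x, y, z):
        """Relative electron density, normalized to 1 at the galactic centre."""
        return np.exp(-(z / H_E) ** 2 - (x * x + y * y) / RHO_E ** 2)

    def column(b_deg, l_deg, r_max=200.0, n=4000):
        """Trapezoidal integral of the relative density along the line of sight."""
        b, l = np.radians(b_deg), np.radians(l_deg)
        r = np.linspace(0.0, r_max, n)
        x = D_SUN - r * np.cos(b) * np.cos(l)   # l = 0 points to the centre
        y = -r * np.cos(b) * np.sin(l)
        z = r * np.sin(b)
        dens = n_e(x, y, z)
        return float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(r)))

    print(column(30, 0), column(30, 180))   # larger towards the galactic centre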
It is difficult to model in detail the contribution from ICS on starlight (Hunter et al. 1997; Sreekumar et al. 1998). But we are only concerned with this light at high galactic latitudes, since the diffuse GBR of interest to us is that measured by EGRET by masking the galactic plane and centre. We make a coarse estimate by approximating the Galaxy's starlight as that produced by a source at its centre with the galactic luminosity L_⋆ = 2.3 × 10^10 L_⊙ ≃ 5.5 × 10^55 eV s^−1 (Pritchet & van den Bergh 1999). The starlight contribution in Eq. (5) is then of the same form as Eq. (11), with N_0 traded for N_⋆ by the substitution of Eq. (12). For the CMB and starlight contributions to the GBR, averaged over the EGRET unmasked domain, we obtain Eq. (13) by integration of Eqs. (5), (6), (11) and (12). For scale heights h_e and ρ_e similar to the ones adopted (20 and 35 kpc, respectively), the CMB and SL contributions are comparable in magnitude; the first scales approximately linearly with h_e, while the second is rather insensitive to this parameter. The contributions to the GBR from sunlight and external galaxies, discussed in Sections 8 and 9, add corrections of 6% and ∼10% (respectively) to Eq. (13); the total result is shown in Fig. 2. The fitted value of h_e is imprecise: the starlight to CMB ratio is proportional to ε_⋆/ε_0 raised to a very poorly determined power, 0.10 ± 0.05.
We can use our assumed Gaussian distribution of electrons in a halo, with vertical and radial scale heights h_e and ρ_e, to compute the diffuse γ-ray luminosity of our galaxy, which in our model is dominated by ICS on CMB and SL photons. Using Eqs. (5), (6), (11) and (12) we obtain, for the luminosity in γ-rays of energy above E, the expression of Eq. (14), in which u ≡ 1 − h_e²/ρ_e². A future γ-ray telescope, such as GLAST, could possibly see the corresponding glow of Andromeda's halo.
Sunlight contribution to the local GBR
We are only at a distance l_⊙ = 1.5 × 10^13 cm from the sun. This entails a small but non-negligible contribution to the locally-observed GBR, resulting from ICS off photons in the heliosphere. The corresponding photon flux is described by Eq. (7), with the substitution of ε_i by the mean energy ε_⊙ ≈ 1.35 eV of solar photons, and of N_i by N_⊙, the solar-photon column density along the line of sight. Let θ_⊙ be the angle between the line of sight and the direction to the sun; the column density is then given by Eq. (15). For a uniform cos θ_⊙ distribution during the EGRET data taking, the average column density is N_⊙ = π L_⊙/(16 c l_⊙ ε_⊙), resulting in the sunlight-induced GBR flux of Eq. (16). This contribution is roughly 6% of our galaxy's result, Eq. (13). At E_γ > 75 GeV, the spectrum of Eq. (16) should steepen, since ICS should then be described by the Klein-Nishina cross section, and not by its low-energy Thomson limit.
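The average solar-photon column density is a one-line calculation; the snippet below evaluates N_⊙ = π L_⊙/(16 c l_⊙ ε_⊙) with a standard value for the solar luminosity (the quoted 6% of the galactic signal also involves the electron-flux normalization, which is not reproduced here).

    from math import pi

    # Average solar-photon column density along a random line of sight,
    # N_sun = pi * L_sun / (16 * c * l_sun * eps_sun), in photons / cm^2.
    L_SUN = 3.846e33 / 1.602e-12   # solar luminosity, erg/s converted to eV/s
    C = 2.998e10                   # cm/s
    L_ORBIT = 1.5e13               # cm, the Sun-observer distance used above
    EPS_SUN = 1.35                 # eV, mean solar photon energy

    N_sun = pi * L_SUN / (16 * C * L_ORBIT * EPS_SUN)
    print(f"N_sun ~ {N_sun:.2e} photons/cm^2")   # of order 1e21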
Extragalactic contribution to the GBR
To estimate this contribution, some concepts and numbers need to be recalled. Hubble's constant is H_0 = 100 h km s^−1 Mpc^−1, with h ∼ 0.65; Ω_m and Ω_Λ are the matter and vacuum cosmic densities in critical units: Ω ≡ Ω_m + Ω_Λ; y ≡ 1 + z is the redshift factor. In a Friedman model, the time to redshift relation is dy/dt = −H_0 f(y) y, with f(y) ≡ [(1 − Ω) y² + Ω_m y³ + Ω_Λ]^1/2. The luminosity density of the local universe (Ellis 1997) is ρ_L = (2.0 ± 0.4) × 10^8 h L_⊙ Mpc^−3. The combination ρ_L/L_⋆ provides an estimate of the average number density of 'Milky-Way-equivalent' galaxies. If the main sources of CRs are young supernova remnants or gamma-ray bursts, the CR production rate ought to be proportional (e.g. Wijers et al. 1997) to the star formation rate R_SFR[y], recently measured up to redshift z ≃ 4.5 (Steidel et al. 1998).
The energy of CMB photons up-scattered by electrons at 'epoch y' is proportional to T(y) = y T_0 and it is subsequently redshifted by the same factor; hence the spectra from distant galaxies should have the same energy dependence as from our galaxy. The situation for SL photons is more complicated. Young galaxies are bluer than older ones, but this effect is overcompensated by the expansion redshift from a relatively low y onwards. Yet, at the energies observed by EGRET, and for the redshift values of O(1) that dominate the extragalactic contribution, all these blue- and red-shifts simply relocate the photon energy, while roughly maintaining the slope of the spectrum. For the sum of all galaxies, we obtain the estimate of Eq. (17), where dL_γ/dE is to be obtained from the luminosity of a Milky-Way-like galaxy, Eq. (14). For R_SFR[y] we interpolate the summary values of Steidel et al. (1998). In writing Eq. (17) we have ignored the fact that, above E ∼ 10 GeV, absorption by e⁺e⁻ production on the IR-to-UV background becomes relevant (Salamon & Stecker 1998), so that the extragalactic contribution should be quenched.
Detailed comparison with the EGRET data
Our predictions for the magnitude of the GBR and its directional dependence on b and l are shown in Figs. 5 and 6. In Fig. 5 we display separately the contributions from ICS off CMB and SL photons in our galaxy, as well as the uniformly distributed sunlight and extragalactic components. In Fig. 7 we compare the total GBR flux -obtained by summing Eqs. (7), (16) and (13)- with the EGRET data. Our result is a satisfactory fit to the observed magnitude and angular trend of the GBR (χ̄² = 0.98), a vast improvement over the result for a constant (extragalactic) ansatz, for which χ̄² = 2.6. Although this agreement would be more meaningful had we used a more realistic model of starlight, a more careful treatment may be premature, for the EGRET error bars are large enough to accommodate considerable variations in the input modelling. In a previous analysis, for instance, we obtained a similarly good fit with an assumed constant-density, spherical CR-electron halo of radius 25 kpc, for which the results have the advantage of being simple analytical functions.
We have neglected various putative extragalactic contributions to the GBR. Blazars, because of their beamed emission, may not be very relevant. But CR electrons injected directly into intergalactic space by active galactic nuclei, radio galaxies or gamma ray bursters, may give rise to a contribution of comparable magnitude and shape to that of the CR electrons in external galaxies. These or other potential sources of GBR photons may imply that our parameters h e and ρ e have been overestimated. But this effect cannot be very large, given our success at describing the non-trivial angular dependence of the EGRET data.
Conclusions and predictions
We have presented a simple understanding of the relation between the spectral indices of cosmic-ray protons, electrons and the GBR. Accepting the possibility that the CR-electron distribution in our galaxy may have a scale height larger than conventionally believed, we have also argued that the bulk of the GBR could originate in our own galaxy. Our modelling is extremely simplistic, but quite successful.
The predictions specific to our scenario are:
• The GBR should reflect the asymmetry of our off-centre position in the Galaxy.
• The halo of Andromeda should shine in gamma rays above a few MeV, with a luminosity comparable to that in Eq. (14). Likewise, very nearby starburst galaxies, such as M82, and radio galaxies with large CR production rates, such as Cygnus A, may be visible in gamma rays.
• If the CR-proton and electron acceleration mechanisms are the same, the existence of a knee in the observed proton spectrum translates into a related result for the power index β_e of the electron spectrum, which should steepen above E ≈ 1.6 TeV by Δβ ∼ 1/4.
• The GBR spectrum should not have the sharp cutoff, above E ∼ 100 GeV, expected (Salamon & Stecker 1998) for cosmological sources. But it should nonetheless steepen around 10-100 GeV, because of the anticipated "knee" in the electron spectrum and of the energy-dependence of the Klein-Nishina cross section.
These features of our scenario should be testable when the next generation of cosmic-ray and γ-ray satellites (AMS-02 and GLAST) are operational, hopefully by 2005. In spite of their maturity, cosmic-ray physics and γ-ray astrophysics are still young, and thriving.
Figure 1: Comparison between the spectrum of the GBR, measured by EGRET (Sreekumar et al. 1998), and the prediction for ICS of starlight and the CMB by CR electrons. The slope is our central prediction; the normalization is the one obtained for h_e = 20 kpc, ρ_e = 35 kpc.
Figure 2: The CR electron flux, as measured by Prince 1979 [crosses]; Nishimura et al. 1980 [squares]; Tang 1984 [circles]; Golden et al. 1984 [triangles]; Evenson & Meyers 1984; Golden et al. 1994; Ferrando et al. 1996; Barwick et al. 1998 [stars]. The slope is the prediction; the magnitude is normalized to the data.
Figure 4: EGRET data, organized as in Fig. 3, for the dependence on θ of the GBR intensity above 100 MeV.
Figure 5: 10Be/9Be ratio for the diffusive reacceleration models discussed in the text. (a) As a function of energy for z_h = 1, 2, 3, 4, 5, 10, 15 and 20 kpc. (b) As a function of z_h at 525 MeV/nucleon, the mean interstellar value for the Ulysses data, whose 1σ limits are the dashed lines. The data points in (a) are from Lukasiak et al. 1994 (square, Voyagers 1, 2; open circle, IMP 7/8; triangle, ISEE 3) and Connell 1998 (filled circle, Ulysses).
"year": 2000,
"sha1": "324fd00fcc70762ee49bb52cbf3e7a1f293e27f1",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/323/2/391/4074331/323-2-391.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0a14b027bdcb6011de952b10543eb1a3aa3c7f3f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
THE SPECIFICITY OF THE INVESTMENT IN LAND AS IN REAL ESTATE
The main purpose of this article is to identify the motives of investing in land and arable land and to determine the choices of such investment. The empirical research, based on the method of expert evaluation, fills the gap in the scientific literature on investment in land. The expert evaluation has made it possible to develop a profile of a land investor, identify the determinants of the value of land and arable land, and clarify the motives of investing in land in a small open economy (the trends in one small economy may reflect similar trends in other small open economies, for instance, Latvia or Estonia). The novelty of the article lies in the disclosure of the general and Lithuania-specific land investment risks and the assessment of the impact of the main land value determinants. The practical implications of the article lie in the submission of guidelines to real estate policy and practice makers, investors, real estate developers, buyers and other parties concerned that want to get a better understanding of the expediency or inexpediency of investment in land.
Introduction
Investment in real estate is a growing investment area, selected by investors with the aim to diversify their investment portfolios and protect them from unexpected and unwanted value fluctuations. As real estate plays one of the most important roles in the general economy, the methods and mechanisms of real estate funding are thoroughly analysed at the European and global levels. The recent financial recession forced a search for ways to reduce welfare costs: states cut down on their investment in public infrastructures (roads, renovation of buildings, etc.). Real estate markets are increasingly considered to be beneficial and able to flexibly and effectively meet consumer needs, thereby promoting the recovery of national economies.
The current trends of land grabbing call for the need to comprehensively research land investment expediency and trends. Although the scientific literature is rich in studies focusing on the issues of real estate development (the development of international and local real estate markets was analysed by Wyman, Seldin, and Worzala (2011), Tiwari and White (2014), Hin, Ho, and Addae-Dapaah (2014), Faulkner (2016), Dong and Sing (2017), etc.; the dynamics of the investment in real estate were studied by Gholipour Fereidouni and Masron (2013), Patterson (2013), French (2015), etc.; the impact of the global financial crisis on real estate prices was researched by van der Heijden, Dol, and Oxley (2011), Hegedüs, Lux, and Sunega (2011), Scanlon and Elsinga (2014), etc.; real estate funding forms were addressed by Kemp (2007), Griggs and Kemp (2012), Squires et al. (2016), etc.; the impact of tax policies on real estate possession was analysed by Oxley and Haffner (2010), Figari et al. (2012), etc.), investment in land has thus far earned insufficient scientific attention, especially at the national level.
The novelty of this article lies in the comprehensive analysis of the opportunities to invest in land in the global and local contexts, identification of the theoretical and practical motives to invest in land, identification of the general risks and the risks faced by Lithuanian land investors, and assessment of the factors that have the most significant impact on the value of land. Previous land studies mainly focused on the logic of the general investment in land (Knuth, 2015), investment in agricultural land (Gunnoe & Gellert, 2011; Gunnoe, 2014), and the role of financial institutions in land funding (Bergdolt & Mittal, 2012; Buxton, Campanale, & Cotula, 2012). Nevertheless, the investment in land as in a specific type of real estate is hardly analysed. This gap in the scientific literature raises the following scientific problem: is investment in land expedient and what are the opportunities to invest in land?
The purpose of this article is to identify the motives of investing in land and arable land and determine the choices of such investment. For fulfilment of the defined purpose, the following objectives were raised: 1) to review the theoretical peculiarities of land investment; 2) to select and introduce the methodology of the research; 3) to empirically assess land investment problems and opportunities.
The methods of the research include systematic and comparative literature analysis, and expert evaluation.
The practical implications of the article lie in submission of the guidelines to real estate policy and practice makers, investors, real estate developers, buyers and other parties concerned that want to get a better understanding of the expediency or inexpediency of the investment in land.
The limitations of the findings are linked to the lack of experts who can objectively assess land-related investment decisions, as well as to the narrowness of the mathematical-statistical estimations, as the statistical databases available for data extraction contained the data on the prices of arable land (EUR/ha) only for the period from 2011 onwards.
Review of the theoretical peculiarities of land investment
The recent changes in land usage have earned much attention in the discussions of the global environment and economics. Although real estate is described as immovable property, such as land and its permanent attachments (e.g. buildings), land differs from other kinds of real estate because it can serve as an independent object of investment even without related infrastructures, while other kinds of real estate (e.g., commercial and trade premises, housing) are always related to land.
The discussions about land often cover the issue of what the term "vacant land" actually refers to. It should be noted that vacant land is not the same as raw land as the latter refers to the land which has not been affected by human activities. Vacant land, on the contrary, is understood as raw land improved by human activities, but currently vacant (i.e. vacant land can possess different communications, utility infrastructures, roads, etc.) (Grant, 2016). Land investment commonly refers to the investment in vacant rather than raw land. In this case, the concept of land covers not only vacant land plots for construction purposes, but also arable land (Williams, 2013a).
Land possesses another economic characteristic, which distinguishes it from the other kinds of real estate: land usage is of a derivative nature, i.e. land can be used not only as an asset, but also as a capital necessary for a particular human economic activity (for instance, agriculture, production, consumption, investment, recreation, etc.) because any human activity requires a geographical location. Economic activities are projected to be carried out in a particular geographical area, in which real estate may gain different forms subject to the planned or already performed economic functions (e.g., infrastructure, agriculture, green areas in cities, etc.).
As land is a scarce resource, its supply is limited. With reference to the World Bank (2018), over the period 1961 to 2015, the share of arable land (in hectares) per capita decreased from 0.371 to 0.194 at the global scale. While assessing the situation in Lithuania, it should be noted that till 1992, i.e. till the restoration of the country's independence, statistical data on land or its usage were not accumulated. During the period 1992-2015, the share of land per capita in the country decreased only insignificantly: from 0.78 to 0.748 hectares. In spite of the fact that the situation in Lithuania is comparatively stable, the global statistics show that the percentage change in the number of population is much faster than the percentage change in land availability. As a result, the further growth in the number of population (with reference to the estimates of the United Nations, the world's population will number nearly 10 billion in 2083 (Rosenberg, 2017)) is going to determine even greater scarcity of land resources and land price surges in real estate markets. The global land price growth tendencies may also cause an increase in land prices in the Lithuanian real estate market.
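The contrast between the two series can be made explicit with a two-line calculation; the snippet below (Python) uses only the figures quoted above.

    # Percentage change in land per capita, from the figures quoted in the text.
    def pct_change(start, end):
        return (end - start) / start * 100

    print(round(pct_change(0.371, 0.194), 1))  # global arable land, 1961-2015: about -47.7%
    print(round(pct_change(0.780, 0.748), 1))  # Lithuania, 1992-2015: about -4.1%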
The changes in land usage are always determined by the trends of economic development. The historical experience contains the examples of the progressive raw land transformation into agricultural, urbanistic or industrial territories. The transformations of this type are conditioned by such socio-economic factors, as the growth in the number of population, food production, income, wood production and recognition of land ownership. Knuth (2015) describes the interest in land as land ownership-related rights, responsibilities and restrictions. Land cadastres usually hold such data as geometrical descriptions of land plots, ownership interests, control, land plot value and possible improvements (Enemark, 2001).
The main feature which distinguishes land from other types of real estate is the independence of land as an investment object: land is not dependent on related objects (e.g. buildings), while the latter are always dependent on land. In addition, land is characterised by the derivative nature of its usage, when the demand for land is determined by the need of a geographic area for human economic activities and by land's scarcity. Land value is determined by the real economic and physical usage of land and land-related property. The opportunities to use land in the future (which depend on land usage policies and planning regulations) also affect the value of land. Efficiently functioning land management and land value systems help to create an effective land market, while efficient land usage and land development systems contribute to effective land usage management. As the demand for land as a capital asset and natural resource is not going to decrease in the future, land investment and land investment funding are becoming increasingly topical issues nowadays.
Investment in land has historically been the basis of asset accumulation for large companies, farmers and other well-off people. It is treated as a long-term investment. When land prices are rising, land investment is said to be the safest and most reasonable way to invest free money (Williams, 2013b). Land investment can also be profitable when land supply is limited. However, the real value of land investment depends on investment-related risks, which means that land investment can turn out to be profitable when attractive business opportunities are envisaged, when an investor can earn income from rent, when land can be pledged for a loan, when land ownership can help to reduce the amount of taxable income or when an investor can earn profits from land resale. Even vacant land plots can generate sufficient cash flows. The analysis of the scientific literature has made it possible to identify the main land investment motives (see Table 1).
The data in Table 1 show that the motives to invest in land are both financial and non-financial. Apart from the duty to pay land and land-related taxes, a land owner hardly bears any other costs, unlike, for instance, an owner of an office building, who has to take continuous care of the technical and representative state of the building, indoor and outdoor lighting, arrangement of parking lots, the greenery around the building, etc. (Williams, 2013b). What is more, land as an investment object does not require any improvements (Grant, 2016). As noted by Knuth (2015), Grant (2016) and others, the need for initial capital for land investment is comparatively low, in particular when investment is made through modern funding mechanisms. Thus, an investor may use personal rather than borrowed funds. The land acquired by employing personal funds becomes an inexpensive long-term investment as the investor does not pay any interest on borrowed funds (Williams, 2013b).
Economies are sensitive to the cyclical changes of upturns and recessions. During the periods of an economic upturn, the population's income in countries and regions is inclined to grow, which, in turn, leads to higher overall demand. Under these conditions, higher demand for commercial, industrial and other types of property determines the growth in the demand for land and a higher land value. Hence, it is financially beneficial to buy land during periods of economic recession and sell during periods of economic upturn (Grant, 2016). Predictability of the value of land at different stages of an economic cycle distinguishes land from other types of financial investment (e.g. stock, precious metals, etc.), the value of which is often difficult to predict. Land investment can be distributed according to the expected growth in particular industries. For instance, anticipation of urban development may prompt investment in urban territories, while expectations of agricultural development may prompt investment in arable land. An investor may also choose to invest in forests, water bodies, etc. Land located near well-developed regions costs more than land located near under-developed or non-developed regions. The value of land for housing and commercial purposes also differs. Hence, investment in land can be matched with the trends of sectoral development (Knuth, 2015).
While analysing the financial motives of land investment, Williams (2013b) notes that land investors have a strong motivation to sell land as they do not have any close emotional connection with it. The author (Williams, 2013b) also states that the people who are strongly emotionally connected with land (e.g. live on it) never find "the right time" to sell it and only occasionally look for ways to optimize land usage, i.e. they are passive rather than active investors. Williams (2013b) and Grant (2016) highlight less intensive competition in the land market in comparison to the competition in the housing or commercial premises markets, i.e. the supply of housing or commercial premises is higher than the supply of land plots. As a result, investment in land is considered to be safer than investment in other types of real estate. While analysing the risk of land investment, Williams (2013a) notes that an investor must understand what the land can be used for (e.g. construction of residential buildings, industrial complexes, infrastructure, etc.) as only in this case can the investment be considered expedient. The expedience of land investment is often characterised by the land's topography (i.e. exposure to landslides, avalanches, floods, etc.) as it can severely restrict the building of infrastructures on a land plot and limit the scale of land usage ("zoning restrictions"). According to Eberlin (2017), the effects of "zoning restrictions" are similar to the effects of political, legal and ecological risks. Grant (2016) highlights the importance of agent risk. The author (Grant, 2016) speaks against acquisition of land from agents who have owned a land plot for a comparatively short period of time, especially if the agent is a real estate developer or construction company. According to Grant (2016), the fact that an agent wants to sell land quickly may actually mean that its value is low, lower than expected, that land usage is likely to be restricted in the nearest future, etc. Similar risks arise when an investor buys land for an unreasonably low price.
Land investment is always linked to particular financial risks, i.e. the investment in land can cause expenditure and losses. As land taxes occupy the largest share of land maintenance costs, constantly changing real estate taxation policies and increasing real estate taxes for domestic and foreign agents are the biggest sources of concern for land investors (Eberlin, 2017). The other important types of risk cover the risk of unfulfilled expectations, when an investor assesses the situation with consideration of past rather than future market trends (which, in turn, creates preconditions for real estate bubbles), and land overholding risk, when an investor fails to sell land on the most favourable terms (Williams, 2013a).
The authors of this article support the opinion that land is a risky but attractive investment. One of its advantages is that the quantity of this asset remains stable (i.e., land does not multiply). Land always has its value, which allows the return on this investment to be estimated. Investors can choose the investment in land at both the local and global levels. The main reasons that determine the investment in land at one of the above-mentioned levels include the conditions in the market, the differences in the return on investment, legal frameworks (presence or absence of a land-investment-favourable environment, etc.) and an expected rise in prices for speculative purposes. The prices of land plots can also be influenced by foreign investors who may increase the demand for land by buying land plots in a foreign country. At both the local and global levels, liberalization of the trade in land is important not only because of the economic benefits gained by land owners and the rural population, but also due to the necessity to reform the current regulations which violate the principle of the respect for private property established in the Constitution. The reluctance to reform the restrictive regulations speaks about the distrust in the ability of citizens to manage their properties. The investment in arable land could be promoted by employing advanced technologies. In case no new participants enter the market, the current entities will hardly be able to meet the production quotas and requirements of the EU, which will lead to the reduction of EU funding.
Summarising, land investment does not always generate constant benefits and does not always pay off in the short or medium term, but it is likely to pay off in the long term. The main financial motives of land investment include low maintenance costs, low initial capital, predictable value changes at different stages of an economic cycle, sectoral distribution of investment and the opportunities of portfolio diversification. The risks of land investment are mainly linked to the land's topography, "zoning restrictions", possible changes in land usage and instability of the political, legal and ecological environment. The increase in land maintenance costs (in particular, the growth of land and real estate taxes), unfulfilled expectations (real estate bubbles), land overholding, cash flow changes and diseconomies of scale are the main types of land investment risk.
Research methodology
In order to implement the purpose of the research, the method of expert evaluation (interviews and a questionnaire survey) was employed. The experts of the real estate market were represented by:
- Marius Dubnikovas, who currently holds the position of Business Development Manager at "Compensa Life Vienna Insurance Group SE", with more than 15 years of professional and practical experience in the areas of real estate valuation and finance. He started his career as the President of the Lithuanian Financial Brokers Association and subsequently held the position of Client Investment Manager at "Finasta Ltd.". The expert is also the Chairman of the ...;
- Dr. Vytautas Azbainis, who has gained his experience in drawing up real estate investment projects and land plot detailed plans during 13 years of professional career. Since 2005 he has held the position of director of "Vilniaus Namas Ltd.". In 2014, he defended the dissertation "Real Estate Market Cycle Management and Modelling";
- Romualdas Paulauskas, who has accumulated more than 15 years of experience in the real estate sector. Currently, he is the Head of "OBER-HAUS Real Estate Ltd.", Panevėžys Department. His professional insights are published in popular Lithuanian newspapers "Verslo žinios", "Lietuvos rytas", "Vakarų ekspresas", etc.;
- Emilijus Gedvilas, who is a broker at "Akorus Real Estate". The expert has been purposefully working with land investment, purchase and sales of real estate, and the development of real estate objects for about 4 years.
The aim of expert evaluations is to obtain data from a person who is considered an experienced professional in a particular area. With reference to Makridakis, Wheelwright, and Hyndman (1998), in accordance with the objectives of a study and with consideration of the level of experts' competence, expert evaluation should involve from 10 to 100 experts. Meanwhile, according to Augustinaitis et al. (2009), in order to maintain the accuracy and reliability of expert evaluation, at least 5 experts should be involved (this recommendation was based on the findings of empirical research). In this study, the focus falls on the competence rather than the number of the experts involved. With reference to the recommendations of Augustinaitis et al. (2009), 5 experts were involved.
The logical sequence of the research was as follows:
1. Expert interviews with Marius Dubnikovas and Saulius Vagonis;
2. Questionnaire survey.
The expert interviews were employed with the aim to identify the main land investment motives and explain the trends of the land market. During the interviews, the experts were asked the following open-type questions:
- What are the main motives to invest in land as in real estate?
- What land price tendencies are predicted for the future and what reasons will determine the changes in the land market?
The questionnaire survey was employed with the aim to define land investment opportunities by focusing on land price determinants in Lithuania. On the basis of the survey results, insights into land investment prospects in Lithuania were made.
The questionnaire, submitted to the experts, consisted of 3 parts. The first part was intended for creation of a profile of a land investor. The experts were provided with the questions concerning the characteristics of a subject, the level of risk tolerance and the channels of land investment (i.e. direct or indirect (through intermediaries) investment).
The second part was devoted to the identification of land value determinants. The experts were asked to rank the general (interdependence of financial markets, economic, social, legal, political, demographic, institutional and construction factors) and microeconomic (an object's characteristics, environmental factors, the factors of an investor's behaviour) land price determinants by their importance on the Likert scale. The third part of the questionnaire was developed for the assessment of land investment tendencies in Lithuania. The systematized content of the questionnaire has been presented in Table 2.
The data were processed with SPSS and "Microsoft Excel" software. Reliability of the expert evaluations depends on the experts' knowledge and number. Assuming that the experts are sufficiently precise, it can be stated that a larger number of experts involved increases the reliability of the expertise. The degree of an expert's competence is evaluated by employing the coefficient of competence. Special attention must be drawn to the interpretation of the values of the Cronbach alpha coefficient, which indicates whether a questionnaire reflects the subject matter with sufficient accuracy. Some researchers, for instance, Nunnally and Bernstein (1994), argue that the Cronbach alpha coefficient must not be lower than 0.7, while other researchers, for instance, Malhotra and Birks (2003), state that the lowest critical limit of a questionnaire's reliability is 0.6. Hence, the choice of the lowest critical limit is a subjective matter which depends on the nature and qualitative aspects of a particular study. The aim of this part of the analysis is that identification of the most influential land value determinants may provide the opportunity to recognise signs indicating plausible land price changes in the future.
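Since the reliability discussion above turns on the Cronbach alpha coefficient, a minimal computational sketch is given below; the five-expert rating matrix is hypothetical and serves only to illustrate the formula, not to reproduce the study's data.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a (respondents x items) rating matrix."""
        r = np.asarray(ratings, dtype=float)
        k = r.shape[1]                            # number of items
        item_vars = r.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = r.sum(axis=1).var(ddof=1)     # variance of respondents' totals
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical example: 5 experts (rows) scoring 6 questionnaire items
    # (columns) on a 1-5 Likert scale.
    ratings = [[4, 5, 4, 4, 5, 4],
               [3, 4, 3, 3, 4, 3],
               [5, 5, 5, 4, 5, 5],
               [4, 4, 4, 4, 4, 4],
               [2, 3, 2, 2, 3, 2]]
    print(round(cronbach_alpha(ratings), 2))   # values above ~0.7 are usually
                                               # taken to indicate good reliability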
The results of the empirical research
In the first stage of the research, the experts Marius Dubnikovas and Saulius Vagonis were interviewed. Marius Dubnikovas submitted the following answer to the question "What are the main motives to invest in land as in real estate?":
1. Price increase in the land market. Land price trends in Western countries propose that the value of land in Lithuania should also increase. The period 2016-2017 saw growing prices of land plots in particular segments and locations. Leaning on the theory of expectations (i.e. on conviction that land prices are going to increase), real estate developers, investors and speculators cause land supply shrinkage, which, in turn, leads to a notable price growth and formation of positive expectations;
2. The intention to earn from the development of real estate. As housing (in particular, apartment) prices are growing up, the segment of individual houses and cottages (which due to high supply has hardly ever captured any price growth) is becoming increasingly attractive. Earning higher income, the population can afford to buy or build a higher-class housing, which, in turn, spotlights the segment of land plots (home ownerships);
3. As an investment object, land is a hedge against inflation because the value of arable land has historically been growing faster than inflation rate. Thus, arable land is treated as an effective hedge against inflation and a measure to preserve the value of capital.
The main motive of land investment is speculative, i.e. land is acquired with the aim to develop real estate projects/objects and earn from resales or price increase. The investment in arable land is basically made with the aim to rent it to farmers or build infrastructures for the development of particular businesses.
According to Saulius Vagonis, land investment motives depend on an investor's aims. The first motive is linked to the expectations of land price growth in the future. An investor often invests in land because land plot prices (especially in suburbs and the countryside) are lower than housing or commercial premise prices. Another motive is passive investment: land does not require any regular maintenance, while many other types of real estate need it. If investors seek a stable return, they prefer investment in arable land, which ensures stable cash flows from rent. For earning active income, investors choose land as a construction element, which allows them to earn from the development of real estate projects.
Next, the experts submitted their answers to the question "What land price tendencies are predicted for the future and what reasons will determine the changes in the land market?" As it was noted by Marius Dubnikovas, the future should see an increase in land prices, although land investment funding to a large extent depends on basic interest rates, the changes in which form land demand trends.
Saulius Vagonis expressed the opinion that the prices of arable land will largely depend on the EU support policies, while non-arable land plot prices will be determined by the general economic situation and the trends of urban development.
In the second stage of the research, the experts were asked to complete the questionnaire. While interpreting the results of the expert evaluation, only the concepts and factors with mean ranks equal to or exceeding 3.5 were considered significant. The value of Cronbach alpha was equal to 0.98, which confirmed that the questionnaire reflects the researched dimension with appropriate accuracy.
The results of the expert evaluation allowed us to form a profile of a land investor: a typical land investor is a low-risk-assuming individual, business enterprise or household, commonly investing without intermediaries.
The general determinants of the value of land as of an investment object have been systematised in Table 3 (the determinants with mean ranks from 4 to 5 were considered significant, from 3.5 to 3.99 -less significant, equal to and lower than 3.4 -insignificant).
The data in Table 3 show that the value of land as of an investment object is mainly affected by economic and political determinants. With reference to the data of the Bank of Lithuania (2017), the inflation rate in Lithuania in 2017 amounted to 3.7%. In 2018, it is predicted to decrease to 2.6%. The growth in the wage level (+6.5% in 2017, and 5.7% in 2018) exceeds the increase in labour productivity. As a result, increasing labour costs have a magnifying effect on the price rate. Higher income of the population also puts pressure on prices due to the growth of domestic demand. Over the period from 2016 to 2017, the country's real GDP increased by 3.3%; in 2018, it is predicted to increase by 2.8%. The country's main macroeconomic indicators show that Lithuania is undergoing the period of economic growth. With reference to the Bank of Lithuania (2017), the growth will continue in 2018, but will slow down in 2019 due to the impact of such risks as the US geopolitical conflicts, Chinese credit cycle changes and unsustainable price rates in some global financial and asset markets.
The adoption of the Directive on Credit Agreements for Consumers Relating to Residential Immovable Property (or Housing Credit Directive) (2014) has established equal conditions of competition for bank and non-bank institutions. The newly-issued (as of July 1, 2017) Republic of Lithuania Law on Real Estate Related Credit has also affected the behaviour of real estate investors.
Interdependence of financial markets (globalisation and innovativeness, which manifest through the development of financial innovations and technologies), legal factors (legal regulation of property, regulation of the transfer of property rights, real estate taxation), the volume of mortgage loans, migration rate and governmental stability in the country are attributable to the group of less significant determinants. According to the experts, the largest part of institutional, demographic and construction sector determinants are insignificant, which suggests that the value of land is mainly determined by economic, political and legal factors and the interdependence of financial markets.
In order to verify the links between the value of arable land and such strongly correlated determinants as migration rate, unemployment rate and demographic aging coefficient (the changes in the population's age characterised by an increase in the number of elderly people or a decrease in the number of young people) or wage rate and at-risk-of-poverty rate, we estimated Pearson's correlation coefficients and developed the equations of the multiple regression for different Lithuanian districts for the period from 2011 to 2016 (see Appendix). The choice of the period under consideration was determined by the availability of the statistical data for Lithuanian districts. It should be noted that the data on the at-risk-of-poverty rate were available only for age groups, cities/villages and the division of the capital and central/western regions of the country, but unavailable for particular districts, which caused limitations of the research.
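As an illustration of the kind of estimation described above, the sketch below computes Pearson's correlation coefficients and an ordinary-least-squares multiple regression for one hypothetical district; the yearly figures are invented placeholders, not the district data used in the study.

```python
import numpy as np

# Hypothetical yearly observations for one district, 2011-2016:
# wage (EUR), unemployment rate (%), demographic aging coefficient, arable land price (EUR/ha)
wage  = np.array([600, 640, 680, 710, 750, 790], dtype=float)
unemp = np.array([13.2, 11.8, 10.9, 9.8, 8.9, 8.1])
aging = np.array([118, 120, 123, 125, 128, 130], dtype=float)
price = np.array([1450, 1620, 1900, 2150, 2400, 2790], dtype=float)

# Pearson correlation of each determinant with the arable land price
for name, x in [("wage", wage), ("unemployment", unemp), ("aging", aging)]:
    r = np.corrcoef(x, price)[0, 1]
    print(f"r(price, {name}) = {r:.2f}")

# Multiple regression price = b0 + b1*wage + b2*unemp + b3*aging (ordinary least squares)
X = np.column_stack([np.ones_like(wage), wage, unemp, aging])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print("OLS coefficients (b0, wage, unemployment, aging):", np.round(coef, 3))
```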
The estimations have revealed that wage growth led to an increase in the value of arable land in all Lithuanian districts over the period under consideration. Unemployment rate negatively affected the prices of arable land in Kaunas, Marijampolė, Panevėžys, Šiauliai, Utena and Vilnius districts (6 out of 10 districts), while the values of the demographic aging coefficient positively correlated to the prices of arable land in all Lithuanian districts apart from Vilnius district. It can be concluded that the mathematical estimations confirmed the results of the expert evaluation stipulating that wage rate has a significant impact on the prices of arable land, but differed from the experts' opinion on the impact of unemployment rate and demographic aging coefficient. In addition, the experts assessed the general situation in the country without consideration of the conditions in particular districts, while the mathematical estimations did consider the districts because the prices of arable land may significantly vary depending on a district (for instance, as of 2016, the price of arable land amounted to 2787 Eur/ha in Vilnius district, 1835.39 Eur/ ha in Utenos district, and so forth).
Economic factors are extremely important as they determine not only the value of land, but also the cycles of the entire real estate market. They indicate where and when investors may choose the most favourable options of investment. Land in developing economies can be a cheap and attractive investment, but higher profits earned from real estate transactions require longer terms. In growing economies, investors are likely to afford more expensive real estate objects and land plots. Political factors have a significant impact on the value of land when the governments are changing, the political situation in the country is not stable, the legal framework is confusing and land investors are charged unreasonably high taxes.
The results of the expert evaluation have also revealed that the most influential microeconomic determinants include land plot location, market prices of other real estate objects, expectations of future cash flows, expectations of real estate price changes and an investor's rationality/irrationality (with mean ranks equal to 5). The value of land is slightly less affected by land plot size, financial expediency of the investment in a land plot, the level of regional economic development, the impact of an investor's environment (with mean ranks equal to 4.8), the factors of possibly best land usage (land's suitability for agricultural activities, attractiveness for construction, etc.) (with mean rank equal to 4.6), present decision-making by following past trends (with mean rank equal to 4.4), real estate rental rates, and correspondence between an object's characteristics and an investor's taste and needs (with mean ranks equal to 4).
Such determinants as land quality, maintenance costs and neighboring natural environment (with mean ranks equal to 3.6) were recognized as less significant, while the current condition of a land plot and pollution (with mean ranks equal to 3.2) were recognized as insignificant. The results of the expert evaluation lead to the conclusion that microenvironmental determinants affect the value of land more significantly than macroenvironmental determinants because the vast majority of microenvironmental determinants (except the current condition of a land plot and pollution) were recognized as significant.
The authors of this article are of the opinion that classification of land into the categories of land and arable land is one of the factors that determines the differences in the price and popularity of these two kinds of land among investors. The prices of arable land largely depend on a region (two opposites in Lithuania in this regard are Aukštaitija region and Žemaitija region), fertility as well as EU subsidies for land owners and renters. In case arable land is located in an infertile region which, however, is characterised by a rich landscape, the land can be included into the list of tourism-favourable or heritage territories. In the latter case, changing of the purpose of land may provide more opportunities for investors to earn the return on their investment. The value of commercial and residential land plots is significantly affected by their geographical location, landscape, population, employment rate, municipal policies and infrastructural development.
In Lithuania, arable land is available to foreign investors in accordance with certain legal provisions. Over the period of the last ten years, this land has been attractive to Scandinavian investors -a part of them have exploited an opportunity to acquire land plots for the establishment of businesses, factories, farms, etc. As previously mentioned, the prices of arable land partly depend on the EU subsidisation which causes the prices to increase or decrease.
The main differences and opportunities to invest in arable land in local and global context depend on the conditions of institutional regulation in every state. As due to the CAP policies, the article is more oriented to the context of the EU states (Lithuania is a member of the EU), we will present the peculiarities of the investment in arable land in the EU. The presumption that land transactions (purchase-sale) and well-functioning land market play significant roles in economic development is supported by the following arguments: first, land transactions provide the access to land to the most efficient farmers who currently have less land than they would need to; second, land transactions allow land exchange and so contribute to the development of the non-agricultural labour market; third, they facilitate the use of land as collateral when accessing credit markets (Ciaian, Kancs, Swinnen, Herck, & Vranken, 2012). The investment in arable land in the EU is commonly made with a view to renting the land to farmers.
In a local context, there may exist quantitative regulations. In most EU member states, land transactions are comparatively free and unrestricted for either natural or juridical persons. Nevertheless, some states, for instance, France, Germany or Sweden, require the approval of governmental institutions, while Lithuania and Hungary have such restrictions as possession of the maximum quantities of land allowed. In France, SAFER (the Sociétés d'Aménagement Foncier et d'Etablissement Rural) monitors land transactions and prohibits arable land purchases-sales for speculative purposes. In Germany, the sales of an arable land plot larger than the minimum setpoint have to be approved by the Genehmigungsbehörde. In addition, in the cases of land consolidation, a neighbour-farmer has the right of land purchase priority against an external purchaser. In Sweden, land can be acquired without meeting such standards as education or previous experience in the agricultural sector. For acquisition of land plots in less populated areas, potential land buyers need to submit special permissions with indication of their education, previous experience in land management or, in rare cases, the intention to live on the land to be acquired. In most countries, renters have the right of priority to acquire the arable land which is the object of the rent (Germany, Belgium, Italy, France).
When assessing the situation in a global context, it should be noted that there are no restrictions for foreigners to acquire arable land in the old EU member states (Belgium, Germany, Ireland, Greece, Spain, France, Italy, the Netherlands, Finland, Sweden and the UK), unless the plots of interest are situated in strategically sensitive areas. For instance, in Greece, foreigners cannot own property rights to the land situated in border areas without a special pre-approval of the Ministry of Defence. However, in the new EU member states (Bulgaria, Czech Republic, Estonia, Latvia, Lithuania, Romania, Slovakia) foreigners could not acquire arable land during the seven-year transition period, and in Poland during the twelve-year transition period. In the above-mentioned new EU member states, foreigners to date have a special legal status and must comply with particular legal norms to acquire arable land.
The new EU member states do not impose any land price regulations, while the governments of the old EU member states impose price regulations on agricultural land markets.
Taxation is another factor that may significantly affect a decision to invest in land, as taxes have a direct impact on land demand and supply. The following two types of taxes are currently levied on land in the EU: 1) land transaction taxes (capital gains tax for sales and registration tax for purchases), and 2) usage (real estate) tax. Overall, land transaction taxes are heterogeneous across the member states, ranging from 1% for low-value land in the UK to 18% for high-value land in Italy. Usage (real estate) taxes are also heterogeneous across the member states, ranging from a tax rate of 0% on farmland to over 15% in some of the southern European countries. In Finland, Greece, Ireland, the Netherlands, Sweden and the UK, there is no usage tax on agricultural land. Looking from the perspective of the investment in non-arable land, we should state that the basic motive of the investment in non-arable land in both a global and a local context is the development of the real estate infrastructure (i.e. striving to earn higher profits in the future). Land sales and land keeping taxes are different. For instance, in Lithuania, a 15% income tax rate is imposed on land sales, while land keeping taxes range from 0.1% to 4%. Tax rates (e.g. VAT, income tax rates) depend on the type of a land transaction (inheritance, gift, acquisition) and the purchase-sales price difference.
Conclusions
The results of the expert interviews and expert evaluation have disclosed that land is considered to be a safe investment (an asset which retains its value). Land price rise is stable, and over the period under research it exceeded the rise of other asset prices. Land investors in Lithuania are driven by the following motives: 1) speculative aims to earn a difference from the purchase price; 2) the aims to earn from the rent of arable land; 3) the aims to protect the investment from inflation.
The value of arable land in Lithuania is undoubtedly lower than the EU average. The soaring global population is consuming increasingly larger quantities of agricultural production, which, bearing in mind the fact that land is a scarce resource and cannot be multiplied, allows us to expect further land price growth.
The results of the expert evaluation have confirmed the theoretical presumptions that the value of land is mainly determined by economic factors, although the changes in land prices are not necessarily linked to the changes in the land market. When an investor is choosing a land plot, he/she also considers microenvironmental factors, for instance, availability of land plots in developing regions with the infrastructures being improved. Land plots with unchanged purposes but without infrastructures (for instance, arable land) are not popular among investors, so their prices remain stable. Land productivity (productive land is in higher demand) and the area of a land plot (larger areas are more expensive) are the most influential microenvironmental determinants of the value of arable land. The latter is also significantly affected by such general determinants as the impact of legal restrictions and recent tightenings of laws stipulating that arable land can be purchased by a third party only if a co-owner, renter or borrower refuses to purchase it. What is more, it should not be overlooked that the supply of arable land tends to decline, and the reserve of vacant land plots in Lithuania is only temporary.
Further studies on the topic under consideration could be related to the changes in the value of arable land after the reduction (termination) of the EU subsidy flows. They also may focus on trends of the investment in arable land in the Baltic States or other EU member states. | 2019-05-21T13:05:32.731Z | 2019-03-14T00:00:00.000 | {
"year": 2019,
"sha1": "b66f2a8919ddb9e9bd4eb3eb165dc73ce5b25f35",
"oa_license": "CCBY",
"oa_url": "https://journals.vgtu.lt/index.php/IJSPM/article/download/8092/7014",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b66f2a8919ddb9e9bd4eb3eb165dc73ce5b25f35",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
246858939 | pes2o/s2orc | v3-fos-license | QUALITY OF SYNTHESIZED SPEECH: IMPACT OF THE NEWEST CODING APPROACHES
The most widely used of them is concatenative synthesis, which is restricted to the speech signal and based on combining short speech strings to form a longer one. The output of this synthesis is the most naturally sounding synthesized speech. There are three main types of concatenative synthesis. Unit-selection synthesis uses a large database of recorded pieces of speech, such as words, phrases, sentences, etc., and produces voices which are mostly indistinguishable from naturally-produced ones. The second type is diphone synthesis; the database used for this purpose consists of all diphones found in a particular language, and in contrast to the former approach (unit-selection synthesis), its overall quality is generally worse. Finally, the last approach is domain-specific synthesis; its database consists of pre-recorded words and phrases, which makes it restricted to certain scripts.
Introduction
In recent years, synthesized speech has seen a massive increase of interest in terms of development and utilization. The reason might be the fact that speech is the most natural human form of communication and therefore there are efforts to imitate human voices. Systems used for speech synthesis offer a wide range of uses because of their level of maturity, which allows them to be integrated, for example, in places where other ways of communication cannot be used or in human-computer interaction systems involving a higher number of modalities. Therefore, synthesized speech is implemented in many applications of daily life where it replaces a real human speaker. Synthesized speech is mainly deployed, for example, in systems providing reports containing frequently changing and routine information (weather forecasts, timetables), in systems offering different dialogue situations (games) or in reading various scripts (SMS readers, e-mail readers).
In contrast to naturally-produced speech, synthesized speech represents artificially made speech, i.e. a given text utterance spoken by a computer. It is created by unifying pieces of speech recorded by a speaker and stored in a speech database. These systems are also termed speech synthesizers. They are based on a transformation technology called text-to-speech (TTS) systems. In order to realize this transformation, a TTS system consists of many algorithms and modules. Fig. 1 shows a schematic representation of a text-to-speech system.
In principle, the functions of the TTS system can be divided into the following parts (a toy sketch of this pipeline is given after the list):
- Text analysis (normalization) – performs analysis of the text, which is separated into sentences; numbers, abbreviations and symbols are replaced by their word transcriptions;
- Phonetic analysis – transforms the text to voice (phonemes);
- Prosodic analysis – applies prosodic language characteristics to the selected phonemes, such as melody, speaking rate, volume, emphasis, pauses, accent, etc.;
- Synthesis of the speech – generates the speech signal from the given sequence of prosodically-modified phonemes.
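The sketch below illustrates these four stages with a tiny hand-written lexicon and a placeholder back-end; it is only a schematic illustration and does not correspond to any of the synthesizers evaluated later in the paper.

```python
import re

# Toy lexicon mapping words to phoneme strings -- purely illustrative
LEXICON = {"the": "DH AH", "train": "T R EY N", "leaves": "L IY V Z",
           "at": "AE T", "seven": "S EH V AH N"}
NUMBERS = {"7": "seven"}

def text_analysis(text):
    """Normalization: split into sentences, expand digits into words."""
    sentences = re.split(r"[.!?]+", text)
    return [[NUMBERS.get(tok, tok) for tok in s.lower().split()] for s in sentences if s.strip()]

def phonetic_analysis(words):
    """Grapheme-to-phoneme conversion via lexicon lookup (letter spelling as fallback)."""
    return [LEXICON.get(w, " ".join(w.upper())) for w in words]

def prosodic_analysis(phonemes):
    """Attach simple prosody: a default duration and a falling pitch contour."""
    return [{"phones": p, "duration_ms": 180, "pitch_hz": 140 - 8 * i}
            for i, p in enumerate(phonemes)]

def synthesize(prosodic_units):
    """Placeholder back-end: a real system would concatenate recorded units here."""
    return " | ".join(u["phones"] for u in prosodic_units)

for sentence in text_analysis("The train leaves at 7."):
    print(synthesize(prosodic_analysis(phonetic_analysis(sentence))))
```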
Nowadays, there are three different approaches available to create this type of speech. The currently most widely used of them is concatenative synthesis, which is restricted to the speech signal and based on combining short speech strings to form a longer one. The output of this synthesis is the most naturally sounding synthesized speech. There are three main types of concatenative synthesis: unit-selection synthesis, which uses a large database of recorded pieces of speech, such as words, phrases, sentences, etc. This synthesis produces voices which are mostly indistinguishable from naturally-produced ones. The second type is diphone synthesis. The database used for this purpose consists of all diphones found in a particular language. In contrast to the former approach (unit-selection synthesis), the overall quality of diphone synthesis is generally worse. Finally, the last approach is domain-specific synthesis; its database consists of pre-recorded words and phrases, which makes it restricted to certain scripts.
Another approach is formant synthesis (widely deployed in the past), which is based on the dominant resonant frequencies of the amplitude spectrum of the voice (formants). Systems deploying this synthesis generate artificial, robotic-sounding speech (with constant quality), which cannot be confused with naturally-produced speech. Lastly, articulatory synthesis represents a new approach, which deals with direct imitation of the human vocal tract, i.e. the overall speech generation process. This synthesis is focused on providing isolated sounds, phones, simple words, etc. The approach has been poorly investigated because of its complexity.
Ideally, synthesized speech should be indistinguishable from actual human speech. It should be the most faithful copy not only in terms of quality but also in speaking style. There are efforts to ensure that synthesized speech is as natural as possible, not fatiguing, not monotonous, and does not require effort with respect to listening or comprehension [1].
For determining the output subjective quality of TTS systems (voice output devices), an application-oriented listening-only test, ITU-T Recommendation P.85 [2], is recommended to be used. In general, ITU-T Recommendation P.85 is based on the opinions of a group of test subjects (at least 24 people), who listen to given synthesized samples and fill out questionnaires. This recommendation defines the following rating scales: overall impression, acceptance, listening effort, comprehension problems, articulation, pronunciation, speaking rate and voice pleasantness. Assessment is based on a rating called MOS (Mean Opinion Score), which represents the average value of the opinions of the test subjects, or of the effort needed to listen to the synthesized speech, expressed on a 5-point quality scale varying from bad (1) to excellent quality (5). The speaking rate uses a 5-point scale varying from too slow (1) to too fast (5) and the acceptance uses only a 2-point scale (yes - no). Each sample is played twice to each test subject. In the first phase, subjects answer questions on the information found in the samples (e.g. train number, price of an item). In the second phase, subjects are asked to assess the speech quality using one or more rating scales. For assessing the quality, two types of questionnaires, namely type I (Intelligibility) and Q (Quality), are used. Although this method has been criticized for its shortcomings [3], [5], [25], it is still frequently used for overall assessment of the speech output of TTS systems; but when such output is impaired by transmission degradations, a slightly modified version of this method or a classical test according to ITU-T Recommendation P.800 [4] is mainly deployed.
In general, the quality of synthesized speech is evaluated in terms of intelligibility (how well the listener understands given samples) and naturalness (overall speech quality assessment). SUS (Semantically Unpredictable Sentences) belongs to the group of well-known intelligibility tests. Semantically nonsensical sentences with correct syntax are presented to subjects and their task is to correct the presented sentences. Each utterance is played only once. The most widespread naturalness test is MOS, see details above (ITU-T Rec. P.85). Another example of a naturalness test is the Paired Comparison test (PC), where each sample is presented to subjects in two variants. The listener's task is to choose the one which he prefers. Common to all these methods is that they are based on listeners' judgments, which makes them expensive in terms of time and finance. The authors in [5], [6], [7] investigated the performance of the methods used for subjective assessment of the quality of synthesized speech, especially the accuracy and reliability of the approach defined in ITU-T Rec. P.85. In [5], the approach presented in ITU-T Rec. P.85 was compared with other available methods (the intelligibility test (SUS) and the naturalness test (MOS)) for evaluation of text-to-speech systems. Their aim was to investigate whether this approach provides better performance than the SUS and MOS tests. Results showed that the SUS test provides a more rigorous measure of which systems were more intelligible than the other tests. However, the SUS revealed more errors which could be grouped. Overall, the ITU test is more suitable for testing the intelligibility of a specific application than as a general purpose test. In particular, the reliability of this standard for evaluation of text-to-speech systems was investigated in [6]. The authors examined how the ranking of TTS systems changes across different text genres and listening sessions. Outputs were compared with the pair-comparison test (PC) using the above-mentioned aspects. In terms of reliability, both tests (P.85, PC) showed very similar results (from the absolute score and ranking perspective). In terms of selectivity, there were minor differences between the systems across genres. In [7], the authors compared naturally-produced speech and synthesized speech with respect to the type of the speaker (male, female). Overall, the female human voice was rated more persuasive and livelier than the synthesized voice. Moreover, synthesized speech spoken by female speakers was rated worse in contrast to the male synthesized voice. Finally, they observed gender stereotyping effects, where the results revealed that female listeners assessed male voices more favorably than vice-versa.
In order to make evaluating the perceived quality of synthesized speech more effective, it is necessary to have instrumental tools. Such tools should be able to predict the quality as it would be judged in auditory tests by test subjects. At this moment, there are no standardized models (tools) available for objective quality assessment of synthesized speech. However, there are ongoing research efforts dealing with this issue, e.g. the works presented in [8][9][10]. In order to design a new instrumental quality measure for text-to-speech systems (for both male and female synthesized speech), authors try to combine different approaches. In [8], the model is based on hidden Markov models (HMM) trained on naturally-produced speech. In [9], an HMM-based comparison of features extracted from the synthesized signal with a parametric description of the synthesized speech signal (parameters from ITU-T Rec. P.563 and parameters related to vocal expression patterns) is used. In [10], the approach presented in [9] was evaluated on auditory test databases from the Blizzard Challenges 2008 and 2009.
For instance, in [18], the intrusive model PESQ was applied to assess the quality of synthesized speech. The authors concluded that the PESQ model can be used for evaluation of synthesized speech without the usage of subjective tests. On the other hand, PESQ cannot be deployed for small-sized diphone samples. The behavior of the non-intrusive model P.563 in the case of assessment of synthesized speech is investigated in [8], [19][20][21][22], [25]. Based on the results presented in [19], P.563 is better at predicting the impact of the transmission channel on the quality of naturally-produced voice, however it has lower accuracy in prediction of the overall voice quality. Furthermore, P.563 achieves low correlation with subjective quality ratings for synthesized speech (especially in the case of female synthesized voices [22]). In [20], the authors provide an explanation for this low correlation and propose an optimization of feature combinations and mapping functions in order to improve the performance of the P.563 model for predicting the quality of synthesized speech. In [21], the performance of the original and modified P.563 model was also tested on synthesized speech data obtained in the Blizzard Challenges 2007 and 2008. Experimental results have revealed that the algorithm using the proposed modifications attains noticeable improvements in comparison to the original one.
Finally, there are also studies available dealing with the impact of various speech quality impairments (like noisy-type degradations, low bit rate codecs, etc.). In [23], Sebastian Moeller focused on the following issue: whether the impact of the transmission channel on the quality of synthesized speech is different from the impact on naturally-produced speech. The investigation was focused on, e.g., noisy-type degradations, which affected the quality of both synthesized and naturally-produced speech in the same amount, and on low bit rate codecs, which had a somewhat different impact on the quality of the two kinds of speech. Noisy codecs (e.g. G.726, G.728) cause a more significant impact on the overall quality of synthesized speech than the artificially sounding codecs (e.g. G.729, IS-54). The signal-based comparative models, such as PESQ and TOSQA (Telecommunication Objective Speech Quality Assessment), have been applied for prediction of the quality of synthesized and naturally-produced speech impaired by low bit rate codecs. Variances in results between these models and the auditory test are more considerable for synthesized than for naturally-produced speech. Basically, PESQ and TOSQA are also capable of predicting the quality of transmitted synthesized speech to a certain degree. PESQ provides a good approximation of the quality degradation to be expected from circuit noise, whereas the TOSQA model underestimates the quality at high noise levels [24]. In [25], the authors also compared the results from various auditory tests with the predictions provided by three single-ended models (P.563, Psytechnics, ANIQUE+) using naturally-produced and synthesized voices. The samples used in this study were transmitted through different telephone channels (the same impairments as used in the study published in [23]). The tests realized in [25] revealed that these models provide distinct correlations with the results of auditory tests in the case of particular experiments.
The rest of the paper is organized as follows: Section 2 describes the investigation of the impact of the newest coding approaches on speech quality in the case of naturally-produced and synthesized speech usage (experimental description). In Section 3, the experimental results are presented and discussed. Finally, Section 4 concludes this paper.
Description of experiment
The signals transmitted through modern telephone networks are impacted by an amount of degradations. Traditional, connection-based networks (analogue or digital) are affected by noise, loss and frequency distortion. Non-linear distortions from low bit-rate coding-decoding processes, talker echoes resulting from the delay, overall delay due to signal processing equipment, or time-variant degradations linked to packet or frame loss are examples of transmission degradations for new types of networks (mobile or IP-based ones). A combination of all these impairments will be encountered when different networks are interconnected to form a transmission path from the service provider to the user. Thus, the whole path has to be taken into account for determining the overall quality of the service operated over the transmission network. As mentioned above, one of the new impairments introduced by mobile or IP-based networks is non-linear distortion from low bit-rate coding-decoding processes. Currently, this degradation is poorly investigated, especially with respect to its influence on synthesized speech [23]. This fact motivated us to investigate the impact of this distortion on speech quality. In particular, here we focus on the impact of the newest coding approaches (e.g. Speex, iLBC, EVRC-B, etc.) on speech quality predictions provided by PESQ and P.563 in the case of naturally-produced and synthesized speech usage.
Reference signals and experimental scenario
In this experiment, three sentences in the Slovak language with a length of 12 seconds were used as reference signals. Two synthesized speech signals generated with two different TTS systems (male voices) and one naturally-produced signal (recorded in an anechoic environment, with a non-professional male speaker) are under consideration. The decision about using a male voice came from the previous study published in [7]. The tests have proved that the message produced by the male synthetic voice was rated as more favorable (e.g. good and more positive) and was more persuasive, in terms of the persuasive appeal, than the female synthetic voice. These particular differences are perceptual in nature, and more likely due to differences in synthesis quality between male and female voices.
TTS system 1 was a diphone synthesizer and TTS system 2 was a unit-selection synthesizer. Both systems have been developed at the Institute of Informatics of the Slovak Academy of Sciences. More about those synthesizers can be found in [26].
All speech samples have been normalized to an active speech level of −26 dB below the overload point of the digital system, when measured in accordance with ITU-T Recommendation P.56, and stored in 16-bit, 8000 Hz linear PCM; background noise was not present.
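ITU-T P.56 measures the active speech level using a speech-activity detector; the sketch below shows only a simplified, RMS-based approximation of scaling a 16-bit signal to −26 dB below the overload point (the P.56 activity weighting is omitted, and the test tone is purely illustrative).

```python
import numpy as np

def normalize_to_level(signal, target_dbov=-26.0):
    """Scale a signal so its RMS level is target_dbov dB below the 16-bit overload point.

    This is a simplification of ITU-T P.56, which measures the *active* speech level."""
    overload = 32768.0                                     # overload point of 16-bit linear PCM
    rms = np.sqrt(np.mean(signal.astype(float) ** 2))
    current_dbov = 20 * np.log10(rms / overload)
    gain = 10 ** ((target_dbov - current_dbov) / 20)
    return np.clip(signal * gain, -overload, overload - 1).astype(np.int16)

# Example with a synthetic 1 kHz tone sampled at 8000 Hz
t = np.arange(0, 1.0, 1 / 8000)
tone = (3000 * np.sin(2 * np.pi * 1000 * t)).astype(np.int16)
normalized = normalize_to_level(tone)
rms = np.sqrt(np.mean(normalized.astype(float) ** 2))
print(f"level = {20 * np.log10(rms / 32768.0):.1f} dBov")   # approximately -26 dBov
```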
Subjective quality assessment
As mentioned above, the obtained predictions provided by the PESQ and P.563 models were compared with subjective assessments to assess their accuracy. The subjective listening tests were performed in accordance with ITU-T Recommendation P.800 [4]. Up to 9 listeners at a time were seated in a listening chamber with a reverberation time of less than 190 ms and background noise well below 20 dB SPL (A). All together, 25 listeners (11 male, 14 female, age range 21-30 years, mean 24.08 years) participated in the tests. 18 of them reported having no experience with synthesized speech. The subjects were paid for their service.
The samples were played out using high quality studio equipment in random order. Results in Opinion Scores 1 to 5 were averaged to obtain MOS-Listening Quality Subjective narrowband (MOS-LQSn) values for each sample. All together, 18 speech samples were used for subjective testing of the coding impact.
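A minimal sketch of how individual opinion scores are turned into a MOS-LQSn value (with an approximate 95% confidence interval) is given below; the ratings are hypothetical.

```python
import numpy as np

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and ~95% confidence interval from listener ratings (1-5)."""
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean()
    half_width = z * scores.std(ddof=1) / np.sqrt(len(scores))
    return mos, half_width

# Hypothetical ratings of one coded sample by 25 listeners
ratings = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 4, 3, 5, 4, 4, 3, 4, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS-LQSn = {mos:.2f} +/- {ci:.2f}")
```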
Experimental results
In this section, we present and discuss the results coming from this investigation. As mentioned above, this study focuses on a comparison of the predictions provided by the objective models PESQ and P.563 with subjective scores using naturally-produced and synthesized speech, whereas different current codecs have been applied (ITU-T G.711, ITU-T G.729AB, GSM-FR, Speex, iLBC and EVRC-B) to degrade the quality of the reference signal.
Figure 2 depicts the effect of the investigated codecs on the quality predictions provided by the two objective models (PESQ, P.563) and by the auditory tests for naturally-produced speech. We can see that the artificially sounding codecs are rated significantly worse in both models' predictions compared to the auditory test, whereas for the ITU-T G.711 codec (a naturally sounding codec) the predicted quality, especially that provided by PESQ, is in better agreement with the auditory results than in the previous case. Furthermore, the P.563 model under-predicts the quality much more than PESQ in all cases.
Figs. 3 and 4 show the results obtained for the diphone synthesizer and the unit-selection synthesizer, respectively. As can be seen from Fig. 3, the diphone voice (which sounds less natural than the unit and natural voices) was particularly disliked by the test subjects. This is probably the reason for such low ratings provided by the subjects. On the basis of the presented fact, we decided to omit the diphone voice from the further analysis of the behavior of synthesized speech under coding impairments. On the other hand, the behavior of the diphone voice can be used as an example of how higher unnaturalness of the signal can affect the opinions of the test users. Fig. 4 depicts the effect of the investigated codecs on MOS-LQSn and MOS-LQOn predicted by the PESQ as well as the P.563 models for the unit voice. In contrast to naturally-produced speech (see Fig. 2), the predictions of both models are in good agreement with the auditory ratings, with the exception of some predictions provided by the P.563 model, like for the ITU-T G.711 codec, etc.
Moreover, Figure 5 presents a comparison of the behavior of the synthesized speech with the behavior of naturally-produced speech from the auditory ratings perspective. As can be seen from Figure 5, there are some differences between subject ratings for the synthesized speech generated by the unit-selection synthesizer and naturally-produced speech. The observed differences may be due to differences in quality dimensions perceived as degradations by the test subjects. Whereas the 'artificiality' dimension introduced by the investigated 'unnatural sounding' codecs is an additional degradation for the naturally-produced speech, this is not the case for the synthesized speech, which already carries a certain degree of artificiality.
The results presented here are well in line with the results described in [24]. The synthesized speech is assessed a little more pessimistically than natural speech for the ITU-T G.729 codec, which is shown in Figure 5.12 (p. 225, [24]). On the other hand, the synthesized speech is rated a bit more optimistically by subjects than naturally-produced speech for the IS-54 codec and its combinations. The effect is much more dominant for its combinations. Unfortunately, we did not investigate this codec or its combinations in this study, but the GSM-FR codec, which belongs to a similar family of codecs, was involved. The same behavior as for IS-54 in [24] was also reported here for GSM-FR, probably because of very similar special techniques deployed in both codec families. Regarding the predictions of PESQ (see Figures 5.15-5.16 in [24]), which were also investigated in the discussed study, they are more or less in line with our results, particularly for the ITU-T G.729 codec (see Figures 2 and 4). Unfortunately, the study published in [24] is mainly focused on the different types of codecs and their combinations. This study can serve as an extension of the study published in [24].
Conclusion
The paper provided a brief overview of the assessment of the quality of synthesized speech. In addition, an overview of the current state-of-the-art of research dealing with this issue has also been given, summarizing the experimental studies investigating the performance, accuracy and reliability of existing approaches and models (mainly designed for evaluating the quality of naturally-produced speech, but also new models designed directly for assessing the quality of synthesized speech). Finally, the paper described the experiment dealing with the impact of current codecs (ITU-T G.729AB, Speex, iLBC, GSM-FR, EVRC-B and ITU-T G.711) on the quality predicted by two objective models (intrusive PESQ, non-intrusive P.563) using naturally-produced and synthesized voices as input signals. The obtained predictions provided by both models were compared with the ratings coming from the auditory test. The experiment revealed that the investigated codecs have a different impact on the quality of naturally-produced and synthesized speech. Comparing the performance of both objective models, the PESQ algorithm seems to be more appropriate for assessing the quality affected by the newest coding approaches than the P.563 algorithm, especially in the case of naturally-produced speech. Future work will focus on the following issues. Firstly, we would like to investigate the performance of a brand new ITU-T intrusive model for predicting speech quality, namely POLQA, under the same conditions as investigated here (as a part of the characterization phase of this model). Secondly, on the basis of the results (15.2 kbps, 20 ms), Speex [31] (4-8 kbps, 20 ms) and Enhanced Variable Rate Codec version B (EVRC-B) [32] (9.6 kbps, 20 ms). In principle, the codecs used in this study can be divided into two groups. The first group comprises artificially (unnaturally) sounding codecs, such as ITU-T G.729AB, Speex, iLBC, GSM-FR and EVRC-B, whereas the ITU-T G.711 codec represents the second group of naturally sounding codecs.
Fig. 2 Impact of the investigated codecs on MOS-LQSn and MOS-LQOn's predicted by PESQ and by P.563 in case of naturally-produced speech
Fig. 3 Impact of the investigated codecs on MOS-LQSn and MOS-LQOn's predicted by PESQ and by P.563 in case of synthesized speech generated by diphone synthesizer | 2022-02-16T16:27:34.580Z | 2011-12-31T00:00:00.000 | {
"year": 2011,
"sha1": "e94b066af1927a00d04205f1e4370a937eea6487",
"oa_license": "CCBY",
"oa_url": "http://komunikacie.uniza.sk/doi/10.26552/com.C.2011.4.25-31.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3c93d0f5787e90f0c5d9717be91afbce56b2fc39",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
224849722 | pes2o/s2orc | v3-fos-license | Effect of coating few-layer WS2 on the Raman spectra and whispering gallery modes of a microbottle resonator
Multi-layered tungsten disulfide (WS2) coated silicon/silica (Si/SiO2) substrate and SiO2 micro-bottle resonators (MBRs) have been prepared by van der Waals epitaxy method. Raman spectra of WS2-coated MBR show that the out-of-plane Raman mode is sensitive to the polarization of the excitation laser. The quality factor (Q) values of the whispering gallery modes (WGMs) in the transmission spectrum of an MBR decrease by 2 orders of magnitude on coating with WS2. On coating, a cleaner spectrum is obtained along with a concomitant effect of decrease in the number of lossy modes. Fano resonances as well as Autler-Townes splitting (ATS) was observed for the WGMs in the cleaned transmission spectrum. From the simulations it has been verified that the scattered electric field of the WS2 flakes contributes to the observation of the Fano resonances and ATS in the coated MBR spectra.
Introduction
Two dimensional (2D) materials and their heterostructures have attracted considerable attention since the discovery of graphene [1]. The layered transition metal dichalcogenides (TMDCs) [2] fall under the category of 2D materials. These compounds are of the form MX2 having a structure with a constituent trilayer consisting of a metal (M=Mo, W, Nb) sandwiched between two chalcogen layers (X=S, Se, or Te). These materials exhibit phenomena such as indirect to direct band-gap transitions [3], strong photo- and electro-luminescence [3,4] and are promising materials for optoelectronic devices [5]. Integration of TMDCs with optical cavities can lead to phenomena of low threshold lasing [6], enhanced Raman scattering [7] and strong light-matter coupling [8,9].
One of the actively investigated TMDCs is tungsten disulfide (WS2). Bulk WS2 is an indirect band-gap semiconductor with a band gap range of 1.3-1.4 eV depending on its phase [10]. The properties of WS2 depend on the number of layers. It becomes a direct band-gap material for its monolayer, which exhibits strong photoluminescence. Nano-antenna enhanced light-matter interaction has been reported in atomically thin WS2 [6,11]. Raman spectroscopy has been used to determine the number of layers in a TMDC [12]. WS2 exhibits two main Raman-active vibrations, the in-plane E^1_2g mode and the out-of-plane A1g mode. The A1g mode indicates the out-of-plane displacement of S atoms (OC modes) while the E^1_2g mode corresponds to the relative motion of W and S atoms (IMC modes) [13]. The frequency difference between the two modes increases with the increase in the number of layers and thus can be used to monitor the number of layers in the prepared sample [12]. WS2 has been integrated with optical microcavities to enhance the strength of light-matter interaction in a fully monolithic cavity with distributed Bragg reflectors [8,14]. One important class of microcavities is the whispering gallery mode (WGM) microcavities [15]. WGMs are observed in dielectric microstructures with rotational symmetry such as spheres, rings, and toroids [15][16][17]. The light is localized due to total internal reflection in WGMs [15]. These microcavities offer a high quality factor (Q-factor) and low mode volume [16] and are sensitive to size, shape and the surrounding refractive index [18]. In contrast to the spherical microcavity [15], micro-bottle resonators (MBRs) [19] have a highly prolate spheroidal shape and support WGMs that are extended along the z-axis of the resonator [20]. Figure 1 shows the schematic of an MBR. The outer diameter of the MBR can be fitted with a truncated harmonic oscillator profile, D(z) = D_b [1 + (Δk z)^2]^(-1/2), where D_b is the diameter at the center of the MBR and Δk is the curvature of the resonator profile. The MBR is characterized by three quantum numbers (m,p,q), where m denotes the azimuthal quantum number or mode number, p denotes the radial quantum number and q the axial quantum number.
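Assuming the truncated harmonic-oscillator profile written above (the standard bottle-resonator form), the short sketch below evaluates the diameter along the resonator axis; the bottle diameter and curvature values are illustrative, not the parameters of the fabricated MBRs.

```python
import numpy as np

def bottle_diameter(z_um, d_b_um=180.0, delta_k_per_um=0.012):
    """Truncated harmonic-oscillator profile D(z) = D_b / sqrt(1 + (dk*z)^2)."""
    return d_b_um / np.sqrt(1.0 + (delta_k_per_um * z_um) ** 2)

z = np.linspace(-200, 200, 9)            # axial position along the MBR (micrometres)
print(np.round(bottle_diameter(z), 1))   # diameter tapers away from the bottle centre
```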
MBRs have higher axial radii compared to microspheres and hence the axial modes are well separated as shown in figure 2(A). Finite element method (FEM) in COMSOL Multiphysics (5.3a) was used for simulation of electric field distribution of MBR. The electric field distribution of different axial modes (q = 1-5) for TM 105 mode has been calculated ( figure 2(B)).
The spectra of MBRs contain a large number of WGMs due to their shape. Cleaning of such spectra to obtain a relatively small number of WGMs is essential for applications in sensors that monitor a specific WGM. It has been reported that a cleaner spectrum of an MBR can be obtained by changing the diameter of the tapered fiber from 2 to 10 µm [21]. In this paper, the cleaned spectra have been obtained by coating the MBRs with WS2 flakes. The Q-factor has been found to decrease from 10^7 to 10^5 in the cleaned spectra. The simulations indicate that the interaction of the WGMs of the MBRs with the Mie scattering of the WS2 flakes gives rise to the Fano-type [22][23][24] and electromagnetically induced transparency (EIT)-type [25,26] resonances as well as the Autler-Townes splitting (ATS) in the observed spectra.
The paper is organized as follows. Section 2 gives the experimental details. In section 3.1, WS 2 has been characterized by the techniques of Raman, photoluminescence (PL) and atomic force microscopy (AFM). The effect of polarization on the Raman spectra is also given here. Section 3.
Experimental details
The MBRs used in the present study were prepared by the melt-and-fuse method developed by Murugan et al [19]. In brief, in this thermo-mechanical process, two ends of a cleaved fiber are pushed toward each other while being heated, so that they melt and fuse, forming a bulge-shaped structure. WS2 has been synthesized on the fused silica substrate and the MBR by van der Waals epitaxy (VdWE) using tungsten hexachloride (WCl6) as the precursor to react with hydrogen sulfide (H2S) gas [27,28]. VdWE has a number of advantages over the transfer method [29], such as conformal coating on the MBR and substrate, and the sizes are typically 25 mm × 25 mm. For coating the MBRs with WS2, the tails of preformed MBRs were affixed on the top surface of a fused silica plate (25 mm × 25 mm × 1 mm) with the help of a high temperature ceramic epoxy (Ceramix TC, FortaFix) and these MBRs were suspended above another fused silica plate (25 mm × 25 mm × 0.5 mm) as the reference sample. This set up (as shown in figure 3(A)) was placed on top of a quartz boat and kept inside the VdWE quartz reactor for WS2 to be conformally deposited on the MBRs and the reference fused silica substrate. Raman and PL spectra were recorded using a micro Raman spectrometer (Jobin Yvon, Labram HR 800) with a resolution of <1 cm−1 at the excitation wavelength of 488 nm (argon ion laser). Polarization dependent measurements were carried out by using a half wave (λ/2) plate at the excitation side and an analyzer on the detector side. The polarization of the excitation laser was changed using the half wave plate. A scrambler is placed after the analyzer to cancel out the errors due to polarization effects of the grating and the detector. The surface topography images of the WS2-coated Si/SiO2 substrate were generated by the AFM (NX10 Park) system. For transmission measurements, a tapered fiber with a waist diameter of ∼2 µm was used to couple the light in and out of the resonator. One end of the tapered fiber is connected to the tunable laser source (Agilent 81600B, with a tuning range from 1440 to 1640 nm) and the other end was connected to a power meter. The tapered fiber was coupled to the center of the MBR with the help of micro-positioning stages. The experimental arrangement for the light coupling with the MBR is shown in figure 3(B). Transmission measurements were done both before and after the coating of WS2 on the MBRs to check their Q-factors. The measurements were done with a step size of 0.1 pm.
Results and discussion
The first order optical modes of WS2 are denoted by E^1_2g and A1g. Raman spectra of different layered samples of WS2 on fused silica substrates were recorded as shown in figure 4(A). According to Liang et al [12], the frequency differences between the A1g and E^1_2g WS2 peak positions for monolayer and bi-layer samples are 60.31 and 62.18 cm−1, respectively; on the other hand, for three and four layers they are 63.13 and 63.50 cm−1, respectively. Table 1 gives the data of the number-of-layer determination of the samples used in the present study.
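A minimal sketch of this layer assignment, using the reference separations from Liang et al [12] quoted above, is given below; the measured peak positions in the example are hypothetical.

```python
# Peak separations (cm^-1) between A1g and E^1_2g for 1-4 layer WS2, from Liang et al [12]
REFERENCE_SEPARATIONS = {1: 60.31, 2: 62.18, 3: 63.13, 4: 63.50}

def estimate_layers(a1g_cm1, e2g_cm1):
    """Assign the layer number whose reference separation is closest to the measured one."""
    delta = a1g_cm1 - e2g_cm1
    layers = min(REFERENCE_SEPARATIONS, key=lambda n: abs(REFERENCE_SEPARATIONS[n] - delta))
    return layers, delta

layers, delta = estimate_layers(a1g_cm1=419.1, e2g_cm1=356.9)   # hypothetical measured positions
print(f"separation = {delta:.2f} cm^-1 -> closest to the {layers}-layer reference value")
```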
The PL spectra were recorded for the identical exposure time of the excitation laser as a function of the WS 2 layers ( figure 4(B)). It can be seen that the PL intensity decreases for the multi-layered sample. Moreover, the spectrum shows a red-shift on increasing the layer number. With conformal coating of VdWE-grown WS 2 , the majority of WS 2 were bi-layer with a few layer WS 2 flakes on the reference sample. The AFM images recorded for few layered WS 2 shows triangular flakes (figure 4(C)) with a height of ∼6 nm ( figure 4(D)). The monolayer WS 2 has a thickness of 0.8 nm indicating that the sample has ∼7 layers.
Polarization dependent Raman spectra of WS2-coated MBR.
The Raman signal of the WS2-coated MBR was studied as a function of the polarization angle of the incident beam. Figure 5 shows the Raman intensity at the polarization angles of 0° and 100° (figure 5(A)) and 0° and 180° (figure 5(B)), respectively, of the excitation laser. The intensities of the E^1_2g and A1g modes at 0° are denoted by I_E0 and I_A0, respectively. The intensity ratio of the modes at different angles to that at 0° has been plotted as a function of the polarization angle and is shown in figure 5(C). It was observed that although the intensity ratio of the E^1_2g mode is almost constant (curve b), the A1g mode ratio shows an oscillating feature (curve a). The ratio between the A1g mode and the E^1_2g mode as a function of the polarization angle is shown in figure 5(D). It is noted that for the incoming (ε_i) and outgoing (ε_0) polarizations the Raman cross section (∝ α Σ|<ε_0|R_j|ε_i>|², α being a constant) is finite for OC modes but zero for IMC modes. Here, the incoming and outgoing light are assumed to have the same helicity. The polarization state of the scattered photon can be obtained from the value of R_j, which denotes the appropriate Raman tensor [13,30]. The Raman tensors for the A1g and E^1_2g modes are the standard tensors for this crystal symmetry: a diagonal tensor with elements (a, a, b) for A1g and purely in-plane elements d for E^1_2g [13,30]. The value <ε_0|R_j|ε_i> can be found to be equal to a (for the A1g mode) and zero (for the E^1_2g mode), and thus the OC modes show the effect of the polarization of the incident light. It is to be noted that the Raman intensity for the A1g mode is ∝ (a cos²θ + b sin²θ), where θ is the input polarization of the laser and a, b are proportionality constants [31], whereas the Raman intensity of the E^1_2g mode is independent of the polarization of the laser [32]. Some variation in the intensity ratio of the E^1_2g mode can be observed in figure 5(C). This is because the optics do not have similar reflectivity in the s- and p-polarized states and these change with respect to θ [31]. The intensity-ratio feature of the A1g mode with respect to the polarization angle is similar to that obtained for a Si wafer [33]. Thus, the independence of the E^1_2g mode from the polarization of the excitation laser can also be used for identification of the material WS2.
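The sine squared dependence quoted above can be checked with a small fit of I(θ) ∝ a cos²θ + b sin²θ; the sketch below uses synthetic intensity ratios rather than the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def a1g_intensity(theta_deg, a, b):
    """Polarization dependence of the A1g mode: I ∝ a*cos^2(theta) + b*sin^2(theta)."""
    theta = np.radians(theta_deg)
    return a * np.cos(theta) ** 2 + b * np.sin(theta) ** 2

# Synthetic intensity ratios I(theta)/I(0) at a few polarization angles (illustrative only)
angles = np.array([0, 20, 40, 60, 80, 100, 120, 140, 160, 180], dtype=float)
ratios = a1g_intensity(angles, 1.0, 0.55) + np.random.default_rng(0).normal(0, 0.02, angles.size)

(a_fit, b_fit), _ = curve_fit(a1g_intensity, angles, ratios, p0=[1.0, 0.5])
print(f"a = {a_fit:.2f}, b = {b_fit:.2f}")  # the a/b contrast reflects the oscillation depth
```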
Characterization of the MBR.
The characterization of the MBR was done before and after its coating with WS2 to determine the Q-factor of the WGMs (figure 6(A)). The transmission spectra exhibit groups of sharp resonance dips corresponding to the same azimuthal quantum number. The Q-factors were determined by fitting a Lorentzian function to the WGM as shown in figures 6(B) and (C). The value of Q is given as the ratio of the peak wavelength (λ0) of the mode to its full width at half maximum (Δλ). The total Q-factor (Q_tot) of a microresonator strongly depends on the material losses (Q_mat^−1) and the scattering losses due to surface roughness of the resonator (Q_scat^−1) and can be written as Q_tot^−1 = Q_mat^−1 + Q_scat^−1 + Q_rad^−1, where Q_rad^−1 is the radiation loss due to the curvature of the microresonator, corresponding to the intrinsic quality factor. The modes in an MBR are called higher order axial modes (bottle modes). The bottle modes are the result of broken degeneracy between WGMs with the same azimuthal but different axial mode numbers. The free spectral range in MBRs is an order of magnitude smaller than that of microspheres of equal diameter [19] and is highly tunable.
These dense modes cause complications when MBRs are to be used for refractometry sensing.
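As an illustration of how the Q-factors quoted in this section are extracted, the sketch below fits a Lorentzian dip to a synthetic WGM resonance and reports Q = λ0/Δλ; the wavelengths, dip depth and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(lam, t0, depth, lam0, fwhm):
    """Transmission dip: baseline t0 minus a Lorentzian of full width fwhm centred at lam0."""
    return t0 - depth * (fwhm / 2) ** 2 / ((lam - lam0) ** 2 + (fwhm / 2) ** 2)

# Synthetic WGM dip around 1550 nm (wavelengths in nm, illustrative noise added)
lam = np.linspace(1549.95, 1550.05, 400)
data = lorentzian_dip(lam, 1.0, 0.7, 1550.0, 0.01) + np.random.default_rng(1).normal(0, 0.005, lam.size)

popt, _ = curve_fit(lorentzian_dip, lam, data, p0=[1.0, 0.5, 1550.0, 0.02])
lam0, fwhm = popt[2], popt[3]
print(f"Q = lambda0 / FWHM = {lam0 / fwhm:.2e}")   # ~1.5e5 for a 0.01 nm wide dip at 1550 nm
```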
The property of introducing an additional scattering contribution to the Q-factor can be used to attenuate some of the modes in an MBR. The coated MBR has a Q-factor of ∼10^5, in contrast to its uncoated counterpart with a corresponding value of ∼10^7. Although there is a significant decrease in the Q-factor by two orders of magnitude, well separated modes in the transmission spectrum are observed. Thus, by coating few-layer WS2 on the MBR a cleaner spectrum can be obtained (figure 6(A)). The decrease in Q-factor is due to the overlap of the Mie scattering of the atomically thin WS2 layer with the WGMs of the MBR at 1550 nm.
Fig. 5 (C) The intensity ratio for A1g (a) and E^1_2g (b) with respect to the intensity at 0° at various polarization angles; the fitted sine squared function is also given (---) for the A1g mode, while the dashed line for the E^1_2g mode is a simple spline connecting the measured data as a guide to the eye. (D) The intensity ratio between the A1g and E^1_2g modes at various polarization angles, with experimental results (♦) and a sine squared fit to the data (---).
Observation of Fano resonances.
Figure 7 shows a portion of the cleaned spectrum from a high resolution scan of the coated MBR. The transmission spectrum (A) shows a series of dips of varying shapes. For example, the symmetrical peak (peak II) fits the EIT profile as shown in figure 7(B), while the asymmetrical dip (dip III) fits the Fano type resonance (figure 7(C)). Fitting a spectral line with the Fano formula can be a conclusive test for the observed Fano resonances [34]. Resonance III has an asymmetrical line profile and can be fitted to the Fano line shape (equation (1)). The Fano resonance in an MBR is given by [35]
T(λ) = T′ + H′ [s + 2(λ − λ0)/w]² / [1 + (2(λ − λ0)/w)²],   (1)
where T(λ) is the transmission at wavelength λ, T′ is a constant, H′ is the amplitude, λ0 is the resonance wavelength, w is the width and s is the Fano asymmetry parameter. A Fano resonance is the interference between a resonant scattering process and a background. In general, in a typical tapered-fiber coupled WGM microresonator, the transmission profile consists of a series of repetitive dips, which are Lorentzian, i.e. one state of the system. To produce a Fano-type line shape, we need another state, discrete or continuum, to couple with the WGMs. The discrete state can be produced by introducing another WGM with a higher Q-factor [36] and also by coupling with a different resonator's mode by a waveguide [37]. Fano resonances were observed in a self-assembled MBR [35]. WS2 does not have absorption in the 1500 nm regime, so the decrease in Q-factor and the appearance of different line shapes can be attributed to the Mie scattering of the atomically thin WS2 layer, which overlaps with the WGMs of the MBRs at 1550 nm. The transmission profile of the uncoated MBR shows symmetrical Lorentzian profiles, confirming that the asymmetrical line shape appears due to the interference of the WGMs of the MBRs at 1550 nm and the Mie scattering of WS2. Resonance II has an EIT-type line profile; the asymmetry parameter s has been found to be 0.003. The transparency window in EIT has been found to be 0.001 nm.
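A minimal sketch of fitting the Fano line shape of equation (1) (as reconstructed above) to an asymmetric resonance is given below; the synthetic data and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(lam, t0, h, s, lam0, w):
    """Fano line shape: T = t0 + h*(s + 2(lam-lam0)/w)^2 / (1 + (2(lam-lam0)/w)^2)."""
    x = 2 * (lam - lam0) / w
    return t0 + h * (s + x) ** 2 / (1 + x ** 2)

# Synthetic asymmetric resonance near 1550 nm (illustrative)
lam = np.linspace(1549.9, 1550.1, 500)
data = fano(lam, 0.3, 0.5, -1.5, 1550.0, 0.02) + np.random.default_rng(2).normal(0, 0.01, lam.size)

popt, _ = curve_fit(fano, lam, data, p0=[0.3, 0.5, -1.0, 1550.0, 0.03])
print(f"asymmetry parameter s = {popt[2]:.3f}, width w = {popt[4] * 1000:.1f} pm")
```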
Autler-Townes splitting.
It was observed that the spectrum has features of mode splitting, a signature of strong coupling. Some of the resonance modes in the transmission profile split into two associated modes, giving rise to a transparency window in between. [Figure 8 caption: the solid curve shows the fit with the ATS profile (equation (2)).] For example, resonance I (figure 7(A)) shows two associated dips in the transmission spectrum. This resonance can be fitted with an ATS-type splitting (figure 8).
The resonance mode (I) in the transmission profile splits into a symmetric doublet and gives rise to a wide transparency window. The ATS transmission is the sum of two Lorentzian contributions [38], i.e. of the form T_ATS(λ) = T″ + C_1(Γ/2)/[(λ − λ_1)^2 + (Γ/2)^2] + C_2(Γ/2)/[(λ − λ_2)^2 + (Γ/2)^2] (equation (2)), where T″ and Γ are constants, C_1,2 are the amplitudes of the two dips, λ_1,2 are their resonance wavelengths, and λ_c is the center wavelength between them. As shown in figure 8, the profile has been fitted with equation (2) and the Q-factors of the adjacent pair of modes are Q_1 = 2.3 × 10^5 and Q_2 = 1.8 × 10^5. The effect of ATS on the transmission profile is similar to EIT, as both display a transparency window, i.e. a reduction in the absorption spectrum. The transparency window is 0.013 nm, an order of magnitude wider than the transparency window observed in EIT (0.001 nm), similar to that reported in the literature [39]. Here, ATS may originate from lifting of the frequency degeneracy of the eigenmodes, which split into two resonances due to strong tapered fiber-MBR coupling.
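A minimal sketch of such a doublet fit is given below (synthetic data only; the amplitudes, dip centers and noise level are placeholders chosen to give Q-factors and a splitting of roughly the magnitude quoted above, and the window is taken here simply as the dip separation):

```python
import numpy as np
from scipy.optimize import curve_fit

def ats_profile(lam, t_off, c1, lam1, g1, c2, lam2, g2):
    """Sum of two Lorentzian dips (Autler-Townes doublet) on a flat background."""
    dip1 = c1 * (g1 / 2) ** 2 / ((lam - lam1) ** 2 + (g1 / 2) ** 2)
    dip2 = c2 * (g2 / 2) ** 2 / ((lam - lam2) ** 2 + (g2 / 2) ** 2)
    return t_off - dip1 - dip2

# Synthetic doublet roughly matching the reported magnitudes (illustrative only)
lam = np.linspace(1549.96, 1550.04, 4000)
true = (1.0, 0.5, 1549.993, 1550 / 2.3e5, 0.5, 1550.007, 1550 / 1.8e5)
data = ats_profile(lam, *true) + np.random.normal(0, 0.003, lam.size)

p0 = (1.0, 0.4, 1549.99, 0.007, 0.4, 1550.01, 0.009)
popt, _ = curve_fit(ats_profile, lam, data, p0=p0)
_, _, lam1, g1, _, lam2, g2 = popt
print(f"Q1 = {lam1/abs(g1):.2e}, Q2 = {lam2/abs(g2):.2e}")
print(f"transparency window (dip separation) = {abs(lam2 - lam1):.4f} nm")
```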
The coherent interaction between the scattering modes of the WS2 flakes and the WGMs of the microcavity can lead to the asymmetric line profiles and the splitting [40,41]. To estimate the scattering mode spectrum of the WS2 flakes, we performed FEM simulations of triangular WS2 flakes with side lengths varying from 100 to 300 nm and a thickness of 6 nm, similar to that observed in the AFM data (figure 9). The triangles were taken to be equilateral. The scattering spectra show maxima in the 1500-1700 nm range, and a red shift is observed as the flake length increases. WS2 does not absorb in the 1550 nm regime [42]. Hence, the decrease in Q-factor and the appearance of different line shapes can be attributed to Mie scattering of the atomically thin WS2 layer, which overlaps with the WGMs of the MBR at 1550 nm. Since Fano profiles are not observed for the WGMs in the uncoated spectrum, the appearance of these asymmetric profiles can be attributed to the coherent interaction of the scattering E-field of the WS2 flakes with the WGMs of the MBR already present at 1550 nm.
Conclusions
Few-layer films of the TMDC material WS2 have been fabricated on a fused silica substrate and on an MBR by the VdWE method. The number of layers was determined using Raman spectroscopy. The A1g Raman mode of the sample depends on the polarization of the excitation light. WGMs were observed in the transmission spectra of the uncoated and WS2-coated MBR. The Q-factor of the few-layer WS2-coated MBR decreases by two orders of magnitude compared to that of the uncoated MBR. The coated MBR shows a cleaned spectrum with specific WGM resonances. The WGMs now exhibit resonance dips and peaks that fit Fano-like asymmetric resonances and ATS, a feature of the strong-coupling regime. Simulations show that the Fano resonances appear because of the interference of the WGMs at 1550 nm with Mie scattering from the WS2 flakes. It is expected that the Fano resonances can be tuned by varying the layer number of the TMDC material.
The integration of TMDCs with microresonators has prospects in device applications including low-threshold microlasing, enhanced Raman sensing and strong light-matter interaction with associated nonlinear effects. The resonance spectrum of a coated MBR is much cleaner, with well-defined peaks, which is essential for sensing applications. If required, the Q-factors can be controlled by (a) allowing only a few nanocrystals to grow on the microresonator, which will decrease both scattering losses and material absorption, and (b) growing a more uniform coating on the resonators to reduce the scattering losses. | 2020-08-13T10:11:09.011Z | 2020-09-10T00:00:00.000 | {
"year": 2020,
"sha1": "c1031a8c0ce11b0f6b65123a61817d5960963756",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/2040-8986/abad50/pdf",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "32f88633ae30fecc7033816baef6ff550ee77f5b",
"s2fieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
11919302 | pes2o/s2orc | v3-fos-license | Tag SNPs detect association of the CYP1B1 gene with primary open angle glaucoma.
PURPOSE
The cytochrome p450 family 1 subfamily B (CYP1B1) gene is a well-known cause of autosomal recessive primary congenital glaucoma. It has also been postulated as a modifier of disease severity in primary open angle glaucoma (POAG), particularly in juvenile-onset families. However, the role of common variation in the gene in relation to POAG has not been thoroughly explored.
METHODS
Seven tag single nucleotide polymorphisms (SNPs), including two coding variants (L432V and N453S), were genotyped in 860 POAG cases and 898 examined normal controls. Each SNP and haplotype was assessed for association with disease. In addition, a subset of 396 severe cases and 452 elderly controls was analyzed separately.
RESULTS
There was no association of any individual SNP in the full data set. Two SNPs (rs162562 and rs10916) were nominally associated under a dominant model in the severe cases (p<0.05). A common haplotype (AGCAGCC) was also found to be nominally associated in both the full data set (p=0.048, OR [95%CI]=0.83 [0.69-0.90]) and more significantly in the severe cases (p=0.004, OR [95%CI]=0.68 [0.52-0.89]) which survives correction for multiple testing.
CONCLUSIONS
Although no major effect of common variation at the CYP1B1 locus on POAG was found, there could be an effect of SNPs tagged by rs162562 and represented on the AGCAGCC haplotype.
The cytochrome p450 family 1 subfamily B (CYP1B1) is a member of the CYP450 superfamily. While its exact function and effect on cells are not clear, the gene is inducible by dioxins and has several endogenous substrates including 17β-estradiol, retinoic acid, and melatonin as well as many exogenous substrates [1]. It was first recognized as a cause of primary congenital glaucoma (PCG) following linkage mapping [2] and candidate gene screening [3] in a panel of 17 Turkish families with recessive PCG. This finding has since been replicated in many ethnic groups with over 80 mutations now reported from many different populations [1,4,5]. In many cases, compound heterozygosity is observed as the cause of recessive disease. The proportion of PCG cases accounted for by CYP1B1 mutations varies significantly between ethnic groups, from around 20% in Australia and Japan to nearly 100% in Saudi Arabia and Slovakian Gypsies [4].
Primary Open Angle Glaucoma (POAG) is the most prevalent form of glaucoma and leads to significant levels of irreversible blindness worldwide. The genetics of this complex trait are not well understood, although many loci and several genes have been reported [6]. The most common known genetic cause of POAG is the myocilin gene (MYOC). Mutations in this gene account for 2%-4% of POAG in Caucasians [7] and up to 36% in juvenile onset (JOAG) families [8]. CYP1B1 mutations have also been identified in JOAG and POAG patients. Melki et al. [9] reported compound heterozygotes in three French families containing patients with PCG as well as JOAG. Acharya et al. [10] reported nine individuals from India (4 JOAG and 5 POAG) with single heterozygous mutations in CYP1B1 and Kumar et al. [11] reported four mutations in 27 Indian POAG patients, including two who were compound heterozygotes. Lopez-Garrido et al. [12] presented heterozygous mutations in 10 Spanish POAG patients. CYP1B1 has also been suggested as a modifier of POAG in carriers of MYOC mutations [13]. A common polymorphism was associated with cupping of the optic disc, which may be relevant to POAG [14] although other studies found no association of CYP1B1 mutations with disc changes in POAG [15].
Although several studies have reported rare variants in the CYP1B1 gene in glaucoma patients that were not detected in controls [10][11][12][13][14][15], no large-scale re-sequencing of the normal population has been performed to determine the spectrum of rare variants in this gene. There are 75 reported coding variants in dbSNP, of which 45 are non-synonymous, 11 are insertions or deletions, and 4 are truncating mutations. While the majority have not yet been thoroughly validated as common polymorphisms, the number of reported frameshift and non-synonymous variants suggests that CYP1B1 activity is not compromised by most mutational events, at least in the heterozygous state. Thus, the presence of rare variants in the sequenced glaucoma patients is not surprising. The link between CYP1B1 and POAG is therefore currently circumstantial. Chakrabarti et al. [15] assessed six common polymorphisms in a small cohort of POAG and primary angle closure glaucoma (PACG) patients as well as controls and found no association of any haplotypes with glaucoma status. This study aims to evaluate the contribution of common polymorphisms in CYP1B1 to POAG.
Patients:
Participants were drawn from the Glaucoma Inheritance Study in Tasmania (GIST), the Australian & New Zealand Registry of Advanced Glaucoma (ANZRAG) and the Blue Mountains Eye Study (BMES). The GIST and ANZRAG includes a clinic-based recruitment of glaucoma patients. The GIST aimed to capture all cases of glaucoma in Tasmania (an island state of Australia) and ANZRAG aims to capture cases of advanced glaucoma Australia-wide through ophthalmologist referral [16,17]. In both cases, normal elderly controls were ascertained from nursing home facilities in Launceston, Tasmania (for GIST) and Adelaide, South Australia (for ANZRAG). The BMES is a population based study of individuals aged over 50 years living in the Blue Mountains, west of Sydney, Australia [18]. All participants, including normal controls in all three studies were examined. Glaucoma was defined by concordant findings of typical glaucomatous visual field defects on the Humphrey 24-2 (for GIST and ANZRAG) or 30-2 (for BMES) test, together with corresponding optic disc rim thinning, including an enlarged cup-disc ratio (≥0.7) or cup-disc ratio asymmetry (≥0.2) between the two eyes. Intraocular pressure (IOP) was not considered in the diagnostic criteria. Advanced POAG was defined by a vertical cup:disc ratio >0.95, a best-corrected visual acuity worse than 6/60 due to POAG, or on a reliable Humphrey Visual Field (Carl Zeiss Pty. Ltd., Sydney, Australia) a mean deviation of ≤-22 db or at least 10 out of 16 central squares involved with a Pattern Standard Deviation of <0.5%. The field loss had to be due to POAG, and the less severely affected eye was required to have signs of glaucomatous disc damage and a glaucomatous field defect. Clinical exclusion criteria included: i) pseudoexfoliative glaucoma, ii) pigmentary glaucoma, iii) angle closure or mixed mechanism glaucoma; iv) secondary glaucoma due to aphakia, rubella, rubeosis or inflammation; v) congenital or infantile glaucoma, juvenile glaucoma with age of onset less than 20 years; or vi) glaucoma in the presence of a known syndrome.
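As a sketch only, and ignoring the reliability requirements, the 24-2/30-2 distinction and the requirement that the field loss be attributable to POAG, the advanced-POAG thresholds above can be encoded as follows (any single criterion suffices):

```python
def is_advanced_poag(cup_disc_ratio, acuity_denominator, mean_deviation_db,
                     central_squares_involved, psd_pct):
    """Simplified sketch of the advanced-POAG definition given above.

    acuity_denominator: x in a 6/x best-corrected acuity (worse than 6/60 -> x > 60)
    mean_deviation_db:  Humphrey mean deviation in dB
    psd_pct:            pattern standard deviation probability level in percent
    """
    return (
        cup_disc_ratio > 0.95
        or acuity_denominator > 60
        or mean_deviation_db <= -22
        or (central_squares_involved >= 10 and psd_pct < 0.5)
    )

# Example: a field with a mean deviation of -24 dB meets the definition
print(is_advanced_poag(0.8, 12, -24.0, 4, 1.0))  # True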
All control subjects were required to have no known family history of POAG, as well as a normal intraocular pressure, optic disc and visual field. The population-based BMES control cohort comprised the eldest subgroup of people meeting control inclusion criteria. SNP selection and genotyping: Using the tagger program implemented in Haploview 4.0 [19], tag single nucleotide polymorphisms (SNPs) across the CYP1B1 gene were selected on the basis of linkage disequilibrium patterns observed in the Caucasian (CEU) samples genotyped as part of the International HapMap Project [20]. Only SNPs with a minor allele frequency greater than 5% in HapMap were considered. Two coding SNPs (rs1056836 and rs1800440, coding L432V and N453S respectively) and four 3′UTR SNPs (rs162549, rs2855358, rs10916, and rs162562) were force-included to capture as much coding variation as possible. In addition, intronic SNPs rs10175368 and rs162556 were selected. These eight tag SNPs captured all alleles with an r² of at least 0.8 (mean r²=0.96) and were genotyped in all individuals using iPLEX GOLD chemistry (Sequenom Inc., San Diego, CA) on an Autoflex Mass Spectrometer (Sequenom Inc.) at the Australian Genome Research Facility, Brisbane, Australia. The 3′UTR SNP rs2855658 failed genotyping and was removed from the analysis. This SNP did not tag any other HapMap SNPs. Statistical analysis: All analyses were conducted using the statistical genetics software packages Plink [21] and Haploview [19]. Hardy-Weinberg equilibrium was assessed in all samples and in controls separately. Association was tested under the five genetic models implemented in Plink. These models are the allelic test (allele 1 versus allele 2), genotypic (11 versus 12 versus 22), dominant (11 and 12 versus 22), recessive (11 versus 12 and 22) and the Cochran-Armitage trend test. Association of common haplotypes (>1% frequency) was also assessed in Plink using the conditional haplotype test. All analyses were conducted in the full data set as well as a sub-set of cases with severe disease (from GIST and ANZRAG) compared to elderly (>81 years) examined normal controls.
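For illustration only (this is not the Plink implementation, and the genotype counts below are invented), the allelic test and the Cochran-Armitage trend test for a single SNP can be computed directly from a 2 x 3 genotype table:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Genotype counts (11, 12, 22) for cases and controls; numbers are made up.
cases = np.array([150, 420, 290])
controls = np.array([180, 430, 288])

# Allelic test: collapse genotypes into allele counts and run a 2x2 chi-square.
allele_table = np.array([
    [2 * cases[0] + cases[1], 2 * cases[2] + cases[1]],
    [2 * controls[0] + controls[1], 2 * controls[2] + controls[1]],
])
chi2_allelic, p_allelic, _, _ = chi2_contingency(allele_table, correction=False)

def cochran_armitage_trend(case_counts, control_counts, scores=(0, 1, 2)):
    """Trend test with additive scores (copies of allele 2), 1 df chi-square."""
    t = np.asarray(scores, dtype=float)
    r = np.asarray(case_counts, dtype=float)
    s = np.asarray(control_counts, dtype=float)
    n = r + s
    R, S, N = r.sum(), s.sum(), n.sum()
    u = np.sum(t * (r * S - s * R)) / N
    var = (R * S / N**3) * (N * np.sum(t**2 * n) - np.sum(t * n) ** 2)
    z2 = u**2 / var
    return z2, chi2.sf(z2, df=1)

z2, p_trend = cochran_armitage_trend(cases, controls)
print(f"allelic p = {p_allelic:.3f}, trend p = {p_trend:.3f}")
```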
RESULTS
In total, 860 cases and 897 examined, normal, unrelated controls were available. Sex and age distributions for the full cohort and each sub-cohort are given in Table 1. Overall, the age of the cases is significantly lower than that of the controls, although in the ANZRAG cohort the cases are slightly older. There were no differences in the proportion of each cohort that is female.
All seven SNPs were in Hardy-Weinberg equilibrium. Allele and genotype frequencies by glaucoma status are shown in Table 2. Linkage disequilibrium across the region is high with all seven SNPs falling into a single block, although the correlation between SNPs rs162556 and rs10175368 is low (Figure 1), consistent with values observed in the HapMap data set. Thus using the block definition of Gabriel et al. [23] there are two haplotype blocks as shown in Figure 1.
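For reference, the pairwise r² that underlies the tagging and block structure reduces, for phased two-SNP haplotype counts, to D²/(pA(1-pA)pB(1-pB)); a minimal sketch with made-up counts (real data require haplotype phasing or an EM estimate first):

```python
import numpy as np

def ld_r2(hap_counts):
    """r^2 between two biallelic SNPs from phased two-SNP haplotype counts.

    `hap_counts` maps haplotypes ('AB', 'Ab', 'aB', 'ab') to counts.
    """
    total = sum(hap_counts.values())
    p_ab = hap_counts['AB'] / total
    p_a = (hap_counts['AB'] + hap_counts['Ab']) / total   # freq of allele A at SNP 1
    p_b = (hap_counts['AB'] + hap_counts['aB']) / total   # freq of allele B at SNP 2
    d = p_ab - p_a * p_b
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Toy counts: strong but incomplete LD between the two sites
print(round(ld_r2({'AB': 700, 'Ab': 50, 'aB': 60, 'ab': 190}), 3))
```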
Single SNP association analysis was conducted for five genetic models in Plink. No SNP was associated under the allelic test, nor in any of the other genetic models (Table 3). When the analysis was restricted to severe cases and elderly controls (≥81 years) only, SNPs rs162562 and rs10916 were nominally associated (p<0.05); however, these results do not survive correction for the number of SNPs assessed. The associations were also nominally significant under the recessive model and trend test. In addition, a logistic regression adjusted for age and sex was conducted. Nominally significant results at the same two SNPs were observed in the severe cohort, but do not survive multiple testing correction (Table 3).
Haplotype analysis was conducted in Plink. No overall association between the CYP1B1 locus and POAG was detected in either the full sample (p=0.140) or the severe cases and elderly controls (p=0.189); however, one specific haplotype of the seven SNPs (AGCAGCC) was nominally associated in both data sets (Table 4). This is a relatively common haplotype that is slightly under-represented in POAG cases, particularly severe cases. The association does survive correction for multiple testing of the seven common haplotypes in the severe cases (corrected p-value=0.028). The associated haplotype is the most common haplotype to carry a C at SNP rs162562, which was nominally significant in the single SNP analysis of the severe cases. This allele is observed in only one other haplotype (AGCAGTG), which differs from the associated haplotype only at the 6th SNP (rs162556), but is rarer and is not associated with POAG.
DISCUSSION
The association of the CYP1B1 gene with PCG is well understood in populations world-wide, although the mechanism of disease is not. The gene is also associated with JOAG and may interact with the MYOC gene to cause the early onset observed in JOAG families. However, the role of CYP1B1 in later onset POAG is unclear. The majority of studies to date have sequenced the coding region of the gene in a small cohort of POAG patients and most have identified missense mutations not observed in a control cohort. This approach has identified many apparent mutations that may contribute to the risk of POAG in rare cases, but does not provide evidence for a contribution of this locus to most (or even a significant proportion) of POAG cases. In addition, there are many reported missense, frameshift and truncating variants of this gene in non-POAG individuals, many of which have not been reported in the POAG cohorts, making interpretation of the published data in relation to POAG susceptibility difficult.
The present CYP1B1 study is the largest cohort of POAG patients examined to date (n=860) and we were well powered to identify common genetic variants at the level of relative risk of 1.1 or 1.2. In addition, all controls (n=898) are at least 50 years of age and have been thoroughly examined for glaucoma phenotypes. We have taken a tag SNP approach to assess the role of common variation throughout the CYP1B1 locus for an association with POAG in both a general POAG cohort as well as a cohort of severe (typically slightly younger onset) cases compared to elderly (>81 years) examined normal controls. These data do not provide evidence for a substantial role of this locus in POAG, although one haplotype may be protective for severe glaucoma . The odds ratio for the nominally associated haplotype is 0.68 when compared to all other common haplotypes. Power calculations [22] revealed that for a sample of this size (860 cases versus 898 controls) under an additive model we had 88% power to detect a genotype relative risk of 0.68 (or 1.47) for an allele frequency of 0.16 at α=0.007 (allowing for multiple testing of 7 haplotypes). Thus we are adequately powered to detect the effect size observed in this study. The use of tag SNPs in case-control association studies is ideally suited to testing hypotheses of common variation causing a common disease. It will not detect individual rare variants occurring on multiple genetic backgrounds. Thus, this study does not rule out a role for CYP1B1 in POAG, but does indicate that common variation in the gene (including common coding SNPs L432V and N453S) is not associated with POAG in general, but may be associated with severe POAG in a Caucasian population. | 2014-10-01T00:00:00.000Z | 2010-11-04T00:00:00.000 | {
"year": 2010,
"sha1": "28173cbce2d43120439dfe17b483399ead5ea505",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "28173cbce2d43120439dfe17b483399ead5ea505",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
254128050 | pes2o/s2orc | v3-fos-license | Structural insights into mechanism and specificity of the plant protein O-fucosyltransferase SPINDLY
Arabidopsis glycosyltransferase family 41 (GT41) protein SPINDLY (SPY) plays pleiotropic roles in plant development. Although its amino acid sequence is similar to that of human O-GlcNAc transferase, Arabidopsis SPY has been identified as a novel nucleocytoplasmic protein O-fucosyltransferase. SPY-like proteins exist widely in diverse organisms, indicating that O-fucosylation by SPY is a common way to regulate intracellular protein functions. However, the details of how SPY recognizes and glycosylates substrates are unknown. Here, we present a crystal structure of the Arabidopsis SPY/GDP complex at 2.85 Å resolution. SPY adopts a head-to-tail dimer. Strikingly, the conformation of a ‘catalytic SPY’/GDP/‘substrate SPY’ complex formed by two symmetry-related SPY dimers is captured in the crystal lattice. The structure, together with mutagenesis and enzymatic data, demonstrates that SPY can fucosylate itself and that SPY’s self-fucosylation region negatively regulates its enzyme activity, reveals SPY’s substrate recognition and enzyme mechanism, and provides insights into glycan donor substrate selection in GT41 proteins.
Major comments: -The structure raises a major question that needs to be addressed. For example, how does the dimer behave in the presence of protein substrates? This needs to be addressed to understand this enzyme's behavior in solution and to rule out whether a monomeric or other form might be present. Their trapped structure might suggest that it is likely an inactive conformation of the enzyme leading to self-fucosylation. Would the dimer be the functional form to recognize protein substrates? Biophysical experiments could show this by incubating SPY with different protein substrates that need SPY TPRs for optimal interaction. For example, they could use PRR5 as a protein substrate. In addition, if the dimer holds in the presence of PRR5, would this be 2:2 (SPY:protein substrate) stoichiometry or 2:1 stoichiometry? -The authors should check the oligomerization state of SPY once it fucosylates itself. Will the self-fucosylated SPY still be a dimer? Also, check the oligomerization state of SPY truncations and mutants S21A and S24A. Will these mutants and truncations still be a dimer?
-Define SPYN5 and SPYN7 in the text, not only in the figure legend. The experiments with these mutations (the asparagine ladder mutations) are very nice and exemplify previous biophysical experiments of OGT with similar Asn residues as crucial in recognition of protein substrates. The authors should check if these mutations also kill the activity on other protein substrates such as PRR5 or large protein substrates. Check also the activity of K231 or D266 mutants on protein substrates such as PRR5.
-Are these Asn residues conserved with the Asn residues in the human OGT (see PMID: 33709700)? I guess not but it would be nice to compare the positions of these Asn residues in SPY with the Asn residues in the human OGT.
-The mechanism for SPY is not clear. The authors suggest that H495 is the catalytic base. Is this residue conserved with the human OGT? Note that for the human OGT, it was proposed earlier that a His residue was the catalytic base. Then, two further manuscripts suggested that the alpha phosphate (PMID: 23103942) or a chain of water molecules in which one of them interacts with an Asp residue (PMID: 23103939) could act as potential catalytic bases. This should be discussed in the manuscript and the H495 should be clarified if it occupies the same position or similar positions to His498 or His558 in the human OGT. Therefore, new figures showing a proper comparison with the human OGT active site should be shown in the manuscript. In addition, they should compare all these mechanisms with the ones described for PoFUT1, PoFUT2, FUT8, etc (PMIDs: 26854667, 34868727 and 32080177). SPY as a PoFUT should be compared with very distant PoFUTs such as PoFUT1 and PoFUT2 and other fucosyltransferases (FUT8), which also interact with proteins. Finally, molecular dynamics simulations should be performed with GDP-fucose and SPY-peptide to determine the distance of H495 to the acceptor Ser residue of the peptide during the simulations.
-According to the crystallography table, the structure is of lower resolution since the CC1/2 at high resolution is around 0.2. The cutoff for the highest resolution shell should have a CC1/2 around 0.5. Therefore, the data must be rescaled to render a CC1/2 around 0.5 in the highest resolution shell. This will clearly decrease the current resolution, but it will reflect better the resolution of the structure.
-Show the maps for the Fo-Fc in Supplementary Figure 3. This map is more relevant to see the quality of the density.
-Discussion is too short and should be enlarged, taking into account the catalytic mechanism and also exploiting that this structure is the first one showing interactions between the TPRs and a peptide (here, the N-terminus of SPY).
Minor comments: -Self-glycosylation is not novel since this occurs in many glycosyltransferases such as human OGT, NleB1, etc. See, e.g., PMID: 32411621. Yet, their finding is very interesting. Cite some of these papers.
-For the ordered bi-bi catalytic mechanism, the authors need to perform additional kinetic experiments and/or NMR experiments to demonstrate this mechanism.
-sentence in lines 37 and 38 does not make much sense. "Different from secretion proteins ….". Rewrite this sentence.
-What do they mean with aberrant O-linked monosaccharide glycosylation and its linking to diabetes and other diseases? Aberrant or truncated O-glycans are linked to O-glycosylation in the Golgi or E.R. pathway and not to O-glycosylation in the cytosol or in the nucleus. Clarify this.
REVIEWER COMMENTS
Reviewer #1 (Remarks to the Author): The crystal structure of the Arabidopsis nucleocytoplasmic protein Ofucosyltransferase SPY is reported. The work provides mechanistic insights into SPY function by identifying amino acids involved in protein O-fucosylation. SPY was found to self-modify two serines located near the N-terminus. Deletion of the region containing these residues reduces SPY activity suggesting that self-modification may regulate activity. The results generally support the authors' conclusions. The work provides an important foundation for increasing our understanding of the glycosyltransferase family 41 function and how these enzymes regulate plant development.
We thank the reviewer for the concise summary and insightful comments on our work.
Comments:
Line 28 and lines 249-251. The data does not support that O-fucosylation negatively regulates SPY. Since the role of self-fucosylation was tested by deleting the region that is modified (Fig. 2d), the results only show that AA 1-42 negatively regulate SPY.
To more directly address the role of O-fucose modification, the activity of the S21A, S24A mutants and an S21A / S24A double mutant should be determined.
Our response:
We apologize for having misled this reviewer and thank the reviewer for the suggestion. Our data reveal that SPY Δ1-42 but not SPY Δ1-14 shows higher activity compared with the full-length SPY (Fig. 2d). Meanwhile, we showed that fucosylation modification occur in the region between V15 and Q42 of SPY (Fig. 2b, c). Therefore, we concluded '…SPY's self-fucosylation region negatively regulates its enzyme activity…'. Here, self-fucosylation region refers to V15-Q42 of SPY.
Following this reviewer's suggestion, we further measured the activity of SPY S21A , SPY S24A and SPY S21A/S24A toward DELLA 1-205 (new Fig. 2e). All the three mutants show similar activity compared with wild type SPY. Taken together, our data demonstrate that the self-fucosylation region (V15-Q42) but not O-fucosylation on SPY can negatively regulate its activity. The results have been presented in Page 12 lines 253-258.
Lines 125-127. While the hydrolysis assay supports that C645S SPY is a fully functional enzyme, the evidence would be stronger if C645S SPY was shown to have wild-type activity toward a protein substrate such as DELLA.
Our response: we thank the reviewer for the suggestion. We have included new data showing that mutation of C645 to Ser does not impair SPY's activity toward DELLA 1-250 (new Supplementary Fig. 3b). The data have been described in Page 7 lines 128-134.
Lines 125-127. Consider reinforcing the argument that C645 is a surface residue by highlighting its location on a structure shown in one of the figures. Consider pointing out that C645S is not conserved (Supp Fig 10), which further supports the argument that mutating it will not affect activity.
Our response: we thank the reviewer for the suggestions. We have shown C645S in ball-and-stick model in new Figure 1b to highlight its location. We have also pointed out that C645 in Arabidopsis SPY is not conserved in Page 6 lines 118-121.
Line 284. I cannot see any signal for SPYN5 and N7 in the figure. Is there a signal or is it undetectable?
Our response: We apologize for the confusion. We repeated the experiment more than three times. Even after long exposure, the signals for SPY N5 and SPY N7 were undetectable. We clarified this issue in the revised manuscript (Page 14 line 291). Our response: we thank this reviewer for the suggestion. We have provided a figure in which the SPY missense mutations are highlighted on the structure (new Supplementary Fig. 12). We have also discussed whether the SPY missense mutations would affect the activity in the revised manuscript (Pages 24-26 lines 537-576).
Minor comments:
Line 113. serial should be series Our response: We have corrected the typo and thank the reviewer for pointing this out.
Line 118. Please spell out size exclusion chromatography multi angle light scattering (SEC-MALS).
Our response: We revised it as suggested.
Line 158. Consider large rather than dramatical.
Our response: We revised it as suggested.
Line 201 Unlike should be unlikely.
Our response:
We revised it as suggested.
Line 222. Replace can with that.
Our response: We revised it as suggested.
Our response: We revised it as suggested.
Line 232. … modification occur in the region between V15 and Q42.
Our response: We revised it as suggested.
Our response: We revised it as suggested.
Reviewer #2 (Remarks to the Author):
This is a very nice manuscript that describes the first crystal structure of SPY, a plant fucosyltransferase that fucosylates protein substrates. The relevance of this structure is of utmost importance because it mainly describes for the first time how the TPRs interact with potential protein substrates (in this case SPY itself). This has never been visualized at the atomic level, and this structure exemplifies this. Therefore, this works offers a significant novelty to merit publication in Nature Communications.
However, major and minor comments would need to be addressed to improve the quality of the manuscript.
Our response: We thank this referee for the insightful comments and the recognition of our work.
Major comments: -The structure raises a major question that needs to be addressed. For example, how does the dimer behave in the presence of protein substrates? This needs to be addressed to understand this enzyme's behavior in solution and to rule out whether a monomeric or other form might be present. Their trapped structure might suggest that it is likely an inactive conformation of the enzyme leading to self-fucosylation. Would the dimer be the functional form to recognize protein substrates? Biophysical experiments could show this by incubating SPY with different protein substrates that need SPY TPRs for optimal interaction. For example, they could use PRR5 as a protein substrate. In addition, if the dimer holds in the presence of PRR5, would this be 2:2 (SPY:protein substrate) stoichiometry or 2:1 stoichiometry?
Our response: Thanks for bringing this up. We have made systematic efforts using bacterial (E. coli), insect (Sf9 and Hi5 cells) and mammalian (HEK 293F cell) expression systems but unfortunately could not obtain recombinant PRR5 protein.
A previous study 1 and our data have both shown that SPY could fucosylate DELLA 1-205 (Fig. 2d). Therefore, we checked the oligomerization state of SPY in the presence of DELLA 1-205 using gel filtration (new Supplementary Fig. 6d). Wild type SPY remains a dimer in solution after being incubated with DELLA 1-205. SPY H495A, which is a catalytically dead mutant, also adopts a dimeric form both in the absence and in the presence of DELLA 1-205 and GDP-fucose. Altogether, our data reveal that the dimer is the functional form of SPY. These data have been described in Page 21 lines 450-459.
To investigate the SPY: protein substrate stoichiometry in the catalytic reaction, we prepared a half-dead heterodimer with one protomer being wild type SPY Our response: We apologize for the confusion and thank this reviewer for the constructive suggestions. We defined SPY N5 and SPY N7 in Page 14 lines 287-288. We agree with the reviewer that it will be interesting to investigate whether SPY N5 and SPY N7 lose activity towards PRR5. However, as mentioned above, we could not get recombinant PRR5 protein and thus were unable to perform the in vitro enzymatic assay with PRR5. Our data showed that the activity of SPY N5 and SPY N7 toward itself and DELLA 1-205 was completely abrogated (new Fig. 3b,c). Considering the full-length SPY and DELLA 1-205 are both large protein substrates and they share low sequence similarity, we hope this reviewer would agree that our data have largely demonstrated the importance of the asparagine ladder.
Following the reviewer's suggestion, we evaluated the activity of SPY K231A , SPY D266A , SPY K231A/D266A . Unexpectedly, mutations of K231 and D266 to alanine did not affect the self-fucosylation but had a significant impact on the fucosylation of DELLA 1-205 (new Fig. 3b,c). SPY K231A and SPY K231A/D266A showed substantially decreased activity while SPY D266A showed increased activity toward DELLA 1-205 .
Meanwhile, compared with SPY K231A , the double mutant showed increased activity.
Taken together, our data demonstrated that K231 and D266 drive SPY's protein substrate selection. These results have been described in Page 14 lines 301-309 and discussed in Page 23 lines 496-503.
-Are these Asn residues conserved with the Asn residues in the human OGT (see PMID: 33709700)? I guess not but it would be nice to compare the positions of these Asn residues in SPY with the Asn residues in the human OGT.
Our response: We thank this reviewer for the constructive suggestion. In the revised manuscript, we performed sequence alignment of the TPRs in Arabidopsis SPY and human OGT (new Supplementary Fig. 5d). Most of these Asn residues are conserved and occupy position 6 in the TPR consensus. Yet at the same time, different from the asparagine ladder in human OGT which stretches across almost the entire TPR region Our response: We thank this reviewer for the constructive suggestions. H498, H558 and Y841 in human OGT were initially proposed as candidate bases 2-5 , but were later observed to locate too far away from the acceptor hydroxyl 6,7 . D554 in human OGT was put on the stage because it indirectly interacted with the acceptor hydroxyl via a chain of water molecules 7 , however, human OGT D544A retained activity 6 . Following this reviewer's suggestion, we performed structural superimposition of Arabidopsis SPY and human OGT using the catalytic domains as references and provided a new figure showing the active sites in the two proteins in parallel (new Supplementary Fig. 5e). The equivalent residues of H498, D554, H558 and Y841 in human OGT are L439, D491, H495 and A664 in Arabidopsis SPY, respectively. Apparently, leucine and alanine could not enable the deprotonation of the acceptor hydroxyl. D491 and H495 are adjacent in the structure while the imidazole side chain of H495 stacks with the D491 carboxylate, indicating that H495 and D491 may function as a catalytic dyad. Previously we showed that mutation of H495 to alanine abrogate the enzyme activity. In the revision, we further test two mutants, SPY D491A and SPY H495F .
Although mutation of H495 renders the enzyme inactive, mutation of D491 does not abrogate the activity. Considering mutation of H495 to alanine would free up more space in the active site while phenylalanine is more hydrophobic and slightly bigger than histidine, it is possible that the effects of the H495 mutations are due to structural reasons. As suggested by this reviewer, we have reprocessed the diffraction data to a resolution of 2.85Å with the CC1/2 in the highest resolution being 0.527. During structure refinement with the new data, we noted that the density map for S21* hydroxyl is poor (Response Fig. 1). It would be more reasonable that the S21* hydroxyl takes an alternative rotamer (rotamer 2) to be properly aligned for attack on the anomeric carbon (Response Fig. 1 and new Supplementary Fig. 5e). On this occasion, the S21* hydroxyl rotamer is not suitable for engaging in a hydrogen bond with either the D491 carboxylate or the H495 imidazole ring. All these data demonstrate that H495 and D491 would be unfit for the role as general base. Further inspection of the 'catalytic SPY'/GDP/'substrate SPY' complex reveals that none of the residues in SPY is within 4 Å of S21* hydroxyl. Altogether, SPY may not harbor the catalytic base. Notably, the acceptor substrate hydroxyl donates a hydrogen bond to one of the α-phosphate oxygen in the donor substrate while the α-phosphate lacks any interactions with positively charged side chains, hence this phosphate could serve as the catalytic base (new Fig. 4e). In fact, a similar substrate-assisted catalysis mechanism has been recently revealed for human OGT based on cumulative crystallographic snapshots and biochemical probes 6 , which coincidentally reinforces SPY's substrate-assisted catalysis proposed here. We understand that more studies on trapping a complex of SPY and intact substrates by reducing the rate of enzymatic turnover in crystallo using artificial substrate analogs probably could eventually elucidate the catalytic mechanism. We hope this reviewer would agree that such efforts could be a future following-up project.
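A distance check of the kind described above can be scripted; the sketch below lists every residue with an atom within 4 Å of the acceptor hydroxyl using Biopython (the file name, chain ID and residue number are placeholders, and it assumes the deposited model contains the acceptor serine with its OG atom):

```python
from Bio.PDB import PDBParser, NeighborSearch

# Placeholder identifiers: substitute the deposited SPY/GDP coordinates and
# the chain/residue carrying the acceptor serine (S21*).
structure = PDBParser(QUIET=True).get_structure("spy", "spy_gdp_model.pdb")
model = structure[0]

acceptor = model["B"][21]["OG"]              # hydroxyl oxygen of the acceptor Ser
ns = NeighborSearch(list(model.get_atoms()))

# Residues with any atom within 4 A of the acceptor hydroxyl
for residue in ns.search(acceptor.coord, 4.0, level="R"):
    if residue.get_parent().id != "B" or residue.id[1] != 21:
        print(residue.get_parent().id, residue.get_resname(), residue.id[1])
```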
In the revised manuscript, we described the new mutagenesis data, the new mechanistic scenario for SPY (Pages 16-17 lines 348-367), and compared the catalytic mechanism for SPY with the ones for human OGT, PoFUT1, PoFUT2 and FUT8 (Pages 23-24 lines 508-536). -According to the crystallography table, the structure is of lower resolution since the CC1/2 at high resolution is around 0.2. The cutoff for the highest resolution shell should have a CC1/2 around 0.5. Therefore, the data must be rescaled to render a CC1/2 around 0.5 in the highest resolution shell. This will clearly decrease the current resolution, but it will reflect better the resolution of the structure.
Our response: We have reprocessed the diffraction data to a resolution of 2.85 Å. The CC1/2 in the highest-resolution shell is 0.527 (new Table 1). Accordingly, we have re-refined the structure using the new data. In particular, we noted that the density map for the S21* hydroxyl is poor and thus corrected the rotamer of the S21* hydroxyl to be properly aligned for attack on the anomeric carbon (new Supplementary Fig. 9 and Response Fig. 1).
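For context, CC1/2 is simply the Pearson correlation between the merged intensities of two random half-datasets within a resolution shell. A toy numpy sketch is given below (the intensities are simulated, not experimental, with the noise chosen so the weak-shell correlation should land near the ~0.5 level discussed here):

```python
import numpy as np

def cc_half(i_half1, i_half2):
    """Pearson correlation between two half-dataset intensity estimates."""
    return np.corrcoef(np.asarray(i_half1, float), np.asarray(i_half2, float))[0, 1]

# Toy example: a weak shell where the signal is barely above the noise.
rng = np.random.default_rng(0)
true_i = rng.gamma(shape=2.0, scale=100.0, size=5000)   # "true" intensities
half1 = true_i + rng.normal(0, 150, true_i.size)        # noisy half-set means
half2 = true_i + rng.normal(0, 150, true_i.size)
print(f"CC1/2 = {cc_half(half1, half2):.2f}")
```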
-Show the maps for the Fo-Fc in Supplementary Figure 3. This map is more relevant to see the quality of the density.
Our response:
We have shown the Fo-Fc map for GDP in new Supplementary Fig. 4. Additionally, we have also shown the Fo-Fc map for the N-terminal loop in new Supplementary Fig. 9.
-Discussion is too short and should be enlarged, taking into account the catalytic mechanism and also exploiting that this structure is the first one showing interactions between the TPRs and a peptide (here, the N-terminus of SPY).
Our response:
We have enlarged the Discussion as suggested.
Minor comments: -Self-glycosylation is not novel since this occurs in many glycosyltransferases such as human OGT, NleB1, etc. See, e.g., PMID: 32411621. Yet, their finding is very interesting. Cite some of these papers.
Our response: Thanks for pointing this out. We have cited three related papers ( 5,8,9 ) and discussed along with our results (Page 20, lines 432-436).
-For the ordered bi-bi catalytic mechanism, the authors need to perform additional kinetic experiments and/or NMR experiments to demonstrate this mechanism.
Our response: We completely agree with the reviewer that it will be interesting to experimentally demonstrate the ordered bi-bi catalytic mechanism. In this case, kinetic experiments need to be performed in the presence of GDP at saturating protein substrate concentration while varying GDP-fucose levels 10,11 . However, the protein substrates for SPY including the DELLA 1-205 and catalytically dead mutants of SPY (SPY H495A and SPY K665A ) precipitate easily at a concentration above 0.2 mM.
Besides, we were unable to develop an assay for sensitively and quantitatively measuring SPY's glycosyltransferase activity. Considering SPY could also hydrolyze GDP-fucose, monitoring the amount of GDP in the reaction system is not a suitable way to measure SPY's glycosyltransferase activity. Meanwhile, neither radiolabeled GDP-fucose nor high-quality antibodies that specifically recognize O-linked fucose are commercially available. Hampered by these factors, at present we are unable to proceed with the kinetic experiments as suggested. On the other hand, SPY exists as a dimer in solution with a molecular weight of about 200 kD, which is too large for NMR analysis. We are sorry to say that this is a limitation of our work. In fact, a similar structural feature was observed in the ternary complex of the closest structural homolog, human OGT (Fig. 4b and Supplementary Fig. 5e). Reinforced by kinetic studies with radiochemical UDP-14C-GlcNAc as the glycan donor, the ordered bi-bi catalytic mechanism has been demonstrated for human OGT 2. In the revised manuscript, we described the comparison of SPY and human OGT (Page 16 lines 341-347) and erased the ordered bi-bi catalytic mechanism from the schematic diagram of catalysis (new Fig. 4e).
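For reference, the steady-state initial-rate law normally fitted for an ordered bi-bi mechanism (Cleland form, no products present, with A the first-binding and B the second-binding substrate; the binding order for SPY is not established here) is

v = \frac{V_{max}[A][B]}{K_{ia}K_{mB} + K_{mB}[A] + K_{mA}[B] + [A][B]} ,

which differs from a ping-pong mechanism by the presence of the constant K_{ia}K_{mB} term in the denominator.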
-sentence in lines 37 and 38 does not make much sense. "Different from secretion proteins ….". Rewrite this sentence. Our response: We revised it as suggested.
Our response: We have corrected the typo and thank the reviewer for pointing this out.
REVIEWERS' COMMENTS
Reviewer #1 (Remarks to the Author): The concerns with the previous submission are addressed in this version but there are a few comments/suggestions.
Comments:
Line 53: GA is the abbreviation for gibberellin not gibberellic acid, which is GA3.
Line 130: Taking rather than Taken Line 134: delete respectively Lines 156-157: This sentence doesn't convey the argument that C645 is surface exposed and located on a loop connecting the N-Cat and C-Cat providing further evidence that the crystal structure of engineered SPYC645S represents the wild type SPY conformation. Also the statement in this location disrupts the narrative. You could consider moving this argument up to the other discussion of this mutation or just delete it since the arguments you make above make a strong case.
Line 291: delete either Line 460-463: Can you rule out that the extra steps used to purify the SPYWT/SPYK665A heterodimer did not reduce its specific activity? This seems possible. To control for this you could use the same strategy to prepare SPYWT homodimer. Also, was the experiment repeated with multiple enzyme preparations?
Line 565: Consistent rather than In consistent.
Reviewer #2 (Remarks to the Author): The authors have done a great job and have responded to all my questions. Therefore, in my opinion, the manuscript in its current form is suitable for NCOMMS.
Nevertheless, I have some minor changes: -Page 21. Replace demonstrating by suggesting. To demonstrate the stoichiometry, authors should perform ITC experiments.
-In Fig. 5d, for the alignment of TPRs, I cannot see the Asn residues forming the ladder in SPY. E.g., Asn300 in the manuscript appears to be Asn302 in the alignment. Double-check this because I have problems identifying the Asn residues mentioned in the manuscript in the alignment.
-The mechanism in Figure 4e does not look right. SPY is an inverting FT, and SPY appears to be a retaining FT in the mechanism depicted by the authors. Check also the fucose moiety because it is wrong.
-Line 349-350: mention Asp, Glu and His as potential catalytic bases in the reaction mechanism. Replace His/Glu by just only His.
-Line 367: finish the sentence as indicated below, "the catalytic base as proposed earlier for human OGT (reference XXX)".
-Line 457 and 458. It is not clear to me whether the authors can say that the dimer stays as a dimer in the presence of the protein substrate because by gel filtration the enzyme and the protein substrate do not coelute together. This is likely due that the protein substrate has a poor affinity for SPY. Reconsider writing this paragraph to tone down the claims.
REVIEWER COMMENTS
Reviewer #1 (Remarks to the Author): The concerns with the previous submission are addressed in this version but there are a few comments/suggestions. We thank the reviewer for the positive comment.
Comments: Line 53: GA is the abbreviation for gibberellin not gibberellic acid, which is GA3. Our response: Thanks for pointing this out. We have corrected it in the revision.
Line 130: Taking rather than Taken Our response: We revised it as suggested.
Line 134: delete respectively Our response: We revised it as suggested.
Lines 156-157: This sentence doesn't convey the argument that C645 is surface exposed and located on a loop connecting the N-Cat and C-Cat providing further evidence that the crystal structure of engineered SPYC645S represents the wild type SPY conformation. Also the statement in this location disrupts the narrative. You could consider moving this argument up to the other discussion of this mutation or just delete it since the arguments you make above make a strong case. Our response: Thanks for pointing this out. We have deleted this sentence in the revision.
Line 291: delete either Our response: We revised it as suggested.
Line 460-463: Can you rule out that the extra steps used to purify the SPYWT/SPYK665A heterodimer did not reduce its specific activity? This seems possible. To control for this, you could use the same strategy to prepare SPYWT homodimer. Also, was the experiment repeated with multiple enzyme preparations?
Our response: We apologize for the confusion. In this experiment, we did prepare the SPY WT /SPY WT homodimer using the same strategy as that for the SPY WT /SPY K665A heterodimer. We have clarified this issue in the revision (Page 28 lines 609-614).
We have repeated this experiment with more than 3 enzyme preparations and always got similar results.
Line 565: Consistent rather than In consistent.
Our response: We revised it as suggested.
Reviewer #2 (Remarks to the Author): The authors have done a great job and have responded to all my questions. Therefore, in my opinion, the manuscript in its current form is suitable for NCOMMS.
We thank the reviewer for the positive comment.
Nevertheless, I have some minor changes: -Page 21. Replace demonstrating by suggesting. To demonstrate the stoichiometry, authors should perform ITC experiments.
Our response: We revised it as suggested.
-In Fig. 5d, for the alignment of TPRs, I cannot see the Asn residues forming the ladder in SPY. E.g., Asn300 in the manuscript appears to be Asn302 in the alignment.
Double-check this because I have problems identifying the Asn residues mentioned in the manuscript in the alignment.
Our response: Thanks for pointing this out. We have corrected the numbering in new Supplementary Fig. 5d.
-The mechanism in Figure 4e does not look right. SPY is an inverting FT, and SPY appears to be a retaining FT in the mechanism depicted by the authors. Check also the fucose moiety because it is wrong.
Our response: Thanks for pointing this out. We have corrected them in the revision (new Fig. 4e) -Line 349-350: mention Asp, Glu and His as potential catalytic bases in the reaction mechanism. Replace His/Glu by just only His.
Our response: We revised it as suggested.
-Line 367: finish the sentence as indicated below, "the catalytic base as proposed earlier for human OGT (reference XXX)".
Our response: We revised it as suggested.
Our response: We revised it as suggested.
-Line 457 and 458. It is not clear to me whether the authors can say that the dimer stays as a dimer in the presence of the protein substrate because by gel filtration the enzyme and the protein substrate do not coelute together. This is likely due that the protein substrate has a poor affinity for SPY. Reconsider writing this paragraph to tone down the claims.
Our response: Thanks for the comments. We revised it as suggested (Page 21 lines 456-462). | 2022-12-02T15:00:10.361Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "89179e425b0a4755fdeb4240c283c92e4b3b4179",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e39686f5ee5194b6e0beb78e8faff28d66e75fbb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226959136 | pes2o/s2orc | v3-fos-license | Rhinovirus reduces the severity of subsequent respiratory viral infections, which is associated with dampened inflammatory responses
Coinfection by unrelated viruses in the respiratory tract is common and can result in changes in disease severity compared to infection by individual virus strains. We have previously shown that inoculation of mice with rhinovirus (RV) two days prior to inoculation with a lethal dose of influenza A virus (PR8) provides complete protection against mortality. In this study, we extend that finding to a second lethal respiratory virus, pneumonia virus of mice (PVM), and characterize the differences in inflammatory responses and host gene expression in single-virus-infected vs. coinfected mice. RV prevented mortality and weight loss associated with PVM infection, suggesting that RV-mediated protection is more effective against PVM than PR8. Major changes in host gene expression upon PVM infection were delayed compared to PR8, which likely provides a larger time frame for RV-induced gene expression to alter the course of disease. Overall, RV induced earlier recruitment of inflammatory cells, while these populations were reduced at later times in coinfected mice. Findings common to both coinfection conditions included upregulated expression of mucin-associated genes in RV/PR8 and RV/PVM compared to mock/PR8 and mock/PVM infected mice and dampening of inflammation-related genes late during coinfection. These findings, combined with differences in virus replication levels and disease severity, suggest that the suppression of inflammation in RV/PVM coinfected mice may be due to early suppression of viral replication, while that in RV/PR8 coinfected mice may be due to a direct suppression of inflammation. Thus, a mild upper respiratory viral infection can reduce the severity of a subsequent severe viral infection in the lungs through virus-dependent mechanisms.
Author Summary
Respiratory viruses from diverse families co-circulate in human populations and are frequently detected within the same host. Though clinical studies suggest that coinfection by more than one unrelated respiratory virus may alter disease severity, animal models in which we can control the doses, timing, and strains of coinfecting viruses are critical to understand how coinfection affects disease severity. In this study, we compared gene expression and immune cell recruitment between two pairs of coinfecting viruses (RV/PR8 and RV/PVM) that both result in reduced severity compared to infection by PR8 or PVM alone in mice. Reduced disease severity was associated with suppression of inflammatory responses in the lungs. However, differences in disease kinetics and host and viral gene expression suggest that protection by coinfection with RV may be due to distinct molecular mechanisms.
The detection of more than one virus in respiratory samples is quite common, especially among pediatric patients (1-4). There are differences in the outcomes of coinfection - whether it results in increased, decreased, or no effect on disease severity - that likely reflect different virus pairings, patient populations, and study criteria. For example, coinfection with influenza B virus
Infection by PR8 and PVM induces different gene expression signatures in mouse lungs over time
To determine potential mechanisms of protection mediated by RV against PR8 and PVM, we undertook a comprehensive transcriptome analysis of mouse lungs (Fig 2). Mice were coinfected with RV two days before PR8 or PVM and total lung RNA was analyzed on days 0, 2, 4, and 6 after PR8 or PVM inoculation. Single virus-infected mice were mock-inoculated two days before PR8 or PVM and total lung RNA was isolated at the same time points. Weight loss was monitored daily to test for consistency with our previous morbidity and mortality analyses. RV-mediated protection against PR8 was not evident by 6 days post-infection (S1 Fig) though both mock/PR8 and RV/PR8 groups experienced weight loss at a rate similar to our previous study (9).
In contrast, complete protection against weight loss was evident in RV/PVM coinfected mice (S1 Fig).
Many reads mapped to PR8 and PVM genomes from the mice infected with these viruses (Fig 4). Coinfection by RV did not prevent PR8-specific gene expression, but reads mapped to PR8 were significantly lower in coinfected mice at all time points (Fig 4). Similarly, we previously showed that infectious PR8 titers in the lungs were equivalent in mock/PR8 and RV/PR8 infected mice on days 2 and 4 after PR8 inoculation (9). However, coinfection with RV led to earlier
The proportions of neutrophils and interstitial macrophages followed the same trends as total CD11b+ cells in PVM-infected mice, with lower proportions of these cells on days 4 and 6 in coinfected mice (Fig 7F, 7G). In contrast, interstitial macrophages were increased in RV/PVM, compared to mock/PVM-infected mice, early in infection. This indicates that coinfection by RV stimulates early recruitment of CD11b+ cells, specifically interstitial macrophages, while limiting recruitment of inflammatory cells later in infection. PR8-infected mice had similar trends; however, the differences between mock/PR8 and RV/PR8 groups were less dramatic (Fig 7B, 7C).
Neutrophil numbers were suppressed in RV/PR8 coinfected mice compared to mock/PR8 infected mice throughout the time course (Fig 7B). The interstitial macrophage proportions in mock/PR8- and RV/PR8-infected mice increased over time similarly to the total CD11b+ populations (Fig 7C). The lower proportions of neutrophils and interstitial macrophages at later time points in RV-coinfected mice corresponded with mRNA levels for chemokines. This was predominantly the case for neutrophil chemokines Cxcl1 and Cxcl2 (Fig 7I, 7K) and macrophage chemokines Ccl2 and Ccl7 (Fig 7J, 7L). These chemokines were generally lower in RV/PR8 coinfected mice on day 6 and RV/PVM coinfected mice on days 4 and 6 compared to mock/PR8 and mock/PVM infected mice, respectively.
There were no clear trends in alveolar macrophage numbers in mock/PR8- and RV/PR8-infected mice (Fig 7D), though their proportions were significantly higher in RV/PVM-coinfected mice compared to mock/PVM-infected mice on days 4 and 6 (Fig 7H). This is likely due to
Discussion
Previously, we found that inoculation of mice with a mild respiratory viral pathogen, RV or murine coronavirus MHV-1, two days before PR8 provided significant protection against PR8-mediated disease (9). In this study, we expanded these results to show that RV-mediated protection was not specific to PR8, but also provided significant disease protection against a respiratory virus from another viral family, PVM. This is in agreement with other studies showing protection afforded by viral coinfection (10-12). Despite the commonality of coinfection resulting in reduced disease severity, there are differences between the virus combinations in the kinetics of disease and viral replication. Coinfection by RV provided more effective protection against PVM than PR8. RV/PVM coinfected mice had little to no signs of disease (Fig 1) and significantly limited PVM replication (Fig 4). In contrast, coinfection by RV prevented mortality, but not morbidity, associated with PR8 infection, and reduced viral gene expression but did not prevent infection by PR8 (Fig 4) (9). Further, RV given concurrently with PVM was as effective as when it was given two days before PVM (Fig 1). In contrast, RV was less effective at reducing the severity of PR8 when given concurrently and also exacerbated disease when it was given two days after PR8 (9). We also observed differences in the kinetics of gene expression in response to these virus pairs.
Host (Fig 3) and viral (Fig 4) gene expression changes in response to PVM were delayed compared to PR8, thereby giving a larger window for RV-mediated protection. Thus, RV may be inducing antiviral mechanisms that are more effective against PVM, or different mechanisms may be responsible for inhibiting PVM infection and mediating effective clearance of PR8. We used transcriptomic and flow cytometry analyses to identify potential mechanisms that mediate protection against PR8 and PVM in mice that were coinfected with RV.
Analysis of RV-inoculated mice on day 0 (two days after RV inoculation) revealed up-regulation of 327 genes. These genes were highly enriched in GO categories that involved cell division or chemokine signaling (S2 Table). Despite expression of several chemokine and chemokine receptor genes, we did not observe a dramatic increase in immune cells in the lungs of RV-infected mice on day 0 (Fig 7). Although our flow cytometry results had variability in RV-infected mice on day 0 between our studies, we detected a significant increase in interstitial macrophages in the RV/PVM study (Fig 7G).
In contrast to early recruitment of neutrophils, RV-coinfected mice had reduced numbers of neutrophils and interstitial macrophages later during infection (Fig 7). This decrease in inflammatory cell recruitment could be a result of reduced viral infection (Fig 4) and

We analyzed flow cytometry (Fig 7) and viral read count (Fig 4) data resulting from our experiments to identify time-varying differences between mice infected with PR8 or PVM alone or coinfected with RV. To this end, we used negative binomial regression on each response variable with an explanatory model that had a main effect of days post-infection, a main effect of treatment (single or coinfection), and the interaction between the two main effects. Response variables were the number of CD11b cells, neutrophils, interstitial macrophages, alveolar macrophages, and viral read counts; all response variables were normalized versus total cell count except for viral RNA read count, which was normalized against total RNA read count. Based on a prior visual investigation of our data, it seemed some of our response variables might be better fit with a quadratic time term. Because of this we fit an alternative model that included an orthogonal polynomial of degree 2 for time. We assessed whether the quadratic model was better than the linear model using a likelihood ratio test and chose the quadratic model if it offered a significant improvement in fit over the simpler linear model. The significance of treatment and time was determined using a type-I ANOVA. To detect differences between treatments at a given time, we also performed post hoc pairwise comparisons of the modeled mean at our observational time points (days 0, 2, 4, and 6) using the emmeans package in R. Supplemental Table 3 | 2020-11-12T09:09:18.039Z | 2020-11-06T00:00:00.000 | {
"year": 2020,
"sha1": "31600b2b1dfbcb565f54ecd145ee6d137a9416e0",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/05/03/2020.11.06.371005.full.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1bbb393c94dc6165d7a75101ba740b5a47fd9dde",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
60441819 | pes2o/s2orc | v3-fos-license | Direct observations of a surface eigenmode of the dayside magnetopause
The abrupt boundary between a magnetosphere and the surrounding plasma, the magnetopause, has long been known to support surface waves. It was proposed that impulses acting on the boundary might lead to a trapping of these waves on the dayside by the ionosphere, resulting in a standing wave or eigenmode of the magnetopause surface. No direct observational evidence of this has been found to date and searches for indirect evidence have proved inconclusive, leading to speculation that this mechanism might not occur. By using fortuitous multipoint spacecraft observations during a rare isolated fast plasma jet impinging on the boundary, here we show that the resulting magnetopause motion and magnetospheric ultra-low frequency waves at well-defined frequencies are in agreement with and can only be explained by the magnetopause surface eigenmode. We therefore show through direct observations that this mechanism, which should impact upon the magnetospheric system globally, does in fact occur.
Planetary magnetic fields act as obstacles to solar/stellar winds with their interaction forming a well-defined region of space known as a magnetosphere. The outer boundary of a magnetosphere, the magnetopause, is arguably the most significant since it controls the flux of mass, energy, and momentum both into and out of the system, with the boundary's motion thus having wide ranging consequences. Magnetopause dynamics, for example, can cause loss of relativistic radiation belt electrons 1 ; result in field-aligned currents directing energy to the ionosphere 2 ; and launch numerous modes of magnetospheric ultra-low frequency (ULF) waves 3,4 that themselves transfer solar wind energy to radiation belt 5 , auroral 6 , and ionospheric regions 7 . On timescales greater than ~6 min Earth's magnetopause responds quasistatically to upstream changes to maintain pressure balance 8 . Simple models treating the dayside magnetopause as a driven damped harmonic oscillator arrive at similar timescales [9][10][11] . How the boundary reacts to changes over shorter timescales is not fully understood.
It was proposed that plasma boundaries, including the dayside magnetopause, may be able to trap impulsively excited surface wave energy forming an eigenmode of the surface itself 12 . The magnetopause surface eigenmode (MSE) therefore constitutes a standing wave pattern of the dayside magnetopause formed by the interference of surface waves propagating both parallel and anti-parallel to the magnetospheric magnetic field which reflect at the northern and southern ionospheres. Its theory has been developed using ideal incompressible magnetohydrodynamics (MHD) in a simplified box model, as depicted in Fig. 1a-c along with expected polarisations (panels d, e) 13 . The signature of MSE within the magnetosphere should be a damped evanescent fast-mode magnetosonic wave whose perturbations could significantly penetrate the dayside magnetosphere 14 . Although this simple model neglects many factors which might preclude the possibility of MSE, global MHD simulations and applications of the theory to more representative models suggest MSE should be possible at Earth with a fundamental frequency typically less than 2 mHz 14,15 . The considerable variability of Earth's outer magnetosphere, however, might suppress MSE's excitation efficiency 16 . The simulations have largely confirmed the theorised structure and polarisations of MSE, but revealed that the relative phase of the field-aligned magnetic field perturbations differed from the box model prediction by 50° 15 .
There exist numerous possible impulsive drivers of MSE including interplanetary shocks 17 , solar wind pressure pulses 18 , and antisunward plasma jets 19 , all of which are known to result in magnetopause dynamics and magnetospheric ULF waves in general. However, no direct evidence of MSE currently exists and potential indirect evidence has largely been inconclusive. Space-based studies have invoked MSE to explain recurring frequencies of both magnetopause oscillations 20,21 and narrowband ULF waves excited by upstream jets 22 , however other mechanisms could not unambiguously be ruled out and this interpretation of the results appears inconsistent with later MSE modelling 14 .
Multi-instrument ground-based searches in the vicinity of the open-closed magnetic field line boundary suggest MSE do not occur 16,23 . While idealised theoretical treatments of plasmapause surface waves suggest MSE might be little affected by the ionosphere and thus observable in ground-based data 24 , applications of theory specifically to MSE are currently lacking, though, and thus it is unclear exactly what their ground-signatures should be.

Fig. 1 Schematic of the magnetopause surface eigenmode in a box model. a Box model equilibrium featuring the magnetopause (black) separating the magnetosheath (red) and magnetosphere (dark blue arrows depict the geomagnetic field bounded by the northern and southern ionospheres coloured light blue). The directions of the field-aligned coordinate system in this model are also shown where R is radial, A azimuthal and F field-aligned. Subsequent panels depict n = 1 b and n = 2 c MSE. The midpoint of the phase is indicated as the black dot, which corresponds to the location of the MSE n = 1 antinode and n = 2 node. Expected MSE polarisations in different regions of the magnetosphere for the magnetopause standoff distance (grey dashed), radial velocity (green), radial (blue) and field-aligned (red) magnetic field components are shown on the right d, e.
One reason perhaps why MSE, if it exists, may not have yet been observed is that impulsive drivers tend to recur on short timescales and/or are typically embedded within high levels of turbulence 17,19 . These perhaps disrupt MSE or result in complicated superpositions with various other modes of ULF wave. Evidence for other MHD eigenmodes has relied on multipoint and polarisation observations, comparing these with theory and simulations [25][26][27] . Therefore, multipoint observations of the magnetopause and magnetospheric response to an isolated impulsive driver may be the ideal scenario for unambiguous direct evidence of MSE.
Here we present observations at Earth's magnetosphere of an event which adhered to this strict combination of spacecraft configuration and driving conditions. We show that a rare isolated antisunward plasma jet impinged upon the magnetopause resulting in boundary oscillations and magnetospheric ULF waves. While the driving jet was impulsive and broadband, the response was narrowband at well-defined frequencies. By carefully comparing the observations with the expectations of numerous possible mechanisms, we show that the response to the jet can only be explained by the magnetopause surface eigenmode. We therefore present unambiguous direct observations of this eigenmode, which should exhibit global effects upon Earth's magnetosphere.
Results
Overview. Observations are taken from the THEMIS mission on 7 August 2007 between 22:10 and 22:50 UT, a previously reported interval 28,29 . The spacecraft were ideally arranged in a string-of-pearls configuration close to the magnetopause in the mid-late morning sector and <3° northwards of the magnetic equatorial plane, as depicted in Fig. 2a, b. Subsequent panels in Fig. 2 show time-series observations in the magnetosheath (panels c, d), at the magnetopause (panels e-g), and within the magnetosphere (panels h, i). The dynamic spectra corresponding to these observations are shown in Fig. 3a-g.
Magnetosheath observations. THB was predominantly located in the region immediately upstream of the boundary, the magnetosheath, as evidenced by the dominance of the thermal pressure P th (red) over the magnetic pressure P B (blue) in Fig. 2d. At around 22:25 UT, following an outbound magnetopause crossing, THB observed an antisunward magnetosheath jet 19 lasting ~100 s with peak ion velocity ~390 km s −1 directed approximately along the Sun-Earth line (panels a-c). An increase in the antisunward dynamic pressure P dyn,x and thus also the total pressure acting on the magnetopause P tot,x = P B + P th + P dyn,x was associated with the jet (panel d). Unlike many magnetosheath jets this structure was isolated with no other significant pressure variations observed for tens of minutes afterwards 19 . The solar wind dynamic pressure was steady during this interval (grey line in panel d), with speed (average and spread) of 609 ± 10 km s −1 and density of 2.7 ± 0.1 cm −3 . Time-frequency analysis (see Methods) revealed the jet was impulsive and broadband; power enhancements in the total pressure were contained within the jet's cone of influence with no statistically significant peaks at discrete frequencies (Fig. 3a).
Magnetopause observations. The magnetopause passed over four of the spacecraft (THB-E) several times. Examples of such crossings are shown in Fig. 2e, f for THC, with all crossings indicated as the coloured squares in panel g by geocentric radial distance along with the inferred magnetopause position at all times estimated through interpolation (see Methods). At least two large-amplitude (≳0.4 R E ) inward oscillations of the boundary followed the jet. The first oscillation was largest, being observed by all four spacecraft, whereas the amplitude had already decreased by the second oscillation. The wavelet transform of the interpolated magnetopause position (Fig. 3b) shows a narrowband enhancement in power with mean peak frequency 1.8 mHz.
Projections of the normals to the magnetopause, arrived at using the cross product technique described in the Methods section, form a fan azimuthally as shown in Fig. 2a, b. However, there was no systematic separation in direction of inbound (purple) and outbound (orange) normals. Using these normals, timing analysis was performed (described in Methods) for each inward/outward motion of the boundary. During the first inward motion of the magnetopause, concurrent with the jet, the average boundary velocity along the normal and its spread were −238 ± 76 km s −1 and showed signs of acceleration with higher velocities resulting when using later crossings. This magnetopause motion is consistent with the antisunward ion velocities of the observed magnetosheath jet (Fig. 2c). Therefore, this initial magnetopause motion was a result of the jet's impulsive enhancement in the total pressure acting on the boundary. For the subsequent magnetopause motions, the speeds were similar to one another at 24 ± 10 km s −1 , consistent with the 27 km s −1 peak velocities expected for 0.4 R E sinusoidal oscillations of the boundary at 1.8 mHz. Decomposing the boundary velocities into components normal and transverse to the undisturbed magnetopause (see Methods) showed that there was little transverse motion (8 ± 8 km s −1 ). Indeed, the azimuthal component was consistent with zero (−1 ± 12 km s −1 ). No systematic differences between inbound and outbound crossings were present within these results.
At 22:22:30 UT, before the magnetosheath jet, a~250 km s −1 reconnection outflow 29 was observed during a magnetopause crossing (Fig. 2c), however, no further clear evidence of local reconnection occurred during subsequent crossings, likely because the observed magnetic shears were low (mean and spread were 34 ± 22°).
Magnetosphere observations. The magnetopause did not pass over THA and thus it provided uninterrupted observations of the outer magnetosphere in the vicinity of the magnetopause. The magnetic field and ion velocity observations are shown in Fig. 2h, i with corresponding wavelet spectra in Fig. 3c-g. An initial large-amplitude transient was observed immediately following the jet, chiefly in the radial components of the magnetic field B R,sph and ion velocity v iR,sph as well as the azimuthal ion velocity v iA,sph . Longer period ULF wave activity occurred afterwards. The field-aligned magnetic field perturbation B F,sph showed a 1.7 mHz signal (Fig. 3e), in approximate antiphase to the magnetopause location (Fig. 2g, h). While the B R,sph time series appeared to exhibit a similar but opposite signal to B F,sph (Fig. 2h), this did not satisfy our significance test. B R,sph did, however, feature significant oscillations peaked at 3.3 mHz (Fig. 3c). The v iR,sph time series exhibited some small-amplitude complex oscillations on timescales potentially consistent with those observed in the magnetic field and boundary location (Fig. 2i), however the wavelet transform revealed no statistically significant periodicities. A clear 6.7 mHz signal dominated v iA,sph (Figs. 2i and 3g), a higher frequency than those previously discussed. No appreciable variations were present in v iF,sph . Note that none of the statistically significant signals commenced before the magnetosheath jet's cone of influence (white dashed lines in Fig. 3a-g) and therefore these oscillations did not precede the jet.
It is surprising that no obvious radial velocity perturbations associated with the magnetopause motion were present, regardless of whether this motion was associated with an eigenmode. However, through modelling (see Methods) we find that the expected~27 km s −1 amplitude velocity oscillations based on the magnetopause motion would only be detected as 6 km s −1 due to instrumental effects associated with cold magnetospheric ions and the spacecraft potential. The amplitude of 1.0-2.0 mHz band radial velocity perturbations were in good agreement with this, as shown in Fig. 3h.
We investigate the phase relationships between the three signals present in the THA data (Fig. 3h-k). Similar coherent phase relationships were found for the two lower frequency signals with B R,sph in quadrature with v iR,sph (means and spreads of −96 ± 4° and −86 ± 4° for the 1.0-2.0 mHz and 2.8-3.5 mHz bands, respectively) and some 50° away from antiphase with B F,sph (−138 ± 5° and −123 ± 8°), as well as the phase between B F,sph and v iR,sph being consistent with 50° out from quadrature (−42 ± 8° and −37 ± 12°). In the 4.9-8.6 mHz band v iA,sph led B A,sph by 82 ± 6°, likely indicating a toroidal field line resonance (FLR, a standing Alfvén wave) 27 .

Fig. 2 c Ion velocity at THB in GSM (x, y, z as blue, green, red) and its magnitude (black). A reconnection exhaust is indicated by RX. d Magnetic (blue), thermal (red), antisunward dynamic (green) and total antisunward (black) pressures at THB along with lagged solar wind dynamic pressure observations by Wind (grey). e Magnetic field at THC in GSM (colours as before). f Omnidirectional ion energy flux at THC. g THEMIS magnetopause crossings as a function of geocentric radial distance (coloured squares) with the interpolated magnetopause location shown in black. h Magnetic field perturbations at THA in field-aligned (FA) co-ordinates (radial, azimuthal, field-aligned as blue, green, red). i Ion velocity perturbations at THA in FA co-ordinates (colours as before). Vertical dotted lines indicate times of the magnetosheath jet whereas dashed lines indicate magnetopause crossings.
Solar wind observations. While the solar wind dynamic pressure was steady throughout this period, a number of fluctuations in the interplanetary magnetic field (IMF) were present, shown in Fig. 4b, particularly with several sign reversals in B z,sw . Many of these fluctuations were transmitted to the magnetosheath and observed by THB, as shown in panel a where observations within the magnetosphere have been removed for clarity. It can be seen that some of these sign reversals in fact preceded the magnetosheath jet. While the magnetosheath magnetic field observations were sparse and rather turbulent, there is an apparent near one-to-one correspondence between the sign reversals in the solar wind and magnetosheath observations during the period of interest (see Methods for details of the lagging procedure). Nonetheless, we present an additional 30 min of solar wind data either side of the interval to allow for possible errors. The magnetosheath jet occurred around the time of a magnetic field rotation which changed the IMF cone angle (the acute angle between the IMF and the Sun-Earth line) and thus the character of the bow shock upstream of the THEMIS spacecraft. When the cone angle is below ~45° the subsolar bow shock is quasi-parallel, whereby suprathermal particles can escape far upstream leading to various nonlinear kinetic processes 30 . This results in a much more complicated shock region and turbulent magnetosheath downstream, with various transient phenomena that can impinge upon the magnetopause, e.g. magnetopause surface oscillations occur more frequently under low cone angle conditions likely because of such transients 21 . Magnetosheath jets are just one example, with some of the strongest jets being caused by changes in the IMF orientation from quasi-perpendicular to quasi-parallel conditions 31 , as appeared to be the case during this event. Following this short period of low cone angle IMF, the shock conditions were oblique or quasi-perpendicular for most of the rest of the interval.
The variations present in the upstream solar wind did not appear to be periodic. The statistical significance of the wavelet power compared to autoregressive noise is shown for the three components of the IMF (Fig. 4d-f) as well as for the solar wind density (Fig. 4h) and speed (Fig. 4j). Throughout the extended interval presented, there were very few enhancements in wavelet power for any of the quantities considered that were even locally significant (let alone the more strict global significance we have imposed on the THEMIS observations). Crucially, there were no significant enhancements peaked at (or near) either 1.7-1.8 or 3.3 mHz frequencies (indicated by the horizontal dotted lines). Given that the aperiodic IMF variations were present before the jet but the magnetopause motions and magnetospheric ULF waves all occurred directly following it, we conclude that the magnetosheath jet was indeed the driver of the narrowband signals observed by THEMIS.

Fig. 3 Observed dynamic spectra and phase relationships. a-g Wavelet dynamic power spectra of the magnetosheath total antisunward pressure a, magnetopause location b, magnetospheric radial c, azimuthal d and field-aligned e magnetic field perturbations, and magnetospheric radial f and azimuthal g ion velocity perturbations. Statistically significant peaks are indicated by black lines. The times of the magnetosheath jet (black dotted) and its cone of influence (white dashed) are also shown. h-k Wavelet band-pass filtered perturbations of the magnetospheric radial velocity (green) and radial (blue) and field-aligned (red) magnetic field perturbations at THA h, j along with their cross phases i, k, where cyan is the difference between radial magnetic field and radial velocity, yellow is between the field-aligned magnetic field and radial velocity, and magenta is between the radial and field-aligned magnetic fields.
Eigenfrequency estimates. To aid in our interpretation of the observed signals, we compare their frequencies with estimates of various resonant ULF wave modes applied to this event using the WKB method. From an existing database of numerical calculations within representative models 14 the n = 1 MSE is expected at 1.4 mHz during this interval, with its antinode located at the black circle in Fig. 2b. Spacecraft potential observations from THD and THE were used to arrive at the radial profile of the electron density 32 shown in Fig. 5b (black). See Methods section for details. We combine the resulting density profile with a T96 magnetospheric magnetic field model 33,34 using hourly averaged upstream conditions, an average ion density of 6.8 amu cm −3 35 , and assuming a power law for the density distribution along the field line using exponent 2 36 . Fundamental field line resonance (FLR) frequencies are then given at each radial distance by f FLR = [2 ∫ ds/v A ] −1 , where v A is the local Alfvén speed and the integration occurs between the two footpoints of each field line, with the results shown in Fig. 5e. At THA's location this is estimated to be 6.7 mHz (panel e) in excellent agreement with the observed signal in v iA,sph , hence the observed frequency, polarisation and relative amplitudes point towards this signal being an n = 1 toroidal FLR.
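As a rough illustration of this WKB time-of-flight estimate, the short Python sketch below integrates 1/v A along a model field line to obtain the fundamental frequency. The field-line length and Alfvén speed profile used here are invented placeholder values, not the T96/density model used in the paper.

import numpy as np

def fundamental_flr_frequency(s, v_a):
    # WKB estimate of the fundamental field line resonance frequency:
    # f = [2 * integral(ds / v_A)]^-1, integrated between the two footpoints.
    travel_time = np.trapz(1.0 / v_a, s)   # one-way Alfven travel time (s)
    return 1.0 / (2.0 * travel_time)       # fundamental eigenfrequency (Hz)

# Placeholder profile: a 20 R_E long field line whose Alfven speed is lowest
# near the equator and highest near the ionospheric footpoints.
R_E = 6.371e6                                         # metres
s = np.linspace(0.0, 20.0 * R_E, 500)                 # arc length along the field line
v_a = 5.0e5 + 1.5e6 * (2.0 * s / s[-1] - 1.0) ** 2    # Alfven speed (m/s)

print("fundamental FLR frequency ~ %.1f mHz" % (1e3 * fundamental_flr_frequency(s, v_a)))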
Fast-mode resonances (FMRs), also known as cavity or waveguide modes, are radially standing fast-mode waves between boundaries and/or turning points 37,38 . In the outer magnetosphere, the lowest frequency FMRs are quarter wavelength modes resulting from over-reflection of fast-mode waves. It is thought that these may occur for magnetosheath flow speeds ≳500 km s −1 39 . However, at the local times of the observations this was not satisfied for either the ambient or the jet's flow speeds. Nonetheless, we still estimate the lowest possible FMR frequency, given by f FMR = [4 ∫ dr/v A ] −1 with the integral taken from r ib to r mp . This corresponds to a fast-mode wave propagating (assuming low plasma beta) purely in the ±R direction forming a quarter wavelength mode between the magnetopause r mp and an inner boundary at the Alfvén speed local maximum r ib (at r = 3.2 R E ) 40 . From the Alfvén speed profile for this event we calculate this to be 6.3 mHz, clearly much higher than the two remaining signals which were observed.

Fig. 4 Upstream solar wind observations. a Magnetosheath magnetic field at THB in GSM components (x, y, z as blue, green, red) and magnitude (black). Observations within the magnetosphere have been removed for clarity. The times of the magnetosheath jet are shown by vertical black dotted lines. b-j Lagged Wind observations of the pristine solar wind: b magnetic field GSM components (x, y, z as blue, green, red) and magnitude (black), c cone angle, g density, and i speed. The significance of their respective wavelet spectra are also shown in d-f, h, j, where the power has been divided by an autoregressive noise model. Dotted horizontal lines depict frequencies of 1.7-1.8 and 3.3 mHz.
Ground magnetometer observations. Unfortunately, there was very poor ground magnetometer station coverage near the spacecrafts' footpoints with only one station available, Pebek (PBK; see Methods section for selection criteria). This station was nearly conjugate with THA, whose footpoint was at (66.3°, −132.0°) geomagnetic latitude and longitude, respectively. The observations are shown in Fig. 6.
A transient, similar to that at THA immediately following the jet, was observed in the H and E components. Its timing was consistent with the ~40 s Alfvén travel time from the equatorial magnetosphere to the ground. Similar to the THA observations, following this transient other oscillations also occurred. Time-frequency analysis identified several statistically significant signals. In the H component this peaked at 3.5 ± 0.2 mHz and was contained within the jet's cone of influence. A later signal following the jet's cone of influence was present in the E component at 3.9 ± 0.1 mHz. The former was likely the ground signature of the 3.3 mHz signal observed by THA, however it is not entirely clear if this is also the case with the latter and if so why a change in polarisation occurred. Both these signals in the ground data had corresponding signatures in the Z component, though these were weak and very short lived (only 2 datapoints for each were statistically significant). While a power enhancement consistent with the 1.7-1.8 mHz signal could be seen in the H component, this did not satisfy our significance test. Finally, the 6.7 mHz toroidal FLR at THA might be expected in the H component on the ground due to the approximate 90° rotation of Alfvén waves by the ionosphere 41 . However, its frequency was not well resolved by the coarse data being only 20% lower than the Nyquist frequency. Nonetheless, the FLR was likely the cause of the triangular wave-like oscillations present in this component following the initial transient.
The poor coverage and low resolution of the ground magnetometer data mean it is insufficient in providing additional evidence towards the physical mechanism behind the THEMIS observations.
Discussion
We have presented THEMIS observations of the magnetopause and magnetospheric response to an isolated, impulsive antisunward magnetosheath jet. The~100 s duration jet triggered narrowband oscillations of both the magnetopause at 1.8 mHz and magnetospheric ULF waves with peak frequencies of 1.7, 3.3, and 6.7 mHz. We now compare the observations with several possible interpretations.
(1) Direct driving. The solar wind dynamic pressure was steady throughout this interval and while there were variations present in the IMF, these were aperiodic. The magnetosheath jet's total pressure was broadband and impulsive and it has been established from the magnetopause motion and the start of the wave activity that the jet triggered the observed signals. Since no significant narrowband oscillations at (or near) these frequencies were present upstream in either the solar wind or magnetosheath, we conclude that the observed response cannot have been directly driven.

(2) Propagating Alfvén or fast-mode waves. The associated perturbations in v sph and B sph should either be in-phase or antiphase, unlike the observations. Furthermore, neither of these modes can explain the magnetopause motion nor the origin of the narrowband signals given the broadband driver.

(3) Travelling surface waves. While the azimuthal fanning of the boundary normals is consistent with travelling surface waves, perhaps due to the Kelvin-Helmholtz instability, the lack of a difference between inbound and outbound crossings is not 42 , assuming linear waves. There is no evidence from the multipoint interpolated magnetopause position for nonlinear overturning surface waves, pointing instead to a simple wave pattern. Crucially, timing analysis of the boundary (unaffected by assumptions of linearity) revealed the motions were largely directed along the normal to the undisturbed magnetopause, with azimuthal velocities consistent with zero i.e. no transverse propagation.

(4) Field line resonance. We have already concluded that the 6.7 mHz signal corresponded to a fundamental toroidal FLR at THA because of the observed polarisation and excellent agreement with the estimated frequency of this mode. The v iR,sph − B R,sph phase relationships for the 1.7-1.8 and 3.3 mHz signals could be consistent with poloidal FLRs 27 . The poloidal mode is known to have slightly lower natural frequencies than the toroidal, however, these differences are typically no more than 15-30% 43 . Therefore, given that the n = 1 toroidal FLR frequency at THA was 6.7 mHz during this event, the much lower frequencies of 1.7-1.8 and 3.3 mHz cannot be explained as poloidal FLRs. Additionally, magnetopause motion is not expected to result from an FLR located several R E Earthward of the boundary.

(5) Fast-mode resonance. Observational signatures of radially standing fast-mode waves require ±90° phase differences between v iR,sph , equivalent to the azimuthal electric field via E = −v × B, and B F,sph 25,26 , which were not observed. Exceptions to this perhaps occur in cases of exceptionally leaky or over-reflecting boundaries, however this would not be the case at the local times of the observations due to the moderate flow speeds present 39 . The large-amplitude magnetopause motions with near-zero azimuthal phase velocities are also inconsistent with a fast-mode resonance interpretation. Finally, we estimate that during this event cavity/waveguide modes of any type cannot explain frequencies below 6.3 mHz. The difference between this estimate and the observed lower frequency signals is much larger than the expected errors (~3% 44 ).

(6) Pulsed reconnection. While a reconnection outflow was seen before the magnetosheath jet, no clear signatures of local magnetopause reconnection were observed subsequently throughout the event.
These are all in agreement with the statistically significant peaks in the wavelet spectra, after the instrumental effects on the ion velocity due to the spacecraft potential were modelled and taken into account. The similarity in observed magnetopause normals for inbound and outbound crossings as well as an azimuthal boundary velocity consistent with zero are both expected for a standing surface wave. The phase relationships between the quantities for both signals were in good agreement with theoretical expectations of MSE 13 in the regions tan k F F > 0 as depicted in Fig. 1e when also taking into account the reported 50°phase shift of B F,sph in global MHD simulations of MSE 15 . Given the spacecraft were just southward of the expected MSE phase midpoint (Fig. 2b) this is exactly the polarisation expected for the fundamental. In contrast, the second harmonic should see the phase relations for tan k F F < 0 in this region. While in the WKB approximation the n = 1 antinode and n = 2 node coincide, this may not be the case in the full solution which could exhibit anharmonicity as is the case with FLRs 36 .
We therefore conclude that THEMIS observed both the n = 1 and n = 2 MSEs as the 1.7-1.8 and 3.3 mHz signals respectively, providing unambiguous direct observations of this eigenmode made possible only due to the fortuitous multispacecraft configuration during a rare isolated impulsive magnetosheath jet. MSE constitute a natural response of the dayside magnetopause, with these observations at last confirming that plasma boundaries can trap surface wave energy forming an eigenmode. Magnetopause dynamics in general have wide ranging effects throughout the entire magnetospheric system and MSE should, at the very least, act as a global source of magnetospheric ULF waves that can drive radiation belt/auroral interactions and ionospheric Joule dissipation.
It remains to be seen how often MSE occur. Future work could search the large statistical databases of magnetosheath jets for other potential events (satisfying the strict observational criteria presented in this paper) to provide further direct evidence. Other impulsive drivers could also be considered including interplanetary shocks and solar wind pressure pulses. However, since MSE are difficult to observe directly, remote sensing methods should be developed. The polarisations of magnetospheric ULF waves from spacecraft observations, as presented in this paper, may be one such method. However, potentially more useful would be ground-based signatures from magnetometers and ionospheric radar due to the wealth of data being produced. Currently, the ground signatures of MSE are not well understood, having received little theoretical attention. However, in this paper we show that MSE can exhibit at least some similar signals to the in situ spacecraft observations within conjugate high-latitude ground magnetometer data. Further investigations using theory, simulations and observations should explore all possible remote sensing methods such that the occurrence rates and properties of MSE more generally can be characterised.
Methods
Data. Observations in this paper are taken from the five Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft 45 in particular using the Fluxgate Magnetometers (FGM) 46 , Electrostatic Analysers (ESA) 47 and Electric Field Instruments (EFI) 48 all at 3 s resolution. We used the Geocentric Solar Magnetospheric (GSM) coordinate system for vector measurements from all spacecraft except THA. For this spacecraft, since we use it to evaluate the magnetospheric ULF wave response, we define a field-aligned (FA) coordinate system. The linear trend of each GSM magnetic field component was determined between 21:45 and 23:30 UT using iteratively reweighted least squares with bisquare weighting 49,50 . This trend was used to define the field-aligned direction F of the FA system and was subsequently subtracted from the magnetic field data. The azimuthal direction A, which nominally pointed eastward, was given by the cross product of F with the spacecraft's geocentric position. Finally the radial direction, predominantly directed radially outwards from the Earth, was determined by R = A × F. The equivalent directions of the FA system in the MSE box model are shown in Fig. 1.
Solar wind observations at the L1 Lagrange point were taken from the Wind spacecraft's 3-D Plasma and Energetic Particle Investigation 51 and Magnetic Field Investigation 52 both at 3 s resolution. In order for this data to approximately correspond to the shocked solar wind arriving in the vicinity of the magnetopause, a constant time lag was applied. First the data were time lagged by 40 min 27 s, the average amount given in the OMNI dataset from the Wind spacecraft to the bow shock nose. An additional 2 min lag to the magnetopause was subsequently added, determined by manually matching up sign reversals in the solar wind magnetic field observations with those in the magnetosheath at THB (Fig. 4a, b). Using Advanced Composition Explorer (ACE) solar wind data instead of Wind did not substantially change any of the subsequent results.
Finally, ground magnetometer data were also used. Ground stations were chosen by computing the locations of the footpoints of the THEMIS spacecraft from a T96 model 33,34 . Only ground stations on closed field lines (according to T96) no more than 1 R E earthward from the observations and within ±1 h of magnetic local time were selected. This, unfortunately, resulted in only one station, Pebek (PBK) in the Russian Arctic. Data from this station were only available at 60 s resolution and are presented in geomagnetic co-ordinates where the horizontal components H and E point geomagnetically north and east, respectively, and Z is the vertical component. The median was subtracted from each component.
Magnetopause motion. To track the location and motion of the magnetopause, the innermost edge of the magnetopause current layer was identified manually from THEMIS FGM data and piecewise cubic hermite interpolating polynomials 53 were used to estimate the radial distance to the boundary from all crossings (shown as the coloured squares in Fig. 2g) at all times, resulting in the black line. This method was chosen because it does not suffer from overshooting and anomalous extrema as much as other spline interpolation methods, thus any resulting oscillations present would be underestimates. Nonetheless, the crucial aspects of the results presented, such as the time-frequency analysis, proved to be largely insensitive to the interpolation method used.
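A minimal Python sketch of this interpolation step is given below; SciPy's PchipInterpolator implements the shape-preserving piecewise cubic Hermite scheme referred to here, while the crossing times and radial distances are invented purely for illustration.

import numpy as np
from scipy.interpolate import PchipInterpolator

# Invented crossings: time (s from an arbitrary reference) and geocentric
# radial distance of the magnetopause (R_E) at each identified crossing.
t_cross = np.array([0.0, 120.0, 300.0, 480.0, 700.0, 950.0])
r_cross = np.array([11.2, 10.8, 10.4, 10.9, 10.5, 11.0])

# Piecewise cubic Hermite interpolation is monotone between knots, so it does
# not overshoot the way some other spline interpolants can.
mp_position = PchipInterpolator(t_cross, r_cross)

t = np.linspace(t_cross[0], t_cross[-1], 200)
r_mp = mp_position(t)            # estimated boundary location at all times
print(r_mp.min(), r_mp.max())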
Boundary normals for each magnetopause crossing were also estimated. This was done by taking the cross product of 30 s averages of magnetic field observations either side of each crossing, which assumes that the magnetopause was a tangential discontinuity 54 . This method was used since minimum variance analysis 55 was poorly conditioned throughout the interval (the ratio of intermediate to minimum eigenvalues was~2). The normals were insensitive to the precise averaging period used. Projections of these normals are shown in Fig. 2a, b where we distinguish between inbound and outbound crossings by colour. Magnetic shear angles were calculated from the same averaged magnetic field observations.
Finally, two-spacecraft timing analysis was also performed. Using the ascertained magnetopause normals n, the velocity of the boundary along the normal is given by v = n · (r α − r β ) / (t α − t β ), where r α is the position of spacecraft α during the magnetopause crossing at time t α . This assumes a planar surface with constant speed. For each inward/outward motion of the magnetopause, the analysis was applied to all spacecraft pairs using both sets of normals. The multiple THC crossings at around 22:37 UT were neglected. Taking the average magnetopause normal over all crossings N as representative of the undisturbed boundary, each determined magnetopause velocity can be decomposed into parallel and perpendicular velocities, v ∥ = (v n) · N and v ⊥ = v n − v ∥ N. Replacing N with a normal from a model magnetopause does not significantly affect the results.
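The following Python sketch illustrates the timing relation and the decomposition about the average normal N; the spacecraft positions, crossing times and normals below are invented purely for illustration.

import numpy as np

def boundary_speed_along_normal(n, r_a, t_a, r_b, t_b):
    # Planar-boundary, constant-speed timing estimate between spacecraft a and b.
    n = np.asarray(n, float) / np.linalg.norm(n)
    return np.dot(n, np.asarray(r_a, float) - np.asarray(r_b, float)) / (t_a - t_b)

def decompose_velocity(v_n, n, N):
    # Split the velocity vector v_n * n into components parallel and
    # perpendicular to the average (undisturbed) boundary normal N.
    n = np.asarray(n, float) / np.linalg.norm(n)
    N = np.asarray(N, float) / np.linalg.norm(N)
    v_vec = v_n * n
    v_par = np.dot(v_vec, N)             # along the undisturbed normal
    v_perp = v_vec - v_par * N           # transverse (in-surface) part
    return v_par, np.linalg.norm(v_perp)

# Invented example: positions in km, times in s, normals roughly sunward.
n_crossing = [0.95, 0.25, 0.10]
r_thc, t_thc = [66000.0, 15000.0, 3000.0], 10.0
r_thd, t_thd = [65200.0, 14500.0, 2900.0], 45.0
N_avg = [0.98, 0.15, 0.05]

v_n = boundary_speed_along_normal(n_crossing, r_thc, t_thc, r_thd, t_thd)
print(v_n, decompose_velocity(v_n, n_crossing, N_avg))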
Modelling ESA instrumental effects. The ESA instrument can only detect ions whose energy overcomes the spacecraft potential, however the majority of ions in the magnetosphere are cold 32 . During this interval we find the temperature of cold ions to be 18 eV by fitting a Maxwell-Boltzmann distribution to the population observed in the omnidirectional ion energy spectrogram at around 22:45 UT (Fig. 2f). While no spacecraft potential observations were available for THA, those from THC-E suggest a value of ~11 V at THA's location (Fig. 5a). A sinusoidal oscillation of the magnetopause r mp = C sin ωt would result in velocity v iR,sph = Cω cos ωt and using C = 0.4 R E we find that protons oscillating at 1.8 mHz would have a peak bulk kinetic energy ~4 eV, less than the assumed spacecraft potential. To estimate the effect on the data, we take one-dimensional velocity moments of the Boltzmann distribution corresponding to the cold ions, excluding all energies below the spacecraft potential. This suggests that the expected velocity oscillations of 27 km s −1 amplitude would only be detected as 6 km s −1 by the ESA instrument.
Wavelet transform. Time-frequency analysis of the data was performed using the Morlet wavelet transform 56 , with the resulting dynamic power spectra shown in Fig. 3a-g. At each time all peaks between 0.5-10 mHz whose power and prominence were both above the two-tailed global 99% confidence interval (using the Bonferroni correction 57 ) for an autoregressive AR(1) noise model were identified, shown as the black lines. The magnetosheath jet's cone of influence, the region within time-frequency space that is affected by the jet due to the scale-dependent windowing of the wavelet transform, is also shown as the white dashed lines. Significant narrowband signals were investigated by reconstructing a complex-numbered version of the time series from the Morlet wavelet transform across the bandwidth of each signal only 56 . The real part of the resulting time series is the band-pass filtered data whereas its phase is used to investigate polarisations. Note that it is not necessary for both time series to exhibit statistically significant power enhancements in the same region of time-frequency space for a coherent phase relationship to potentially exist between them within that region 58 .
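A bare-bones Python illustration of a Morlet wavelet power spectrum is sketched below; it omits the AR(1) significance testing, the Bonferroni correction and the cone-of-influence handling described above, and the test signal (a 1.8 mHz sinusoid plus white noise, sampled at 3 s) is an invented stand-in for the THEMIS data.

import numpy as np

def morlet_power(x, dt, freqs, w0=6.0):
    # Continuous Morlet wavelet transform of a real time series x sampled at
    # interval dt, evaluated at the requested frequencies (Hz); returns |W|^2.
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    t = (np.arange(n) - n // 2) * dt
    power = np.zeros((len(freqs), n))
    for i, f in enumerate(freqs):
        s = w0 / (2.0 * np.pi * f)          # wavelet scale for this frequency
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        W = np.convolve(x, np.conj(psi)[::-1], mode="same") * dt
        power[i] = np.abs(W) ** 2
    return power

dt = 3.0                                    # seconds, as for the THEMIS data
t = np.arange(0.0, 2400.0, dt)
x = np.sin(2.0 * np.pi * 1.8e-3 * t) + 0.3 * np.random.randn(t.size)
freqs = np.linspace(0.5e-3, 10.0e-3, 40)
p = morlet_power(x, dt, freqs)
print("peak of mean power near %.2f mHz" % (1e3 * freqs[np.argmax(p.mean(axis=1))]))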
Spacecraft potential inferred density. The electron density can be inferred from measurements of a spacecraft's potential and in this paper we use an empirical calibration determined for THEMIS 32 . The coefficients of this calibration, however, vary from spacecraft to spacecraft and can slowly drift with time. Unfortunately, the first epoch time for these coefficients was in January 2008. Given the agreement in spacecraft potential observations with radial distance for THC-THE (the only spacecraft for which EFI was deployed shown in Fig. 5a), we simply ensure the inferred densities are consistent between spacecraft. The densities for THD and THE agreed very well, however, THC exhibited some systematic differences in density (Fig. 5b). These differences largely occurred at much smaller L-shells, nonetheless, we neglect THC density observations for this reason.
To arrive at a radial density profile, we bin the spacecraft potential inferred densities from THD and THE by radial distance using 0.1 R E bins, taking the average. The results were subsequently median filtered over 0.5 R E and the profile was extended to the model magnetopause 59 using a constant extrapolation.
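A small Python sketch of this binning and smoothing step is given below, using invented spacecraft-potential-derived densities; the 0.1 R E bin width and 0.5 R E median window follow the text, while everything else is a placeholder.

import numpy as np
from scipy.signal import medfilt

# Invented samples: radial distance (R_E) and inferred electron density (cm^-3).
rng = np.random.default_rng(0)
r = rng.uniform(6.0, 11.0, 2000)
n_e = 200.0 * np.exp(-(r - 6.0)) + 5.0 + rng.normal(0.0, 2.0, r.size)

bin_edges = np.arange(6.0, 11.0 + 0.1, 0.1)             # 0.1 R_E bins
which = np.digitize(r, bin_edges)
profile = np.array([n_e[which == i].mean() for i in range(1, len(bin_edges))])

# Median filter over 0.5 R_E, i.e. a 5-bin (odd) kernel.
profile_smooth = medfilt(profile, kernel_size=5)
print(profile_smooth[:5])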
Data availability
THEMIS data and analysis software (SPEDAS) are available at http://themis.ssl.berkeley.edu. The OMNI data were obtained from the NASA/GSFC OMNIWeb interface at http://omniweb.gsfc.nasa.gov. Wind data were obtained from the NASA/GSFC CDAweb interface http://cdaweb.sci.gsfc.nasa.gov. | 2019-02-12T15:10:33.000Z | 2019-02-12T00:00:00.000 | {
"year": 2019,
"sha1": "cb5cda064e1be44c1ce222991b38e82e2935f5ca",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-018-08134-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c02f63a8b0d8e7675832c3ae6ab695c97edfe4f4",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
27759525 | pes2o/s2orc | v3-fos-license | How to Measure the Quantum Measure
The histories-based framework of Quantum Measure Theory assigns a generalized probability or measure $\mu(E)$ to every (suitably regular) set $E$ of histories. Even though $\mu(E)$ cannot in general be interpreted as the expectation value of a self-adjoint operator (or POVM), we describe an arrangement which makes it possible to determine $\mu(E)$ experimentally for any desired $E$. Taking, for simplicity, the system in question to be a particle passing through a series of Stern-Gerlach devices or beam-splitters, we show how to couple a set of ancillas to it, and then to perform on them a suitable unitary transformation followed by a final measurement, such that the probability of a final outcome of "yes" is related to $\mu(E)$ by a known factor of proportionality. Finally, we discuss in what sense a positive outcome of the final measurement should count as a minimally disturbing verification that the microscopic event $E$ actually happened.
Introduction
Ever since quantum theory was first put into the form of a complete mathematical scheme, there have been innumerable attempts to explain it and to understand what it is trying to tell us about the world. Depending on what version of quantum mechanics one follows, and how one interprets it, one needs to abandon one or another of the classical ideas we are comfortable with, such as causality, locality, or homomorphic logic [27]. Perhaps the central question that everyone faces is the so-called measurement problem, the fact that the theory appears to assert that when someone measures a system, its wave function "collapses". No longer a superposition, it now corresponds to a definite value for the physical property that has been measured and which the system has somehow acquired. Different interpretations explain this "collapse" differently, mostly by trying to explain it away. On the one hand, one could think that it is simply the way we update our description when we obtain new knowledge, that the physical properties characterising a system are always well defined, but we only learn about them when measuring. Such interpretations include hidden-variable theories and Bohmian Mechanics [3,4]. On the other hand, one could think that all the possible outcomes of a measurement actually occur, with the universe branching into multiple realities each time a measurement is performed. This is the many-worlds idea [6]. Or one could argue that the collapse is an illusion stemming from the decoherence that takes place when a system interacts with a measuring device or with an environment.
In this paper we will work in the framework of Quantum Measure Theory [24,17,8,11,25], which generalizes the mathematical concept of a measure-space so as to allow for quantal interference. When refashioned in this language, Quantum Mechanics appears as a generalized probability theory of a type inspired by the path integral. Instead of a wave function, Quantum Measure Theory works with the histories of the system, assigning a real number, the measure µ, to every set of histories. In some special measurement-situations, µ gives the probability of the outcome of an experiment one could perform, but in general one can not identify it with any observable probability.
The idea behind this reformulation is to arrive at an understanding which not only provides probabilities for certain types of laboratory events, but which goes further by offering a framework within which one can speak about the microworld directly, without needing to presuppose concepts such as experiment, observer or measurement. To that end, several schemes have been proposed in which reality is described by a certain mathematical combination of individual histories (individual particle trajectories for example) called a "coevent". Since multiple histories enter into this description of reality, one could be tempted to fit Quantum Measure Theory into the many-worlds interpretation. Or because it works with definite trajectories, one could also be tempted to fit it into Bohmian Mechanics. In fact, however, Quantum Measure Theory doesn't fit into any of these interpretations, and offers a distinctive vantage point from which one can view the measurement problem.
Quantum Measure Theory is also intended to be the right dynamical framework for Quantum Gravity. For the Causal Set approach [5] in particular, it provides a dynamical law which can describe the growth of the causal set, of the universe, without succumbing to the limitations which the Schrödinger equation encounters when a continuous, background time is unavailable. The ability to do without a fundamental notion of measurement or external agent is likewise important to a theory like quantum cosmology, whose field of application is one where no recognizable "observer" could exist [28,18,10,26].
A question that arises naturally in connection with Quantum Measure Theory is whether the measure µ has any experimental significance outside the special context in which it can be interpreted as the Born-rule probability of a particular instrument-event. To the extent that it does, this will enhance its status as an independent way to formulate quantum mechanics, and it will also suggest practical experiments which would test quantum predictions about events of a different type than one usually deals with.
The goal of this paper is to provide a positive answer to the question just raised. We will present schematically an experimental setup that will reveal the measure of any given set of histories (any given event), including events extended arbitrarily in time. As we have said, not every event E can be made to correspond with a projection operator (or member of a POVM) whose expectation value would be µ(E). To compensate for this we will need to couple the system to suitable ancillas and then to perform suitable transformations on them followed by a final projective measurement. But (perhaps surprisingly) we will not require anything more exotic. The procedure we will describe may be thought of as a way to filter which trajectories a particle can have travelled, based on a generalization of the Quantum Eraser [20,13,1]. Whether in Quantum Mechanics we can speak about particle trajectories as we are accustomed to do classically is not something that everyone agrees on [9,30], but our results will illustrate how one can do so consistently in Quantum Measure Theory.
The plan of this paper is the following. First we introduce Quantum Measure Theory ( § 2), then we introduce the system we will study ( §3), and then we explain how to couple our ancillas to the system ( §4) and how to process them so as to obtain the measure we are looking for ( §5 and §6). Finally we will suggest how to interpret our results ( § 7) and conclude with some summary remarks and possible extensions of our work ( §8). This paper is dedicated in memory of David Finkelstein, whose thought continues to guide fundamental physics more than most workers probably appreciate. For RDS especially, David was a mentor and inspiration from graduate school days onward, and from Manhattan to Athens to Atlanta. We like to think that David, who once wrote "I attach observables to histories, not instants", would have been pleased to see how this declaration of his might be put into practice.
Quantum Measure Theory
A measure on a space Ω is a way to assign a number to each suitable subset of Ω. An example of a classical measure is the probability measure on a sample space, or the Lebesgue measure on a Euclidean space which, depending on the dimension n, gives to each measurable subset of R n its conventional length, area, volume or hyper-volume in Euclidean geometry.
In the classical case, a measure space is defined formally by the triple formed by a set Ω, a set-algebra A over Ω and a function µ : A −→ R + . A set-algebra over a set Ω is a set of subsets of Ω, including the empty set, and closed under complementation, union, and intersection. (In the classical case, one usually requires also closure under infinite sequences of intersections or of unions, making A a σ-algebra.) The function µ is called the measure.
Quantum Mechanics can be understood as a generalized measure theory on the space Ω of possible histories of some physical system. It assigns a non-negative real number to every event, an event being a subset of Ω, in other words a set of histories. The "quantum measure" µ that does this cannot be an ordinary probability measure because there is interference, in consequence of which µ is neither additive nor bounded above by unity. It is a "generalized measure" for which the measure of an event is not simply the sum of the probabilities of the histories that compose it. Instead, the measure of an event is given (in an extension of the Born rule to general events) by the sum of the squares of certain sums of the complex amplitudes of the histories which comprise the event.
As just stated, Ω is the history-space of the physical system in question. By history we mean a complete classical description of the physical reality of our system, for example a particle's history would be its trajectory or worldline, while a field's history would be its configuration in spacetime. Knowing the measures of sets of histories (knowing µ(A) ∀A ∈ A) allows you to make predictions about the system in a similar way to how, in the usual formulation of Quantum Mechanics, knowing the wavefunction allows you to make predictions. Moreover, there exist quantal measures that yield theories more general than Quantum Mechanics, for example non-unitary theories [17]. As we have said, the feature that distinguishes a quantum theory from a classical theory is interference. This means that the measure will enjoy different formal properties than classically. We can define the following set-functions for any generalized measure theory over a sample space Ω:

I 1 (A) = µ(A)
I 2 (A, B) = µ(A ∪ B) − µ(A) − µ(B)
I 3 (A, B, C) = µ(A ∪ B ∪ C) − µ(A ∪ B) − µ(A ∪ C) − µ(B ∪ C) + µ(A) + µ(B) + µ(C)

and so on, where A, B, C, etc. are disjoint subsets of Ω.
These functions allow us to distinguish between different types of theories. We will say that a theory is of level k if it satisfies I k+1 = 0. One can show that this condition implies also I m = 0 for every m bigger than k + 1. A classical theory is one of level 1, which is equivalent to saying that there is no interference: µ(A ∪ B) = µ(A) + µ(B). A quantum measure theory is a theory of level 2, i.e. a theory with second order but no higher order interference. An example is ordinary quantum mechanics, but it is not the only class of theories in this category.
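To make the hierarchy concrete, the following Python sketch evaluates I 2 and I 3 for a toy history space in which the measure of an event is the squared magnitude of the summed amplitudes; the amplitudes themselves are arbitrary illustrative numbers.

# Toy history space: each history carries a complex amplitude, and the measure
# of an event (a set of histories) is |sum of amplitudes|^2, mimicking a case
# in which all histories in the event interfere with one another.
amplitudes = {"a": 0.6 + 0.2j, "b": -0.3 + 0.5j, "c": 0.1 - 0.4j}

def mu(event):
    return abs(sum(amplitudes[h] for h in event)) ** 2

def I2(A, B):
    return mu(A | B) - mu(A) - mu(B)

def I3(A, B, C):
    return (mu(A | B | C) - mu(A | B) - mu(A | C) - mu(B | C)
            + mu(A) + mu(B) + mu(C))

A, B, C = {"a"}, {"b"}, {"c"}
print("I2(A,B)   =", round(I2(A, B), 12))     # generally nonzero: quantal interference
print("I3(A,B,C) =", round(I3(A, B, C), 12))  # vanishes for a level-2 (quantum) measure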
Beyond level 2, several researchers have been investigating the possibility of theories residing at level 3 or higher [2,15,14,16,22,19,21,29], but for the moment there has not been any evidence of higher-order interference from the experiments that have looked for it. See for example the three-slit experiments that have put increasingly stringent bounds on third order interference. Such theories are, in any case, outside the scope of this paper.
Any normalized quantum measure can be built by using a decoherence functional D : A × A −→ C on pairs of subsets of Ω which satisfies:

D(X, Y) = D(Y, X)* (hermiticity)
D(X ∪ Y, Z) = D(X, Z) + D(Y, Z) for disjoint X and Y (bi-additivity)
D(X, X) ≥ 0 (positivity)
D(Ω, Ω) = 1 (normalization)

The quantum measure in terms of the decoherence functional is µ(X) = D(X, X). One can check that any measure defined this way is a level 2 measure.
Using these ideas, let us see how, via the path-integral, ordinary Quantum Mechanics can be understood as a level-2 measure theory. In ordinary quantum mechanics the probability of an experimental outcome (probability density for continuous outcomes), let's say a particle being at a position x 0 at a time t 0 , is supposed to be given by the square of the amplitude associated with that event.
For a particle this amplitude is given by the wave function, or equivalently by a path integral over all possible histories ending with the particle at x 0 at t 0 ,

P(x 0 , t 0 ) = |ψ(x 0 , t 0 )|² ,  ψ(x 0 , t 0 ) = ∫ Dx e^{iS[x]/ℏ} ,

where the path integral runs over all trajectories x(t) with x(t 0 ) = x 0 .
In this expression, which implicitly contains the Born rule, the amplitude of each individual history is given by the exponential of iS[x]/ℏ, S[x] being the action evaluated along the trajectory. One can verify that a wavefunction defined via (9) evolves unitarily, obeying the Schrödinger equation with the Hamiltonian associated with the action S. Thus, the wavefunction formalism is in a sense contained in the path integral formalism. Now, we want to show that our double path integral for the probability-density is equivalent to a level 2 measure. For doing so we define the following decoherence functional for a pair of histories:

D(x, y) = e^{iS[x]/ℏ} e^{−iS[y]/ℏ} δ(x(t 0 ) − y(t 0 )) .

Here x and y denote two histories and we have made explicit the delta function of the final positions that is implicit in (10). (This condition that only histories that end at the same point can interfere, might seem to give a special status to the "collapse time" t 0 . That µ nevertheless be independent of t 0 , implies a consistency condition which holds automatically, thanks to unitarity.) The decoherence functional evaluated on general sets can be derived by using the formal properties (4)-(7):

D(X, Y) = ∫ X Dx ∫ Y Dy D(x, y) .

(Instead of a sum over the trajectories contained in the sets X and Y, we have an integral, because we are working with continuous variables.) Now we can compute the measure of the set X(x 0 , t 0 ), which we define as the set of all possible histories ending at (x 0 , t 0 ):

µ(X(x 0 , t 0 )) = D(X(x 0 , t 0 ), X(x 0 , t 0 )) = |∫ Dx e^{iS[x]/ℏ}|² = |ψ(x 0 , t 0 )|² .

We have thus recovered from the decoherence functional the same probability density that one computes using ordinary quantum mechanics. In this manner, one can understand Quantum Mechanics as a level 2 measure theory.
In the formulas just above, we had continuous integrals, but for discrete systems we will have sums, in which case the decoherence functional will take the simpler form D(x, y) = A(x) A(y)* δ_{x(t_0), y(t_0)}, where A(x) is the amplitude of the history x, and where δ_{x(t_0), y(t_0)} now denotes a Kronecker delta. This will be the applicable form in the remainder of this paper.
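To make the discrete form concrete, here is a minimal sketch (the bit-string histories and their amplitudes are invented for illustration and are not the paper's example): the measure of a set of histories is obtained by summing A(x) A(y)* over pairs of histories sharing the same final position.

import numpy as np
from itertools import product

# Histories are bit-strings; amp maps each history to an (assumed) amplitude.
# Only histories with the same final bit interfere (Kronecker delta).
amp = {h: a for h, a in zip(product((0, 1), repeat=2),
                            [0.5, 0.5j, 0.5j, -0.5])}

def D(X, Y):
    return sum(amp[x] * np.conj(amp[y])
               for x in X for y in Y
               if x[-1] == y[-1])          # delta on the final position

def mu(X):
    return D(X, X).real                    # D(X, X) is real by construction

E = {(0, 0), (1, 0)}                       # two histories ending at the same position
print(mu(E))                               # they interfere: |amp(00) + amp(10)|^2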
We have just seen how the probability of one particular experimental observable (the position of a particle at a specified time) can be understood as the measure of a certain set of histories, but this is only a start. There are many other sets of histories (many other events) that do not correspond to any particular time or any obvious observable of ordinary quantum mechanics. How to interpret the measures of such sets is not evident. (Recall that a quantum measure µ can take values bigger than one and cannot be construed as a probability measure.) Just in the case of an event having a measure 0, we can say that the event does not happen; we will say it is "precluded". But how should we interpret the measure when it does not vanish? Can its value be made the object of an experimental test?
Fig. 1 A simple setup in which we have two Stern-Gerlachs oriented in the Z and in the X direction. We label with 0 and 1 the two beams in which the particle can emerge after encountering an analyzer.
Our experiment
Can we design an experimental setup that will allow us to "measure the measure" of any desired event of a given system? Let us try to find such a setup for the kind of idealized system one encounters in quantum-information theory and quantum optics.
Our system will be a particle that passes through a succession of similar devices (say Stern-Gerlach analyzers) which split the beam into two different trajectories, depending on the eigenvalue of the observable being "filtered", and such that the beams are reunited before the next device so that they can interfere with each other. In this setting a history will simply be one of the possible paths the particle can follow. If the particle carries spin-1/2 (is a two-level quantum system), then the beam in which it emerges from a given analyzer can be labelled by the corresponding eigenvalue, letting us represent a history by a sequence of eigenvalues which we will sometimes call a "chain".
One could also think of each encounter with an analyser as a kind of measurement, but if one wanted to use that language, a term like "fake measurement" or "pre-measurement" would be more appropriate, unless one inserted a detector into one of the beams to "collapse the wavefunction" and provide irreversible macroscopic information about which path the system had travelled.
An example of this kind of setup is a series of Stern-Gerlach apparatuses oriented in different directions and a spin 1/2 particle travelling through this series of apparatuses, as we can see in figures 1 and 2. Another example is an optical circuit like the one shown in figure 3. In this kind of circuit, the ket |0⟩ corresponds to a photon travelling in the upper branch and |1⟩ to a photon travelling in the lower branch. In relation to the previous example, the beam splitter serves the dual purpose of reuniting the two beams and then splitting them again according to a different eigenbasis. For reflectivity 1/2, the setup is equivalent to the one in figure 1, since if we identify the ingoing beams (before the beamsplitter) with eigenstates in the Z-basis, the outgoing beams will correspond to eigenstates in the X basis. For splitting according to another basis than X, we would have to design more complicated combinations of optical devices (beamsplitters and phase shifters, mainly).
(Figure: the beams are labelled 0 and 1, as before; in red, an example of a possible path followed by a particle for a length 4 path, which our notation represents as γ = (0, 1, 0, 0).)
For this kind of system, the history formulation is simple. We can represent a history γ as a chain of n bits for n analyzers, indicating the corresponding particle-path. We will assign 0 to the upper beam and 1 to the lower beam, and we will write a history as γ = (γ_1, γ_2, ..., γ_n), where the γ_i are either 0 or 1. An example of how this notation works can be seen in figure 4. From now on we will use the terms "history", "path", and "chain" interchangeably.
For each path, the ordinary quantum mechanical apparatus of state spaces and projectors gives us an amplitude as follows. Corresponding to the i-th device or "filter" is an operator characterized by a direction n̂_i (in the Stern-Gerlach case, the direction in which we orient the magnetic field), and we project the state-vector according to the selected eigenvalue γ_i: |Ψ_i⟩ = |n̂_i, γ_i⟩⟨n̂_i, γ_i|Ψ_{i−1}⟩, where |Ψ_i⟩ and |Ψ_{i−1}⟩ represent the state-vectors after and before the device, respectively, and where |n̂_i, γ_i⟩ is the state-vector with eigenvalue γ_i in the n̂_i direction. We can expand this result to a chain of length n, γ = (γ_1, γ_2, γ_3, ..., γ_n), as follows:
|Ψ_final⟩ = |n̂_n, γ_n⟩⟨n̂_n, γ_n|n̂_{n−1}, γ_{n−1}⟩ ... ⟨n̂_2, γ_2|n̂_1, γ_1⟩⟨n̂_1, γ_1|Ψ_initial⟩, (18)
where |Ψ_initial⟩, |Ψ_final⟩ are the initial and final state-vectors. From this, we can read off the amplitude of the chain γ as
A(γ) = ⟨n̂_n, γ_n|n̂_{n−1}, γ_{n−1}⟩ ... ⟨n̂_1, γ_1|n̂_0, γ_0⟩,
where by |n̂_0, γ_0⟩ we mean the initial wave function Ψ_initial. Now that we know the amplitude of a single history, we can compute the decoherence functional for any two sets of histories, and hence the measure of the event X:
µ(X) = Σ_{γ, γ' ∈ X} A(γ) A(γ')* δ_{γ_n, γ'_n}.
We see that µ(X) is a function of the initial wave function, of the histories contained in the set X, and of the n settings n̂_i.
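As a small numerical sketch of this amplitude (a hedged illustration: the choice of analyzers, the labelling of eigenvalues and the initial state are assumptions of the example, not taken from the paper), one can multiply the successive inner products along a chain:

import numpy as np

# Pauli operators for the Z and X analyzers (spin 1/2).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def eigvec(op, gamma):
    """Eigenvector of op; we (arbitrarily) label the +1 eigenvalue with gamma = 0
    and the -1 eigenvalue with gamma = 1."""
    vals, vecs = np.linalg.eigh(op)
    idx = int(np.argmax(vals)) if gamma == 0 else int(np.argmin(vals))
    return vecs[:, idx]

def chain_amplitude(psi0, ops, gammas):
    """A(gamma) = <n_n,g_n|n_{n-1},g_{n-1}> ... <n_1,g_1|psi_initial>."""
    amp, prev = 1.0 + 0j, psi0
    for op, g in zip(ops, gammas):
        v = eigvec(op, g)
        amp *= np.vdot(v, prev)     # <n_i, gamma_i | previous state>
        prev = v
    return amp

psi0 = np.array([1.0, 0.0], dtype=complex)              # assumed initial state: spin up in Z
print(abs(chain_amplitude(psi0, [sz, sx], (0, 0))))     # history gamma = (0, 0): |A| = 1/sqrt(2)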
Coupling ancillas
Now that we have defined the measure of every possible event or set of histories, we will try to devise a procedure that will let us determine these measures experimentally. For that purpose, we will couple in a series of ancillas as follows. Each ancilla will be prepared in an initial state |r⟩, where r stands for "ready". At each step the corresponding ancilla will detect which beam the particle occupies at that point: |0⟩_s |r⟩ → |0⟩_s |0⟩ and |1⟩_s |r⟩ → |1⟩_s |1⟩. From now on we will distinguish a particle state with the subscript s, leaving the ancilla-states without subscripts. The ready state could be a third state orthogonal to both 0 and 1 (such multilevel ancillas could be useful if we wanted to couple system to ancilla weakly, as in a "weak measurement"), but for our purposes it suffices to make do with a two-level ancilla, with the 0 state, for example, serving as ready state. Such a coupling corresponds to a CNOT gate. For a general superposition, |Ψ⟩_s = α |0⟩_s + β |1⟩_s, the ancilla acts as follows: (α |0⟩_s + β |1⟩_s) |0⟩ → α |0⟩_s |0⟩ + β |1⟩_s |1⟩. Now suppose we were to measure ("strongly") the ancilla in the basis (|0⟩, |1⟩). Evidently, this would be equivalent to measuring the particle in the same basis, inasmuch as the outcome-probabilities would be the same and the particle-state would "collapse" in both cases to the eigenstate associated with the eigenvalue obtained.
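A minimal sketch of this coupling (the amplitudes are arbitrary placeholders, and the particle-then-ancilla qubit ordering is an assumption of the example):

import numpy as np

# CNOT with the particle as control and the ancilla as target,
# acting on |psi>_s |0> with |psi>_s = alpha|0>_s + beta|1>_s.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

alpha, beta = 0.6, 0.8j
psi_s = np.array([alpha, beta])
ready = np.array([1.0, 0.0])                 # ancilla "ready" state |0>

joint = CNOT @ np.kron(psi_s, ready)
print(joint)    # [alpha, 0, 0, beta]: i.e. alpha|0 0> + beta|1 1>; the ancilla records the beam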
If instead we were to measure the ancilla in the basis (|+⟩, |−⟩), something different would happen. Rewriting the coupled state as α |0⟩_s |0⟩ + β |1⟩_s |1⟩ = (1/√2)(α |0⟩_s + β |1⟩_s) |+⟩ + (1/√2)(α |0⟩_s − β |1⟩_s) |−⟩, we see that the probabilities to obtain |+⟩ or |−⟩ would both be 1/2, and we would not learn anything about which path the particle had taken. Furthermore, after the outcome |+⟩, the wave-function of the system would have reverted to what it had been before its coupling to the ancilla: "the system would not have been disturbed" ("quantum eraser effect"). On the other hand, after the outcome |−⟩, a phase would have been introduced into |Ψ⟩_s in what turns out to be an unhelpful way.
Later we will generalize this result to show that looking for, and finding, a particular superposition (in this case |+⟩) causes the ancillas to 'forget' some information, leaving |Ψ⟩_s in a "minimally disturbed" state. Notice that this 'erasure' is probabilistic; it only succeeds if we obtain a particular outcome upon measuring the ancillas.
Final wavefunction of particle + ancillas
Now that we have designed the ancilla-system coupling, let's compute the final wavefunction of the combined system. Suppose we start with |Ψ⟩_s, and the i-th analyzer is set in the n̂_i direction. Coupling in the ancillas one by one, for n stages this generalizes immediately to
|Ψ_final⟩ = Σ_γ A(γ) |γ_{s,f}, γ⟩,
where in |γ_{s,f}, γ⟩, γ_{s,f} denotes the position of the particle at the end of the path γ, which is that corresponding to the last bit in the chain, and the second γ denotes the joint state of the n ancillas, the first ancilla being in the state corresponding to γ_1, the second to γ_2, and so on. We can see that |Ψ_final⟩ reflects a superposition over all the possible paths, and the amplitude corresponding to each path is the amplitude computed earlier.
Measuring the measure
Let E be any given event (any given set of histories of our system). As we have said, our objective is to find an experimental procedure that will reveal µ(E), the quantum measure of this event. Specifically, we seek to relate µ(E) to the probability of some directly observable instrument-event or "outcome".
To that end, we have introduced a series of ancillas which in a sense watch the particle and record the path that it follows. We now look for a unitary transformation on the ancillas, followed by a final projective measurement with two or more outcomes, so arranged that the probability of the first outcome will be proportional to µ(E) by a known factor of proportionality.
We will start by explaining how to achieve this in a simple example with histories of length two, and then we will generalize to histories of any length.
A simple case
Consider, then, the simple case shown in figure 1. This is the case of histories of length two, so the number of beams for the particle, the number of ancillas and the length of the chains are all two.
There are four possible histories, and one can easily compute their amplitudes for the initial wave-function |Ψ⟩ = α |0⟩_s + β |1⟩_s. The result is shown in table 1, while table 2 records the resulting measures of the 2^4 = 16 events which can be built with these histories. Once the ancillas have done their work, we will measure them in a suitably chosen basis and interact no further with the particle. If x is a possible outcome of our measurement and |x⟩ is the associated eigenvector, then the probability for outcome x is P(x) = ⟨Ψ_final| (1_s ⊗ |x⟩⟨x|) |Ψ_final⟩. For computing P(x) we need the "final wavefunction" found above, namely |Ψ_final⟩ = Σ_γ A(γ) |γ_{s,f}, γ⟩, with the amplitudes of table 1.
Table 1 Amplitudes of all the possible histories of the length 2 system for initial state |Ψ⟩ = α |0⟩_s + β |1⟩_s.
Table 2 The 16 possible events E and their measures µ(E), for a length 2 system with initial state |Ψ⟩ = α |0⟩_s + β |1⟩_s.
Trivial measures and easy to measure measures
Among the events that we have shown in table 2 there are two that are trivial and need not be measured at all: the empty set and the set of all histories. Almost as trivial are the singleton events, those which comprise only one history. For these events, we didn't need the ancillas at all, but since we have them, it suffices to measure each ancilla separately and observe which chain results, because that is equivalent to directly observing which path the particle has followed. In fact, equation (31) says precisely that the probability that these measurements yield the chain γ is exactly the measure of the event containing just the history γ:
P(00) = |α|²/2, P(01) = |α|²/2, P(10) = |β|²/2, P(11) = |β|²/2. (32)
Two-history events
Turning now to events that contain two histories (the first case of real interest), let's look first at {00, 01}, {10, 11}, {00, 10} and {01, 11}. All of these events have in common that both histories agree in one bit and differ in the other. Hence, we want the ancilla that records the bit where they differ to "forget" that information. To do this, as explained before, we will measure that ancilla in the basis (|+⟩, |−⟩). The other ancilla we will measure in the basis (|0⟩, |1⟩). The outcome-probabilities we obtain this way are, respectively,
P(0+) = |α|²/2, P(1+) = |β|²/2, P(+0) = |α + β|²/4, P(+1) = |α − β|²/4. (33)
Thus, we recover the desired measures up to a factor of two that comes from the fact that we have a probability one half of obtaining |+⟩. This lost factor of 2 in probability represents the inefficiency of extracting information that we didn't really need, and then having to forget it.
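The following minimal sketch checks one of these relations numerically (the Z-then-X arrangement, the basis conventions and the values of α and β are assumptions of the example): it builds the entangled particle-plus-ancillas state, projects the ancillas onto |+⟩ ⊗ |0⟩, and compares the outcome probability with µ{00, 10}/2.

import numpy as np

alpha, beta = 0.6, 0.8
z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def A(g1, g2):                                  # amplitude of the history (g1, g2)
    psi0 = alpha * z[0] + beta * z[1]
    return np.vdot(x[g2], z[g1]) * np.vdot(z[g1], psi0)

# |Psi_final> in (particle) x (ancilla 1) x (ancilla 2)
psi_final = np.zeros(8, dtype=complex)
for g1 in (0, 1):
    for g2 in (0, 1):
        psi_final += A(g1, g2) * np.kron(x[g2], np.kron(z[g1], z[g2]))

# Outcome "ancilla 1 -> |+>, ancilla 2 -> |0>" (a projector acting on the ancillas only).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
proj = np.kron(np.eye(2), np.kron(np.outer(plus, plus), np.outer(z[0], z[0])))
P = np.vdot(psi_final, proj @ psi_final).real

mu = abs(A(0, 0) + A(1, 0)) ** 2                # measure of the event {00, 10}
print(P, mu / 2)                                # the two numbers coincide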
It is instructive to compute also the probabilities where we get the outcome |−⟩:
P(0−) = |α|²/2, P(1−) = |β|²/2, P(−0) = |α − β|²/4, P(−1) = |α + β|²/4. (34)
We can see that where there is no interference between the histories (the first two probabilities) we have obtained the measure again, but where there is interference the probability doesn't correspond to the true measure. For this reason we must take the probabilities with outcome |+⟩. More generally, for events that may contain more than two histories, we will always look for a superposition that won't alter the interference among them.
The remaining two-history events are {00, 11} and {01, 10}, and for them we will need to involve both ancillas nontrivially. For example, we can perform first a unitary operation on the ancillas with the effect |γ_1, γ_2⟩ → |γ_1 ⊕ γ_2, γ_2⟩, where with ⊕ we denote Boolean addition, i.e. 0 ⊕ 0 = 1 ⊕ 1 = 0, 0 ⊕ 1 = 1 ⊕ 0 = 1. If, after applying this transformation to the wavefunction, we measure the first ancilla in the basis (|0⟩, |1⟩) and the second one in the basis (|+⟩, |−⟩), we obtain P(0+) = µ{00, 11}/2 and P(1+) = µ{01, 10}/2. Comparing with the table, we see that we have obtained correctly the measures µ{00, 11} and µ{01, 10}, with the same normalization of 1/2 as before.
The reason this works is that these events can be characterized by the parities of their chains, (γ 1 , γ 2 ), namely γ 1 ⊕ γ 2 = 0 for the first event and γ 1 ⊕ γ 2 = 1 for the second event. By the transformation U , we arrange for the first ancilla to hold this parity, and by our choice of what to measure, we "erase" the now unwanted information held by the second ancilla, which still would distinguish between the two histories comprising the event.
Three-history events
This type of event is sufficiently close to the general case that it seems best to stop thinking in terms of the separate ancillas, and ask instead what measurement we would like to perform in their joint Hilbert space.
Suppose, for example, that we are interested in the event E = {00, 01, 10}.
We then want to measure in an orthonormal basis containing the superposition |00⟩ + |01⟩ + |10⟩ (suitably normalized). We will take a basis of states |1⟩, |2⟩, |3⟩, |4⟩ with |1⟩ = (1/√3)(|00⟩ + |01⟩ + |10⟩). We claim that µ(E) = µ{00, 01, 10} is deducible from the probability of obtaining the measurement-outcome |1⟩. What is important here is that |1⟩ corresponds to a superposition of the three histories of the event with the same weight and with no phase between them. Any phase that was present would affect the way the different histories interfere, as happened before when we looked for the vector |−⟩. The probabilities of the four outcomes are
P(1) = (|α + β|² + |α|²)/6, P(2) = (|α + β|² + |α|²)/6, P(3) = (|α − β|² + |β|²)/6, P(4) = (|α − β|² + |β|²)/6.
In particular, we see as claimed that the probability of outcome 1 is one third of the measure of the event {00, 01, 10} we were looking for. The factor of 3 comes from the normalization factor 1/√3, which in turn just reflects the number of histories comprising E.
The probabilities of outcomes 2 and 3 are not proportional to the measures of the sets of histories superposed in |2⟩ and |3⟩ because of the phases introduced. Curiously, however, there is no discrepancy in the case of outcome 4. In that case, also, a phase is introduced but it is a relative phase between the histories ending on 0 and the histories ending on 1. Since histories with different final positions do not interfere, such a phase doesn't affect the answer.
In order to measure the measure of one of the remaining two three-history events, we need to measure the ancillas in a basis including the sum of the ancilla-states corresponding to the event in question, or else in some other superposition with phases that cannot affect the probability, as happened with |4⟩.
We remark here that it wasn't really necessary to perform a "complete measurement" on the ancillas in any basis. It would have sufficed to measure the ancilla observable that took (say) the value 0 on |1⟩, and the value 1 on its orthogonal complement.
The general case
In a more general situation with histories of n steps, we will have 2^n possible histories and the events will be collections of them. Suppose we wish to measure the measure of an event E = {γ^1, γ^2, ..., γ^k} containing k histories. We can use the expression for µ(X) derived earlier to find the value we are after:
µ(E) = Σ_{i,j} A(γ^i) A(γ^j)* δ_{γ^i_n, γ^j_n}.
The most direct approach to determining µ(E) experimentally is, as we have done before, to look for the superposition of the k chains contained in this event, that is, to measure the ancillas in any basis containing the state
|E⟩ = (1/√k) Σ_{i=1}^{k} |γ^i⟩.
Provided that the measurement is performed on the ancillas without touching the system itself, the probability of outcome E is given by the probability formula above, evaluated on the final wavefunction. The projector in this case is 1_s ⊗ |E⟩⟨E|. When we apply this projector to our wavefunction we get (1/√k) Σ_{i=1}^{k} A(γ^i) |γ^i_{s,f}⟩ ⊗ |E⟩. The probability of outcome E is the squared norm of this state:
P(E) = (1/k) Σ_{i,j} A(γ^i) A(γ^j)* δ_{γ^i_n, γ^j_n} = µ(E)/k,
where we have used the orthonormality relations ⟨γ^j_{s,f}, γ^j | γ^i_{s,f}, γ^i⟩ = δ_{γ^i_n, γ^j_n} δ_{i,j}. As anticipated, P(E) is the measure µ(E) of the event in question, divided by the number of histories in the event.
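For completeness, here is a hedged numerical sketch of this general statement (the analyzer sequence, basis conventions and initial amplitudes are all assumptions of the example): it compares the probability of finding the ancillas in |E⟩, computed by explicit projection of the entangled state, with µ(E)/k computed from the history amplitudes.

import numpy as np
from itertools import product

alpha, beta = 0.6, 0.8j
z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x = [np.array([1, 1], dtype=complex) / np.sqrt(2), np.array([1, -1], dtype=complex) / np.sqrt(2)]

n = 3
bases = [z if i % 2 == 0 else x for i in range(n)]     # alternating Z/X analyzers (assumed)
psi0 = alpha * z[0] + beta * z[1]

def amplitude(gamma):
    amp, prev = 1 + 0j, psi0
    for basis, g in zip(bases, gamma):
        amp *= np.vdot(basis[g], prev)
        prev = basis[g]
    return amp

def ancilla_state(gamma):                               # product state |gamma_1 ... gamma_n>
    v = np.array([1.0 + 0j])
    for g in gamma:
        v = np.kron(v, z[g])
    return v

histories = list(product((0, 1), repeat=n))
psi_final = sum(amplitude(g) * np.kron(bases[-1][g[-1]], ancilla_state(g)) for g in histories)

def prob_of_E(E):
    E_vec = sum(ancilla_state(g) for g in E) / np.sqrt(len(E))
    proj = np.kron(np.eye(2), np.outer(E_vec, E_vec.conj()))
    return np.vdot(psi_final, proj @ psi_final).real

def mu(E):                                              # decoherence-functional route
    return sum(abs(sum(amplitude(g) for g in E if g[-1] == f)) ** 2 for f in (0, 1))

E = [(0, 0, 0), (0, 1, 1), (1, 0, 1)]
print(prob_of_E(E), mu(E) / len(E))                     # the two numbers coincide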
Thus, we have shown in general that in order to measure the measure of an event E, it suffices to determine the probability of the corresponding superposition |E⟩ (in the ancillas) of the histories comprising E. In principle, this solves the problem completely. In practice, however, it might not be easy to find an accessible observable in the Hilbert space of the n ancillas that has |E⟩ as an eigenvector. In the next section we will see some ways to simplify this task.
It is also worth taking note of the state of the combined system after the measurement, which is
(1/√k) Σ_{γ ∈ E} A(γ) |γ_{s,f}⟩ ⊗ |E⟩, (46)
up to normalization. After a measurement of the ancillas which yields the result E, they are of course no longer entangled with the particle, but what's of interest in (46) is the wave function of the particle that results from such a measurement. As is easy to recognize, it is precisely the wave function that one would obtain in the path-integral formalism by performing a "conditional" integral to which not every history contributes, but only those histories contained in the event E.
Measuring in a big Hilbert space
As we have seen, in order to measure the measure we need to look for a particular superposition in a 2 n -dimensional Hilbert space. This can be hard to implement, and in this section we will examine some ways of doing it.
Simplification with Boolean sums
First of all, let's explain how, via 2-qubit gates, we can reduce the number of multi-ancilla measurements we have to do. We will do this by generalizing the device of Boolean sums that we utilized earlier. We will start by treating the simple case of two-history events and then generalize to 3 histories and k histories.
Two histories
Suppose the event whose measure we want to measure consists of two histories: E = {γ^1, γ^2}. For any given pair of chains, there will be two kinds of bits: bits that are shared by both chains and bits in which they differ. Since the order in which the bits occur is not important here, we can write the chains as
γ^1 = (a_1, ..., a_m, b_1, ..., b_{n−m}),   γ^2 = (a_1, ..., a_m, ¬b_1, ..., ¬b_{n−m}),
where the a_i are the shared bits and ¬b_i denotes the negation of b_i. In accordance with the general prescription above, we thus want to design a measurement which looks for the state
|E⟩ = |a_1 ... a_m⟩ ⊗ (1/√2)(|b_1 ... b_{n−m}⟩ + |¬b_1 ... ¬b_{n−m}⟩).
The tensor product structure of this state will let us build up our measurement from simpler pieces. The first set of factors can be measured directly, qubit by qubit. The second factor cannot, but we will now show that the required measurement can also be built up from single qubit measurements.
Recall that in the simple case of n = 2, we introduced a unitary operation, the Boolean sum, that allowed us to make do with a single qubit measurement. We can generalize this idea to l qubits: the required unitary operation can be decomposed into l − 1 Boolean sums, which are two-qubit gates, so it can be easily implemented. We apply this unitary to the n − m qubits corresponding to the parts of the chains that differ; after it, the two chains differ in just one qubit. Therefore we can measure individually each one of the common qubits in the (|1⟩, |0⟩) basis and measure the last qubit in the (|+⟩, |−⟩) basis, so as to "forget it", as we have explained before.
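One possible concrete form of such a Boolean-sum cascade (an assumption made for this sketch, since several equivalent choices exist) maps each bit to its sum with the next one, leaving the last bit untouched; two complementary subchains then agree on all but the last bit:

# Sketch of an l-qubit Boolean sum built from l-1 two-qubit XOR (CNOT-like) steps:
# (g_1, ..., g_l) -> (g_1 XOR g_2, g_2 XOR g_3, ..., g_{l-1} XOR g_l, g_l).
def boolean_sum(bits):
    return tuple(bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)) + (bits[-1],)

chain1 = (0, 1, 1, 0)
chain2 = tuple(1 - b for b in chain1)      # the complementary subchain
print(boolean_sum(chain1))                 # (1, 0, 1, 0)
print(boolean_sum(chain2))                 # (1, 0, 1, 1): differs only in the last bit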
Let us prove that measuring this way after performing the unitary operation is equivalent to measuring for the original superposition |E⟩. The probability doesn't change when a state evolves unitarily if the projector also evolves unitarily: ⟨Ψ| Π_E |Ψ⟩ = ⟨UΨ| (U Π_E U†) |UΨ⟩. The unitary evolution of our projector is U |E⟩⟨E| U† = (U|E⟩)(⟨E|U†), and U|E⟩ is exactly the state we were proposing to measure. Therefore measuring for this state after the Boolean sum is an equivalent procedure. But after the sum, the required measurement is no longer a measurement of some abstract observable in a big Hilbert space, but n individual measurements of simple observables of each ancilla.
Three histories
Suppose now that we want to measure the measure of an event consisting of three histories, E = {γ^1, γ^2, γ^3}. Now we will have at most 4 kinds of bits: bits shared by every chain, bits that are different in the first chain, bits that are different in the second chain and bits that are different in the third chain. For the subchains of the last three kinds we can do as before, applying the Boolean sum over each subspace so that each group of chains differs only in its last bit. Then, instead of having to measure for the superposition of the three chains in the bigger Hilbert space, we can break the measurement into n − 3 individual measurements plus a measurement in the Hilbert space of three ancillas, looking for the equal-weight superposition of the three reduced chains.
k histories
These results generalize as follows to events of k histories. We are always able to cut the chains in a similar fashion as we have done above and apply a Boolean sum that makes each set of subchains differ just in one bit. By doing so, we reduce the single measurement in the 2^n-dimensional Hilbert space to n − α measurements on individual qubits and a single measurement in a 2^α-dimensional Hilbert space, where α is the number of different subchains.
For each event-cardinality k, we can bound the possible values of α, both above and below. As a perusal of the array beginning the previous subsection will reveal, a lower bound on α is the number of bits necessary to distinguish k histories. With m bits we can label 2^m different histories, so for labelling k histories we will need at least ⌈log_2 k⌉ bits, where ⌈·⌉ denotes the ceiling function, i.e. the function that rounds its argument up to the next greater or equal integer. For example, for k = 3 the logarithm is between 1 and 2, so we will need at least 2 bits.
To derive an upper bound is a bit more complicated. If we take as a reference one particular chain, we can start counting how many differing subchains we can make. We will have k − 1 subchains for which one of the other chains is different but the others are still like the first one. There are (k − 1 choose 2) subchains for which two chains differ from the first one while the rest are the same. In general there will be (k − 1 choose i) subchains for which i chains differ from the first one while the rest are the same. For counting the total number of subchains we have to add over all these possibilities: Σ_{i=1}^{k−1} (k − 1 choose i) = 2^{k−1} − 1. We also have to take into account that we have n ancillas, which is a bound that has to be satisfied. In sum, α must lie in the range
⌈log_2 k⌉ ≤ α ≤ min(2^{k−1} − 1, n),
where α is the number of ancillas we have to measure together in a superposition state after doing the Boolean sums. Notice that it can grow exponentially with k until it reaches its bound n. For such cases, and for cases with k > 2^{n−1}, we cannot break our measurement into simpler ones by applying Boolean sums, and we are thrown back to measuring a superposition in the whole Hilbert space.
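A one-line check of this counting (the function name is merely illustrative) for a few values of k and n:

from math import ceil, log2

def alpha_bounds(k, n):
    """Lower and upper bounds on alpha, the number of ancillas that must be
    measured jointly, for an event of k histories and chains of length n."""
    lower = ceil(log2(k))               # at least log2(k) bits to distinguish k histories
    upper = min(2 ** (k - 1) - 1, n)    # at most 2^(k-1)-1 differing subchains, capped by n
    return lower, upper

print(alpha_bounds(2, 10))   # (1, 1)
print(alpha_bounds(3, 10))   # (2, 3)
print(alpha_bounds(5, 10))   # (3, 10)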
Measuring a superposition
Let us approach the question from a somewhat different angle. We want to measure a state |E⟩ which is a superposition of product states. If we measured each ancilla individually in the basis (|0⟩, |1⟩), we wouldn't be able to access this superposition. Instead, as indicated in figure 5, we can try to invent a unitary transformation in the space of the ancillas which will map the state |E⟩ into something which we can measure easily, like a particular chain of bits or an eigenstate of some global operator.
Transforming E into a single chain
Any unitary operator that mapped the state |E⟩ (or the resulting state after doing some Boolean sums as above) into a product state of the ancillas would simplify life. But how to design and implement such a unitary? Consider once again, for example, the measurement proposed in § 5.1.3 for the three-history events in the case of a length 2 system. A way to perform a measurement in the basis introduced above for that case is to implement a unitary operation that takes the state |1⟩ to the state |00⟩, the state |2⟩ to the state |01⟩, and so on. We show this schematically in figure 6.
For each event E we want to measure there are infinitely many bases that contain the state |E⟩, so there are infinitely many unitary operations that would allow us to measure the measure (a unitary transformation being equivalent to a change of basis). An interesting unitary of this sort is a quantum Fourier transform in the subspace of the k histories that form our event, E = {γ^1, ..., γ^k}.
Fig. 6 A setup with histories of length 2. We couple ancillas to both the Z and X devices, apply a unitary transformation to the two ancillas, and then measure each ancilla in the basis (|0⟩, |1⟩).
We define the Fourier transform as the operator U that acts on the event states as
U |γ^a⟩ = (1/√k) Σ_{l=1}^{k} e^{2πi a l / k} |γ^l⟩,   a = 1, ..., k,
and as the identity on the histories not contained in E. (This transformation only needs to act on the qubits that are different after any Boolean sum we might have done.) We can check that it is unitary, using the fact that Σ_{l=1}^{k} e^{2πi a l / k} = k δ_{a, mk} for m ∈ Z, i.e. the sum vanishes unless a is a multiple of k. Since U is the identity for the subspace of histories not contained in E, we conclude that our Fourier transform is unitary.
Therefore, measuring for the state |E⟩ is equivalent to measuring for the history γ^k after applying the Fourier transform. Since the latter can be done simply by measuring each ancilla individually in its (|0⟩, |1⟩) basis, we have found a way to measure the measure of E with individual ancilla measurements. The difficulty is now in performing the unitary transformation which acts as a Fourier transform, but only in the subspace of histories of the event E.
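A small numerical sketch of this construction (the dimension, the choice of event states and the indexing are assumptions of the example): the operator acts as a discrete Fourier transform on the event subspace and as the identity elsewhere, and it sends the equal superposition |E⟩ to the single product state labelled by the last history of the event.

import numpy as np

def subspace_dft(dim, event_indices):
    """Identity outside the event subspace, discrete Fourier transform on it."""
    k = len(event_indices)
    U = np.eye(dim, dtype=complex)
    for a_pos, a in enumerate(event_indices, start=1):
        for l_pos, l in enumerate(event_indices, start=1):
            U[l, a] = np.exp(2j * np.pi * a_pos * l_pos / k) / np.sqrt(k)
    return U

dim, event = 4, [1, 2, 0]              # e.g. the ancilla states 01, 10, 00 of a 2-bit system
U = subspace_dft(dim, event)
print(np.allclose(U.conj().T @ U, np.eye(dim)))    # True: the transformation is unitary

E_vec = np.zeros(dim, dtype=complex)
E_vec[event] = 1 / np.sqrt(len(event))             # |E>, the equal superposition of the event
print(np.abs(U @ E_vec).round(6))                  # all the weight sits on index event[-1]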
Global Observables
Instead of trying to use a Fourier transform to reduce the measurement of |E⟩⟨E| to something more manageable, we could try to measure directly a suitable global observable in the bigger Hilbert space. If with a unitary transformation we could map the state |E⟩ to a particular eigenstate of the global observable, we could look for |E⟩ just by measuring the global observable.
For example, if our ancillas were themselves spin 1/2 particles, instead of measuring each of their spins in the z direction, we could measure their total spin (or the total spin of just a few of them), together with its projection in the z direction. In a three-history event, for illustration, after applying the Boolean sums described earlier, we need to measure a superposition in two qubits. Let us map |E⟩ to the singlet state and then measure the total spin. The probability of obtaining spin 0 as a result will then give us the measure of E (when corrected with the appropriate factor).
Generalizing this idea to arbitrarily many histories seems to be highly nontrivial. In the best case, we might find an observable for which one eigenvalue was a singlet while the other was a multiplet with degeneracy 2^n − 1. In that case, with a suitable unitary operation, we could describe our measurement with the two projectors |E⟩⟨E| and 1 − |E⟩⟨E|.
Fig. 7 A setup for histories of length 2. We couple an ancilla to the first analyzer (or its output beams), and then measure it in the Z-basis (|1⟩, |0⟩). No ancilla is needed for the second analyzer, since it is equivalent to a strong measurement on the emerging particle.
Interpretation
We have designed an experimental setup that will allow us to "measure the measure" of any event E. More specifically, we have identified the measure µ(E) with the probability of a certain experimental outcome O(E), corrected by a known factor. A question we can now ask ourselves is whether obtaining the outcome O(E) means that the event E has really happened, and conversely whether not obtaining it means that E did not happen.
Here we need to be careful, since most formulations of quantum theory do not let one draw conclusions about what has or has not happened microscopically.
In the context of quantum measure theory, however, it is natural to postulate that no event of measure 0 can happen. Moreover in certain extensions of this preclusion postulate (including the so called multiplicative scheme) it is sometimes possible to conclude that the complement of a precluded event does happen.
Let us now analyse our whole system (particles and ancillas) from this point of view, and ask which particle events are compatible with a particular outcome of our measurements on the ancillas. To start with, let us ask which events have a measure different from zero, given a particular outcome of our experimental procedure.
To begin with, we will analyse the simple arrangement in which the particle passes through only two analyzers and we have only a single ancilla, as shown in figure 7. Notice that we don't couple an ancilla to the second analyzer because we can directly observe which beam the particle emerges in. Interposing an ancilla would accomplish nothing beyond complicating the notation. For this setup, the possible histories and their amplitudes are shown in table 3.
For the full system of ancilla plus particle, there are eight joint histories in all.
Table 3 Amplitudes, for an initial wave-function |Ψ⟩ = α |0⟩_s + β |1⟩_s, of all possible histories of a length 2 system with an ancilla coupled in. The notation we use for the histories is: we write first the history of the particle, and then the history of the ancilla, separated by a comma.
Fig. 8 Another setup for histories of length 2. We couple an ancilla to the first analyzer (or its output beams), and then measure it in the X-basis (|+⟩, |−⟩). No ancilla is needed for the second analyzer, since it is equivalent to a strong measurement on the emerging particle.
As we might have expected, the table shows that every one of them where the
ancilla's history disagrees with the particle's history is precluded. In this sense, we can affirm that if we measure the ancilla in the (|1⟩, |0⟩) basis (and also observe the final emerging beam with a particle-detector), then the particle has actually travelled the path associated with the outcomes measured. We can generalize this straightforwardly to any number of ancillas. When we measure the ancillas in their (|1⟩, |0⟩) bases and obtain the outcome γ, it is hard to doubt that the particle actually has followed the corresponding path. That we can deduce the path this way is not surprising, as the setup is equivalent to doing a strong measurement at each step.
Consider now the slightly more complicated case of figure 8. As we saw earlier, this setup lets us measure the measures of the events, {00; 10} and {01; 11}.
(Now we use a semicolon ";" to separate different histories, since we are using the comma "," to separate the history of the particle from the history of the ancilla.) Here the ancilla's history also has length 2, and the set of joint histories with amplitudes different from 0 is: {(00, 00); (00, 01); (01, 00); (01, 01); (10, 10); (10, 11); (11, 10); (11, 11)}. Suppose we want to measure the measure of the particle-event {00; 10}. As we saw earlier, this corresponds to measuring the first ancilla in the |+⟩ state and the second ancilla (if we had included it) in the |0⟩ state. (In fact there is no second ancilla, since we have again simplified our protocol by measuring the final particle location directly.) This outcome is only compatible with the histories (00, 00) and (10, 10). Therefore when we get +0, we can say that the particle-event {00; 10} has happened. We can extend this analysis to every two-history event where we apply the unitary trick described before. After measuring all the ancillas which are meant to be measured in the (|0⟩, |1⟩) basis, there will be only two histories that are compatible with whatever outcomes we have obtained. When we measure the remaining qubit in the basis (|+⟩, |−⟩), either of the two possible histories (the one labelled with 0 and the one labelled with 1) could have happened, so we can say that the event E has happened. Again, E happens whether we get + or −, but we can only learn its measure from the measured probabilities if we get +.
Before we turn to some more complicated setups, we need to agree on a linguistic convention. In order to explain what we mean, let us first of all include in each overall history the results of all the final measurements made upon the ancillas. In the setup just discussed, for example, we would include either a '+' or a '−' depending on which outcome was obtained. Then it may happen, given a set O of measurement outcomes, that every (overall) history Γ in which O happens, and whose measure is nonzero, has the further property that a certain particle event E also happens. In this case we will allow ourselves to say that E also happened (and that the complementary event "not E" did not). It is this convention that led us to say, in the first setup, that the particle followed the trajectory γ, and in the second setup that E happened when either + or − was obtained. We qualify it as a convention because, as self-evident as it might sound at first hearing, there are reasons why one might want to replace it with something different. For more on this point, see the discussion in [23].
Events with k histories
To try to say "what has happened" when the event E of interest is formed by k > 2 histories is more complicated, since the analysis in that case depends on which unitary transformations and final measurements we employ.
Recall first the procedure proposed in §6.2.1 for the three-history events of a length-two system. This was a way to perform a measurement in the basis introduced above for that case, by means of a unitary operation that maps the ancilla state |1⟩ to |00⟩, |2⟩ to |01⟩, and so on, as was shown schematically in figure 6.
Following an analysis of the histories similar to what we have done before, it is not hard to check that the histories compatible with each of the outcomes 00, 01, 10, 11 are those contained respectively in the states |1⟩, |2⟩, |3⟩, |4⟩.
Hence we can say that if our ancilla measurement yields 00, then the particle event E = {00, 01, 10} has happened, because no history of nonzero measure combines the particle-history 11 with the ancilla outcome 00. (For example the history {11, 1100} has measure 0, where the last two bits represent the outcome of the ancilla measurement.) We might have expected this, because the state |1⟩ was the one we used to measure the measure of E. However, when the outcome of our ancilla measurement is something other than 00 we cannot state that the event {00, 01, 10} has not happened! For example, the measure of the event containing the particle histories 00 and 01, and the ancilla histories compatible with measuring |2⟩, is different from 0, as there is overlap between |2⟩ and the single-history states corresponding to 00 and 01.
Generalizing these conclusions, we can say that measuring for - and obtaining - |E⟩ implies that the event E has happened, because only the histories contained in E have measure different from 0 when we limit ourselves to ancilla histories that have the outcome E. On the other hand, if the outcome state is not |E⟩ but has a non-zero overlap with the subspace generated by the single-history states of the histories contained in E, we cannot exclude that E or some subevent of E happened. Now consider the same system and 3-history event E, but with a different setup, such that we measure the ancillas in the basis given by the quantum Fourier transform. In this case we can still imagine the setup as the circuit represented in figure 6, but with a different unitary operator, acting now in the subspace spanned by |01⟩ = |γ^1⟩, |10⟩ = |γ^2⟩, and |00⟩ = |γ^3⟩. This procedure corresponds to measuring in the basis consisting of the three Fourier vectors (1/√3) Σ_{l=1}^{3} e^{2πi a l / 3} |γ^l⟩, a = 1, 2, 3, together with |11⟩. Examining the makeup of these vectors, we see that the outcomes 00, 01, 10 are incompatible with the particle travelling the path 11, while vice versa, the outcome 11 is incompatible with any of the particle-paths 00, 01, 10. Thus, the correlation is fuller in this setup. If the outcome is one of 00, 01, 10, then we can say that the event E = {00, 01, 10} has happened, while the history 11 has not happened. And if the outcome is 11 then we can say that the history 11 has happened, while the event E has not. However, as before, it is only the outcome 00 that lets us recover the measure of the event E, and that lets us assert that the particle's "collapsed" wave function is the same as if it had evolved freely but following only the trajectories contained in E.
The same conclusions evidently hold for events with more than three histories. When (via a suitable unitary) we measure in a basis containing the ancilla state |E⟩ and we obtain it, we can say that the event E has happened. When we obtain something different, what we can say depends on the basis in which we have measured. For a general basis we won't be able to tell whether the event has happened or not, as was the case for the three-history events and the basis |1⟩, |2⟩, |3⟩, |4⟩. But for the Fourier transform basis, whenever (after applying the unitary transformation) we obtain one of the histories contained in the event, we can say that the event has happened, and if we obtain a different history then we can say that the event hasn't happened.
In order to see this, we can observe that the states associated with each outcome are either a superposition of all the histories in the event, when the outcome is such a history itself, or else the same history as the outcome, when it is not in the event. In the language of measures, the only compatible histories whose measures differ from zero when we get an outcome that corresponds to a history of the event are precisely those contained in the event. On the other hand, when we obtain an outcome that is not part of the event we can say that that history has happened, and therefore the histories in E have not.
As our analysis has demonstrated, when we measure for and obtain the state |E⟩, the resulting particle wave function is the same as if the particle had evolved freely, but following only the trajectories in E. One might wonder what wave function results when we obtain an outcome different from |E⟩. Is there one among these outcomes such that it is as if the particle had evolved according to all the trajectories in the complementary event to E? This would correspond to making the state of the ancillas collapse to the equal superposition of the 2^n − k histories not contained in E. We could arrange for this to be among the possible outcomes by implementing the quantum Fourier transform associated with the histories not contained in E.
In table 4 we summarize the inferences we have arrived at so far, presupposing that the measurement performed is a projective measurement which completely collapses the wavefunction of the ancillas, so that they are not entangled with the particle any more.
Table 4 Possible states in which one could find the ancillas when performing a projective measurement that completely collapses the ancilla wavefunction. We analyse for each case whether we can say that the event E = {γ^1, ..., γ^k} happened, whether the probability of finding that state is directly related to the measure of the system event or its complement, and whether the resulting system wave function is the result of evolving it via certain histories, with or without introducing extra phases or weights into the path amplitudes.
Lastly, we can consider the case where we measure a global ancilla-observable, for example the total spin when the ancillas are spin-1/2 particles. As described earlier, we would like this observable's spectrum to consist of a first eigenvalue with a degeneracy of 1 and a second eigenvalue with a degeneracy of 2^n − 1.
In that case, applying an appropriate unitary operator will set up a simple measurement for which the projectors corresponding to the two outcomes are those given earlier. If, then, we obtain the outcome corresponding to E we can say that the event has happened, but if not we cannot say anything.
We can also imagine a situation in which one can discover an observable with two outcomes and a nontrivial multiplet associated with each outcome. If the first multiplet has k states, we can look for a unitary transformation that maps the k ancilla-histories of our event E to states of this multiplet, and then measure the observable. Obtaining the outcome associated with this first multiplet will then mean that the event E has happened, while measuring the other outcome will mean that the event hasn't happened (in fact that its complementary event has happened). This setup would be like a detector for the event E, but we wouldn't be able to recover the measure of E from such a measurement. Moreover, the final wave function of the particle would still be entangled with the ancillas, so one could not describe it with the wavefunction generated by evolving the particle through some specified subset of the trajectories, and a density matrix would be a better description.
Conclusions
Given any system-event E system , we have provided a set of ancillas, couplings of them to the system and to each other, and an ancilla-event E ancilla which in a certain sense asks whether E system has happened. If an ensemble of "identically prepared" copies of the "system" (and also of the ancillas) is available, then we can "measure the measure" of E system by performing on the ensemble projective measurements that look for E ancilla . If P is the probability of a positive outcome, then µ(E system ) = kP , where the correction factor k is the number of histories comprising E system . Here of course, µ(E) means the measure of E computed in the absence of ancillas, i.e. for the closed system. Furthermore, when our ancilla measurement yields a positive outcome, i.e. when E ancilla happens, then the effective wave function for the system will be obtained -if we employ the usual collapse rule -by propagating its initial wave-function (or density-matrix) forward via the histories in E system .
In light of these results, we can claim in some informal sense that our procedure constitutes a way to verify that the system-event E has happened without disturbing the system more than necessary. One might say that we learn that E happened but we learn no more than this.
If we wish to speak more precisely, we can observe that the couplings induce at the level of the measure a certain correlation between E ancilla and E system , namely the preclusion (µ = 0) of the event, E ancilla ∩E c system , where the superscript c denotes complement. In words, the event, "E ancilla but not E system " cannot happen. If we could employ classical inference then we could conclude that E ancilla =⇒ E system , however this doesn't necessarily follow quantum mechanically. Nevertheless we have in our presentation spoken as if a form of this implication could be assumed. We also pointed out in this connection, that the "converse" event, "E system but not E ancilla ", is not precluded in general. Thus we do not claim, even informally, that the complementary outcome E c ancilla implies that E system has not happened, or that its complement E c system has. It's worth noting that the procedures we have described are already fairly realistic. For the kind of system we have discussed, they are not that far from letting us actually measure the quantum measure of many events E pertaining to the system. The fact that this is possible, even in principle, lends a direct experimental meaning to the quantum measure, similarly to how more familiar schemes of measurement lend experimental meaning to the expectation values of projection operators. In both cases, one has converted a formally defined quantity into a macroscopically accessible number. In saying this, though, we don't mean to imply that the quantum measure has no meaning other than this experimental one. On the contrary, its real purpose is to let one reason directly about the quantum world in itself, without the aid of external observers. But knowing, as we now do, that a direct experimental determination of the measure is also available can only serve to encourage the larger interpretive project.
Our procedure generalizes the "quantum eraser" setup, in which an ancilla is coupled to a double slit experiment such that the interference appears or disappears depending on the basis you measure the ancilla in. In our case, we have generalized that idea to let us select from all the possible trajectories, just the subset we are interested in, the subset that constitutes the event E.
In reference [12], the authors studied a two-site quantum random walk that is equivalent to the particular case of our experiment in which the analyzers are placed in the sequence Z, Y, −Z, −Y, Z, Y, −Z, −Y, ... Coupling ancillas to this random walker and then measuring them in a way similar to the one proposed here would allow one to verify experimentally all the properties of the measure of the system described in that article.
The system we studied herein was rather special, but generalizing our procedure to any other discrete system would present no difficulty, at least if one idealizes every sort of ancilla coupling and every unitary as physically realizable. For continuous systems, one can think of coupling ancillas also with continuous degrees of freedom, but in order to define continuous trajectories one would need to couple an ancilla at each instant of time, which would require infinitely many ancillas. Something similar was done in [7] by coupling one ancilla to the system in every time interval τ and then taking the τ → 0 limit. In that work, the authors were interested in how continuous measurements would affect the evolution of expectation values, so they measured the ancillas immediately, without interposing any unitary operator of the kind presented here.
"year": 2016,
"sha1": "700768e81be1b3593a267a2bc617c81faef596f9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1610.02087",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "700768e81be1b3593a267a2bc617c81faef596f9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Students enjoy learning and broaden their horizons with language at the library in the centre of the school
The school library is one of the best environments for learning. Students love the school library and can feel inspired to learn there. I designed the school library at Meisei Junior and Senior High School, and it is not an ordinary one. The high ceiling, wide tables, wooden structures, earth-tone colors, and bookshelves that reach the top of the ceiling truly give the library a special atmosphere. There are a total of 12 million books available to the students. In addition, there are 65 different magazine titles and 3 million English books. This library also has e-books, Scholastic Bookflix and Trueflix, as elementary schools in the United States do. The school library produced the whole of our English extensive reading program, and then produced a DVD that features English extensive reading. It was featured on NHK, on a cable television program, and in some books. Students also borrow over 20 million books in a year.
Through the leadership skills we can develop together with students, we pursue the idea of "cultivating independence of mind through library committee activities." As students' life circumstances change, more and more individuals are becoming less skillful at participating in cooperative activities. Also, although library committee members should participate in committee activities of their own free will, many members are leaving decisions up to others and do not even go to the library. At this time of reorganizing and reforming our school's educational activities, I would herein like to report on practical research I have conducted with the aim of developing students who reflect honestly on their own strengths and weaknesses and strive to become confident adults.
Foreword
In 1923, Meisei Junior and Senior High School was established as a practical school in Fuchu, Tokyo. Currently, our school is a combined junior and senior high school with 1500 students. In September of 2004, along with the building of new school facilities, a new library with an authoritative and educational atmosphere was built with the aim of becoming a showcase for our Meisei brand. Together with the change from separate all-boys and all-girls schools to a coeducational school, the library committee, which consists of 90 members who have gathered of their own free will, continues to uphold the traditions of this committee.
The library ranked second in the Second L-1 Grand Prix and is now promoting what is called the Tohoku Supportive Action. We practice our library education in consideration of the fact that school libraries are directly connected with the public in general and are a part of our society's social educational system.
Purpose of research
During the period when our school consisted of separate all-boys and all-girls schools, the library committee in the all-boys school was registered as a part of the student council's activities, and in the all-girls school as a "special committee" activity. There were separate committees for junior high and senior high, so there were 4 school library committees in all. Then, as our school became coeducational, it also became a combined junior and senior high school. With these changes, the organization of committees in the school also had to change.
With the completion of the new school facilities, the new library, with a floor space of 660 square meters, became a common meeting place for the whole student body of 1500 students. From the outset, a new coed library committee was set up. While upholding the traditions of each school's library committee, this new coed, junior-and-senior-high combined library committee aims to become a self-motivated PDCA-cycle activity.
Methods and content of research
The new library's management system basically followed the management system of the old boys' school, because it had the longest history, but it incorporated characteristics of the previous 4 schools so as to make a smooth transition to becoming a coed library. We are aiming to make a student-centered library where the students themselves can help with the management of the library.
Figure 2: School's Library Committee
2-3 In 2008, the all-girls "special library committee" was dissolved and this committee was incorporated into the school's Library Committee, as one committee in the student council. At the student council's annual general meeting, the Library Committee carried out a presentation (handmade by the students themselves).
2-4 In 2009, the separate junior high and senior high library committees were combined and officially became The Junior and Senior High School Library Committee. In this way the combined junior and senior high school educational activity was initiated.
2-5 In 2011, for the Meisei School Festival, the Library Committee officially registered its activity as The Library Committee. With all committee members participating, a story reading was held in the library.
Figure 3: A story reading in the library at the school festival
2-6 In 2011, we entered the Second L-1 Grand Prix and came in second place. As representatives of our school's Library Committee, members from junior high 1st year to senior high 3rd year made presentations. It was the first time for a student under 10 years of age to participate in a presentation at this contest, so this fact gained much attention.
2-7
In 2012, we entered the Second L-1 Grand Prix for the second time and we took first place.
We carried out the following two Library Committee cooperative projects: the "Shanti Volunteer Association" project and "The Great East Japan Earthquake relief work." As part of the "Run Tohoku! Mobile Library Project" we made and sent wall newspapers to Odate Town Library and Kanamachi Library. We have strived to encourage the members of our school's Library Committee to think about what our committee's role should be as part of the Student Council organization. One of the strong points of being a combined junior and senior high school is that senior high school students can lead and do joint activities. By working in small groups and meeting often, the students have learned to share their ideas openly. They have discussed together the problems and management methods of our committee, and by doing so they have learned how to overcome various problems.
The School Library has thus become a place for learning from each other. The students do not arbitrarily choose a leader of the committee; rather, through daily activities, leadership is developed synergistically, and we feel this is what is important. By watching over the students' activities, we see them learn how to move a big organization forward, and in doing so they acquire skills they will need in society. They will learn how to express themselves while at the same time learning to respect and nurture the abilities of others, and thus they will become global citizens.
In this day and age of a low birth rate, it is at times difficult to attract new library committee members. Changes in student council activities and their organization will of course affect the organization of the School Library Committee. Furthermore, students are now leading very busy lives at school and at home, yet we feel it is important to encourage students to develop their own leadership skills and nurture a spirit of wanting to help others. The new age is demanding that changes be made in the field of education. It is important that we communicate with our students and provide them with a space where they can make their abilities come to life. This can be done by learning together from our experiences. The school library has a direct connection to society. It is our great wish to nurture students who will be able to contribute to society. However, how well they can do this is greatly affected by the experiences they have during their schooling and the quality of their education. Library education is thus very important, and the school library committee organization can play a great role in each student's schooling.
Global Understanding Education Fostered by Library Education
~Establishing an Extensive English Reading Program and Supporting Systems in School Libraries~
Overview
From the founding of our school we built a school library with the hope of creating a parallel universe inside the school. To create the "heart" of the school's education, the affirmative book-reading activities of students should be rooted in the aim of "learning a language, cultivating sensitivity, enhancing expressiveness and stimulating creativity." Reading activities have to be something that teaches students how to build strength in themselves so that they can live a meaningful life. Furthermore, reading activities will gain liveliness when the needs and consciousness of students bond together with the needs of the times, and thus students can interact and learn from each other. We believe that all school libraries must make efforts to improve the level of global understanding and education in the 21st century with firm faith, promote extensive reading education, and consolidate an environment that helps students' learning and reading activities and their global understanding. In this article, I will present the activities and practical research performed by school libraries which are providing educational activities to support their students.
Foreword
In 1923, Meisei Junior and Senior High School was established as a practical school in Fuchu, Tokyo. Currently, our school is a combined junior and senior high school with 1500 students. In September of 2014, management software was installed in the library in concert with the union of the two libraries in the school. The united library, which has an authoritative and educational atmosphere, contains an estimated 200 thousand books. The library, a parallel universe in our school, has become many of our students' "oasis." And the library committee, which consists of 80 members who have gathered of their own free will, continues to uphold the tradition of this committee. The library ranked second in the Second L-1 Grand Prix and is now promoting what is called the Tohoku Supportive Action. We practice our library education in consideration of the fact that school libraries are directly connected with the public in general and are a part of our society's social educational system.
Figure 7: Students enjoy active learning
Young people nowadays are not used to reading. In order to encourage them to read more books, we thought it necessary to rebuild their reading experiences. Although we feel the need to encourage students to read more fairy tales, folktales and biographies, which have not been popular among young people, we understand that it is normal for them to feel some resistance to reading such books during adolescence. We think it is important to carry out an extensive English reading program at this time of major change in children's reading activities. Students will spend more time reading in multiple languages and will enjoy reading textbooks and picture books from English-speaking countries at their own will. School libraries are now being put to the test as to whether they offer students better learning environments and learning activity support. When a school library becomes the "heart" of the school and starts to beat, it will pump energy out to the students.
Purpose of performing research
To give students who are not used to reading the experience of discovering that reading is fun, we feel it is important to offer them the sense of accomplishment of reading an entire book. Simplified, rewritten books of the kind read by children at the introductory reading age in English-speaking countries can provide a fertile reading experience for Japanese children.
Famous folktales by major authors such as the Brothers Grimm, Andersen and Shakespeare can become a good introduction to international education. In addition, we thought that reading English books would remind students of the experience of reading their first book and give them a sense of accomplishment.
By preparing many textbooks and supplementary readers published in English-speaking countries, we can meet the diverse needs of our students, and they can share the excitement of reading with their friends since each of them reads books they are interested in. A growing desire to read leads them to improve their four skills (listening, reading, speaking and writing), and they come to recognize the importance of these four skills when learning a foreign language. I think school libraries can lay out the framework for such learning activities. From the perspective of library administration, the utilization rate and lending rate will increase, and school libraries will be rejuvenated by shifting their systems to a new level from the users' point of view. Students can read foreign books whenever they want, and as the utilization rate, lending rate and interest in reading grow, the school library becomes a crucial service in the school.
Method and content of research
We decided to start an early English extensive reading program from within the classroom. We proposed setting up a completely new and unique extensive reading program and decided to utilize the school library, which holds many English titles. We considered how to create a program that would assist the students to the utmost. With the aim of selecting and increasing the number of English books in the school library, we investigated what kinds of books each English publisher offers. We even went to England to explain the situation of our school library, and when a new book series was to be published we stated our preferences; by doing so we were able to build strong connections with various publishers. As a library, we looked into obtaining authors' copyright permission. We also researched different ways of purchasing English books, how to file English books in a library, and how to update the management software so as to keep records of each student's reading. We researched how to do word counts for each book and how to record them in each book's bibliographic record so that students can use this information in their extensive reading. We investigated how to develop classes that would stimulate the students; for example, as part of our extensive reading program we incorporated shadowing very early on and made it a standard procedure.
We looked into how to organize reading materials so that they could be used easily, and how to lessen teachers' workload when an extensive reading program was introduced into the curriculum. We also looked into how to keep records of each student's reading history (we made a pre-reading sheet to be filled out before reading a certain series) and created a time schedule for the extensive reading program. We studied the necessity of sound recognition in foreign language acquisition and researched how to incorporate shadowing. We looked at how input-style and output-style teaching affect students' development of the four basic skills in foreign language acquisition, and at correlations between the number of books read by students and their accumulated word counts on the one hand and STEP test results on the other. Looking at the needs of each student, we continually offered after-school reading advice. We have made it a point to closely observe our students' progress and continue to strive to improve our school library and extensive reading program. With the aim of increasing the library usage rate, we conducted 28 hours of classes per week. We have increased the number of English books; the English book count is now 38 thousand. The number of books borrowed has increased and is now over 200 thousand per year. We now have students with an accumulated word count of over 1 million words. We have signed contracts for copyright usage. The number of students who take the STEP test has increased. Authors have visited our school, and we have held learning sessions for our students.
Through the use of speaker equipment, many students' listening skills have advanced.
Biographical note
The school library produced the entire English extensive reading program. The library then produced a DVD featuring English extensive reading, listening, dictation and shadowing. This is a unique study program found only at Meisei Gakuen Junior and Senior High School. It has been featured on NHK, on a cable television program, and in several books. Meisei Gakuen Junior and Senior High School students attend an English extensive reading class in the library once a week.
There is an after-school program called Power up TADOKU that offers extensive English reading to students of all grade levels. The program's inclusion of students of all ages is a unique model within the traditional Japanese school system. There are now 43 English extensive reading classes a week in the library. High school students borrow over 20 million titles from the Meisei Gakuen Junior & Senior High School library in a year. Students love the school library.
"year": 2016,
"sha1": "648d3c40e668b6564ebc1429f6b67dde24532710",
"oa_license": null,
"oa_url": "https://journals.library.ualberta.ca/slw/index.php/iasl/article/download/7208/4207",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f2568b057a06dbbea2c0f4b56912a99e793f9db0",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Anatomical variant of the liver blood supply
Vascular variations are significant for liver transplantation, radiological procedures, laparoscopic surgery and the management of penetrating injuries involving the hepatic area. Such variants are very common in the abdominal region, and their description is useful. During a routine dissection of a 73 year old female cadaver, we found in the subhepatic region that the blood supply of the liver differed from the normal pattern. The difference was the absence of the right hepatic branch and of the cystic artery, which normally arise from the common hepatic artery. After a detailed dissection of the superior mesenteric artery we identified a branch that was routed to the right lobe of the liver. The diameter of this vessel was 3.7 mm and its length 8.2 cm. Along the artery's course, three consecutive branches were observed. The first branch was found about 2.02 cm before the portal region of the liver. The second one became visible after another millimeter, and finally the artery made a small curve and became the cystic artery.
Introduction
The variations of hepatic vascular structures are of great importance for general surgery, particularly hepatic surgery. Vascular variations are significant for liver transplantation, radiological procedures, laparoscopic surgery and the management of penetrating injuries involving the hepatic area. Anatomical knowledge of liver vascular variants is essential for reducing operative and postoperative morbidity and mortality in donors and recipients. Recently, owing to the increase in the number of liver transplants, the importance of hepatic artery anatomy has become obvious. An abnormal arterial supply to the liver is usually asymptomatic until the supply to the visceral organs is also interrupted. Such variants are often detected during diagnostic angiography. Vascular variants are very common in the abdominal region, and their description and study are useful.
During dissection of the cadaver we found that the blood supply of the liver differed from the normal pattern. The following variations were observed in a 73 year old female cadaver during dissection of the subhepatic region. Unfortunately, the medical history of the cadaver was not known. At the level of the T12 vertebra, the celiac trunk, arising from the abdominal aorta, was observed. The branches of this trunk showed normal trifurcation, forming the left gastric, splenic and common hepatic arteries. The first branch of the common hepatic artery (CHA) was the right gastric artery; after this separation the CHA gave off two branches, the proper hepatic artery (PHA) and the gastroduodenal artery.
The vascular system of the celiac trunk and the branches originating from it were normal, and we did not find any deviation. The difference was the absence of the right hepatic branch, which normally arises from the PHA. In our case the PHA did not divide closer to the portal region of the liver, and we could not identify the right hepatic artery (RHA) or the cystic artery (CA). This raised our interest in investigating where the blood supply of the right lobe of the liver came from. We focused our attention on another branch of the abdominal aorta, the SMA, which arose 3.4 cm below the celiac trunk (Figure 1). The course and the branching pattern of the SMA were documented and recorded using a digital camera. In this region we found numerous ileal arteries, which supply the ileum. After a detailed dissection we identified a branch that was routed to the right lobe of the liver. This branch was the first branch of the SMA and ascended to the portal region of the liver. The diameter of this vessel was 3.7 mm and its length 8.2 cm. Along the artery's course, three consecutive branches were observed (Figure 2). The first branch was found about 2.02 cm before the portal region of the liver. The second one became visible after another millimeter, and finally the artery made a small curve and became the cystic artery. The first two branches entered the right lobe of the liver and replaced the absent RHA. In summary, the absent RHA and CA were replaced by an accessory right hepatic artery and its branches.
Discussion
This type of liver blood supply is already known. International classifications describing the vascular variations of the liver have been proposed by many authors, such as Adachi [2], Michels [7], Hiatt et al. [6], and Abdullah et al. [1]. These investigations were based on large groups of subjects. However, some variants of liver blood supply are not covered by these classifications; this is one reason why every report of an abnormal arterial supply of the liver is helpful for the development of future classifications. Galen was the first anatomist to study the arterial system arising from the celiac trunk and to observe the arteries leading to the liver, stomach and spleen. Later, in the sixteenth century, Andreas Vesalius gave anatomical descriptions of Galen's discoveries, commenting on the CHA and the splenic artery. The hepatic artery variations described by Michels in 1966 [7] were based on cadaver dissections. The variants of the hepatic artery have their origin in embryonic development. At the time of angiogenesis of the celiac trunk (CT), the most important vessels are the ventral splanchnic arteries, which arise directly from the embryonic aorta. The splanchnic arteries sprout four individual branches and many longitudinal anastomoses at different levels. The first main branch, the primitive celiac axis, gives rise to the normal branches of the CT, namely the splenic, left gastric and common hepatic arteries. The next two branches are obliterated, and the last one becomes the SMA. The variation in which the RHA is absent and totally replaced by an accessory right hepatic artery from the SMA is a rarer variant [2,7], because embryology does not readily explain the origin of this anomaly; this variant, as in our case, remains a riddle for science.
As mentioned previously, according to the Adachi [2] classification, our case belongs to group 17. This group includes variations with an extant accessory left or right artery arising from the SMA. According to Adachi, this anomaly was present in 0.4% of the observed cadavers. Another classification, by Michels [7], shows that approximately 9% of cases belong to class 3, with the artery originating from the superior mesenteric artery, as in our case. In the De Cecco [4] classification, variants with an accessory right artery are very rare and occur in only 4%. From the above facts, it is obvious that an accessory right artery arising from the SMA is very rare, and we cannot give a definitive answer from the point of view of embryonic development. The percentages reported by Adachi, Michels and De Cecco show that this variation is a notable exception. At the end of our case report we conclude that every reported vessel variation will be helpful for future investigations, for surgical practice and for liver transplantation.
"year": 2015,
"sha1": "949f305c53abe14bb2680eeef90ede947a94597f",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc4632906?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "949f305c53abe14bb2680eeef90ede947a94597f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The role of lecithin on topical anti-inflammatory activity of turmeric (Curcuma longa L.) ointment
Somayeh Esmaeili, Saleh Omid-Malayeri, Homa Hajimehdipoor*, Hamid Reza Rasekh, Hamid Reza Moghimi, Soheil Omid-Malayeri, Roya Yaraee, Mohammad Reza Jalali Nadoushan

Traditional Medicine and Materia Medica Research Center and Department of Traditional Pharmacy, School of Traditional Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Pharmaceutical Sciences Branch, Islamic Azad University, Tehran, Iran; Department of Pharmacoeconomy & Administrative Pharmacy, School of Pharmacy, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Pharmaceutics, School of Pharmacy, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Student Research Committee, School of Medicine, Shahed University, Tehran, Iran; Department of Immunology, School of Medicine, Shahed University, Tehran, Iran; Department of Pathology and Anatomy, School of Medicine, Shahed University, Tehran, Iran
Introduction
Curcuma longa L. (turmeric), a perennial herb, is a member of the Zingiberaceae family and has been used as an ethno-medicine from time immemorial in the traditional medicine of many countries, especially Iranian traditional medicine, traditional Chinese medicine and the Ayurvedic system [1,2]. It has a long tradition of use, particularly as an anti-inflammatory agent and, orally, for the treatment of flatulence, jaundice, menstrual difficulties, hematuria, hemorrhage and colic. The rhizome of the plant is used extensively in foods for both flavor and color [1]. The biological activities of turmeric rhizome are mainly attributed to the presence of phenolic compounds (curcuminoids, yellow pigments) and terpenoids (mono- and sesquiterpenoids). The β-dicarbonyl system in curcuminoids carries conjugated double bonds, which provide the anti-inflammatory potency. Moreover, the diene-ketone system confers lipophilicity on the compounds and thus probably better skin penetration [1]. The plant is therefore a good candidate for use as an anti-inflammatory agent in topical dosage forms. In the Indian system of medicine, turmeric is a household remedy for reducing pain, swelling, wound injury and various types of inflammation [3]. Several topical preparations have been made from the plant, especially in combination with other plants. In Iranian folk medicine, a mixture of turmeric powder and egg yolk is the common treatment for strains and dislocations, which cause inflammation. Since the anti-inflammatory properties of turmeric have been established [4][5][6], egg yolk may play a special role in the formulation. Lecithin is a major constituent of egg yolk and has a phospholipid structure. It is a complex mixture of phosphatides, consisting mainly of phosphatidyl choline, phosphatidyl ethanolamine, phosphatidyl serine and phosphatidyl inositol along with other substances such as triglycerides and fatty acids. The main sources of lecithin are soya beans and egg yolk. Lecithin varies greatly in its physical form, from viscous semi-liquid to powder, depending on the free fatty acid content. Its color may also vary from brown to light yellow depending on whether it is bleached or unbleached. Lecithin acts as an enhancer for the absorption of some compounds [7]. Therefore, the reason for using egg yolk in the formulation may be to increase the penetration of turmeric constituents and thereby enhance its efficacy. Despite the wide folk use of turmeric/egg yolk for strains and dislocations, there is no academic study of this formulation, its efficacy and the role of egg yolk in the mixture. In the present investigation, turmeric ointments were prepared using different concentrations of turmeric extract with and without lecithin, and their efficacy was evaluated in an arthritis model in the rat.
Ethical consideration
Ethical Committee of Shahid Beheshti University of Medical Sciences approved this study with the code of 107-90/07/24.
Chemicals
Lecithin E80 (Egg lecithin) was purchased from lipoid Co, USA. All other chemicals and solvents were from Merck Co, Germany.
Plant materials
Rhizome of Curcuma longa L. was purchased from Tehran herbal market and identified in Herbarium of Traditional Medicine & Materia Medica Research Center, SBMU, Tehran, Iran. A sample with code of HMS 337 was kept in the Herbarium. Then the rhizomes were powdered.
Plant extraction
The plant powder was extracted with 80% ethanol by maceration (plant:solvent 1:5) for 4 days. Every 24 hours, the solvent was renewed. The combined extracts were concentrated under reduced pressure and freeze dried.
Quantitation of dicinnamoyl methane derivatives in turmeric extract
A spectrophotometric method was used for the quantitative determination of dicinnamoyl methane derivatives, expressed as curcumin, at 530 nm in the dried extract [8].
Preparation of turmeric/lecithin ointments
Beeswax, liquid paraffin, eucerin and vaseline were used to prepare the ointment base. Because the ointment is intended for strains and dislocations, its viscosity is very important so that it stays on the injured area for a long time. In order to obtain a suitable viscosity, different ointment bases with various concentrations of constituents were prepared and the best one was selected. Then two concentrations of turmeric extract (2.5 & 5 %) and of lecithin (5 & 15 %) were added to the ointment base.
Anti-inflammatory investigation of the ointments in arthritis model in rat
Fifty four male Wistar rats, 150-200 g, were maintained under a 12 hour light-dark cycle in a temperature and humidity controlled room. The rats were allowed free access to standard laboratory feed and water before the experiment. The animals were divided into nine groups of six rats each, as follows: group 1, turmeric 2.5 % ointment; group 2, turmeric 5 % ointment; group 3, turmeric 2.5 % and lecithin 5 % ointment; group 4, turmeric 2.5 % and lecithin 15 % ointment; group 5, turmeric 5 % and lecithin 5 % ointment; group 6, turmeric 5 % and lecithin 15 % ointment; group 7, ointment base; group 8, Piroxicam gel; group 9, no arthritis induced. One gram of each ointment/gel was applied to the right hind wrist joint of the rats every day.
In order to induce arthritis, 0.05 ml of complete Freund's adjuvant (CFA) was injected subcutaneously (s.c.) into the tibio-tarsal region of the right hind wrist joint of the rats. Inflammation started 3-4 days after injection and reached its maximum at day 14. On the 15th day, treatment was started. The ointments were applied daily to the inflamed joints for 20 days. Before starting treatment and on the 20th day after treatment, the arthritis index was determined in each group using a visual scoring system ranging from 0 to 4: 0, wrist with no swelling and no focal redness; 1, redness without swelling; 2, redness with mild swelling; 3, redness with severe swelling; 4, redness with severe swelling and difficulty in movement. On the 20th day after treatment, serum was collected from peripheral blood. TNF-α in serum was measured using an ELISA kit (Enzo Life, USA) at 450 nm. Each sample was assayed in triplicate. For histopathological examination, the hind wrist joints were amputated, fixed in 10% neutral-buffered formalin, decalcified in 5% formic acid and embedded in paraffin. Sections (5 µm) were stained with haematoxylin and eosin (H & E) and examined microscopically. A blinded observer evaluated the samples for synovial proliferation, cellular infiltration, pannus formation and cartilage erosion using the following scoring system. Synovial proliferation: grade 0, proliferation absent; grade 1, mild proliferation with two to four layers of reactive synoviocytes; grade 2, moderate proliferation with more than four layers of reactive synoviocytes, increased mitotic activity and mild or absent synovial cell invasion of adjacent bone and connective tissue; grade 3, severe proliferation characterized by invasion and effacement of the joint space and adjacent cartilage, bone and connective tissue. Cellular infiltration: grade 0, no changes; grade 1, few focal infiltrates; grade 2, extensive focal infiltrates; grade 3, extensive infiltrates invading the capsule with aggregate formation. Cartilage erosion: grade 0, no changes; grade 1, superficial, localized cartilage degradation in more than one region; grade 2, localized deep cartilage degradation; grade 3, extensive deep cartilage degradation at several locations. Pannus formation: grade 0, no changes; grade 1, pannus formation at up to two sites; grade 2, pannus formation at up to four sites, with infiltration or flat overgrowth of the joint surface; grade 3, pannus formation at more than four sites or extensive pannus formation at two sites [9].
Statistical analysis
Data were expressed as mean ± SD. One-way analysis of variance (ANOVA) followed by Tukey's post-test was used to determine significant differences between group means. A paired t-test was used to assess differences before and after treatment within each group. P < 0.05 was considered statistically significant.
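The following is a minimal sketch, not the authors' actual analysis script, of how the workflow just described could be reproduced in Python with SciPy; the group labels mirror the study design, but all numerical values are invented placeholders.

```python
# Illustrative sketch of the statistical workflow described above; all values are placeholders.
import numpy as np
from scipy import stats

# Hypothetical serum TNF-alpha readings for three of the nine groups (n = 6 rats each).
turmeric_2_5 = np.array([42.1, 39.8, 44.0, 41.5, 40.2, 43.3])   # group 1: turmeric 2.5 % ointment
ointment_base = np.array([61.4, 58.9, 63.2, 60.0, 62.5, 59.7])  # group 7: ointment base
piroxicam = np.array([45.6, 47.1, 44.8, 46.3, 45.0, 46.9])      # group 8: Piroxicam gel

# One-way ANOVA across groups, then Tukey's HSD post-test
# (scipy.stats.tukey_hsd needs a recent SciPy; statsmodels' pairwise_tukeyhsd is an alternative).
f_stat, p_anova = stats.f_oneway(turmeric_2_5, ointment_base, piroxicam)
tukey = stats.tukey_hsd(turmeric_2_5, ointment_base, piroxicam)

# Paired t-test comparing values before and after treatment within one group (placeholder "before" data).
before_treatment = np.array([70.2, 68.5, 71.0, 69.4, 70.8, 69.9])
t_stat, p_paired = stats.ttest_rel(before_treatment, turmeric_2_5)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(tukey)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f} (significant if p < 0.05)")
```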
Dicinnamoyl methane derivatives content in turmeric extract
The dicinnamoyl methane derivative content of the dried turmeric extract was found to be 10.68 ± 0.75 %.
Preparation of the ointment
After testing different proportions of the ointment base constituents and determining the physical characteristics of the prepared bases, the base consisting of beeswax 7 %, liquid paraffin 3.5 %, eucerin 14 % and vaseline 75.5 % was found to be the best formulation. This formulation showed suitable appearance and viscosity. Turmeric and lecithin were then added to this ointment base.
Pharmacological experiment
The arthritis index decreased in all groups, but the decrease was significant only in the turmeric 2.5 % and the turmeric 5 % with lecithin 15 % groups (P < 0.05) (Fig. 1). The results of the TNF-α assay are shown in Fig. 2. TNF-α decreased after treatment with all turmeric/lecithin ointments (P < 0.05), but the reduction was greatest in group 1, which received the turmeric 2.5 % ointment. This formulation was even more effective than Piroxicam, which was used as the positive control. No significant difference was observed between the treatment groups (P > 0.05).
The results of the histopathological examination are shown in Fig. 3, which gives the total score of the histological changes of the joints, including synovial proliferation, cellular infiltration, pannus formation and cartilage erosion. The turmeric 2.5 %, turmeric 5 %, turmeric 5 % plus lecithin 5 %, turmeric 5 % plus lecithin 15 % and Piroxicam groups differed significantly from the ointment base group, although the turmeric 5 % plus lecithin 15 % group was less effective than the others. There was no significant difference between the turmeric 2.5 %, turmeric 5 %, turmeric 5 % plus lecithin 5 % and Piroxicam groups; these groups therefore reduced the histopathological changes to the same extent as Piroxicam. Fig. 4 shows representative histopathological findings.
Discussion
Turmeric has been used in traditional medicine for a long time. It is used in various disorders including rheumatism, fever, dyspepsia, intestinal parasites, hepatic failure and skin disorders [1]. In Iranian folk medicine, a mixture of turmeric powder and egg yolk is used externally for strains and dislocations. Since strains and dislocations are followed by inflammation, which causes joint pain and immobility, the rationale for using turmeric in this situation is its anti-inflammatory properties [10]; the role of egg yolk in the formulation, however, has not been known. Egg yolk contains lecithin. In another study, nanoemulsion formulations with different enhancers including lecithin were prepared in order to improve the poor penetration of curcumin, and the results showed that the nanoemulsions significantly enhanced curcumin penetration [19]. In a study conducted on rat skin, curcumin absorption from a turmeric gel in the presence of lecithin was investigated, and it was established that lecithin could increase curcumin absorption [20]; it could therefore be expected that lecithin would increase the absorption of turmeric constituents and enhance their anti-inflammatory effect. In the present investigation, herbal ointments were prepared using different concentrations of turmeric extract (2.5 & 5 %) with or without lecithin (5 & 15 %), and their effects on chronic inflammation of the rat wrist joint induced by complete Freund's adjuvant were assessed by determination of the arthritis index, the TNF-α concentration and the histopathological changes of the joints. Comparison of the results showed not only that lecithin had no effect on the anti-inflammatory properties of the turmeric extract, but also that the turmeric 2.5 % ointment showed the best activity. In previous studies, turmeric formulations were prepared with low concentrations of turmeric, and lecithin was used to enhance penetration and thus drug efficacy; in our study, however, the formulation was based on folk knowledge and consisted of high concentrations of turmeric. The efficacy of this high-concentration formulation was enhanced neither by increasing the turmeric content nor by adding lecithin, showing that lecithin did not act as a penetration enhancer here; in the folk remedy, the egg yolk appears instead to serve as a binder that keeps the formulation on the injury site for several hours. In folk medicine the mixture is spread on a cotton dressing like a paste and attached to the inflamed area; turmeric alone therefore acts as the anti-inflammatory agent in the formulation, as demonstrated in previous investigations [4][5][6]. It has been established that curcumin is the component chiefly responsible for the plant's activity and has potent anti-inflammatory properties. It is capable of interacting with numerous molecular targets involved in inflammation. Curcumin modulates the inflammatory response by down-regulating the activity of cyclooxygenase-2 (COX-2), lipoxygenase and inducible nitric oxide synthase (iNOS); it inhibits the production of the inflammatory cytokines tumor necrosis factor-alpha (TNF-α), interleukin (IL) -1, -2, -6, -8 and -12, monocyte chemoattractant protein (MCP) and migration inhibitory protein; and it down-regulates mitogen-activated protein kinases and Janus kinases [21][22][23][24][25]. The results of the present study are in agreement with the effect of curcumin on TNF-α, which is a key factor in inflammatory disorders such as rheumatoid arthritis.
Conclusion
In summary, in order to prepare, on an industrial scale, an anti-inflammatory dosage form for strains and dislocations according to Iranian folk medicine, the turmeric 2.5 % ointment is the ideal form because of its good physical characteristics, its better results compared with the other turmeric/lecithin formulations, and its lower turmeric concentration, which makes it cheaper and is economically important.
Author contributions
Homa Hajimehdipoor and Somayeh Esmaeili supervised the formulation part and the data analysis; they also edited the manuscript. Saleh Omid-Malayeri and Soheil Omid-Malayeri performed the experimental studies. Saleh Omid-Malayeri prepared the manuscript. Hamid Reza Rasekh supervised the pharmacological part. Hamid Reza Moghimi was involved in the study design. Roya Yaraee and Mohammad Reza Jalali Nadoushan were involved in the TNF-α analysis and the pathology part, respectively.
"year": 2020,
"sha1": "e4eaea9ec402ed9cbead5b0a3d6c58a75e490a01",
"oa_license": "CCBYNC",
"oa_url": "http://jmp.ir/files/site1/user_files_8d9738/homahajimehdipoor-A-10-1105-6-d4621ed.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8a761017605f2f3c247518e1b090853ba84253d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Classical dynamics of strings and branes, with application to vortons
These notes offer an introductory overview of the essentials of classical brane dynamics in a space-time background of arbitrary dimension, using a systematic geometric treatment emphasising the role of the second fundamental tensor and its trace, the curvature vector $K^\mu$. This approach is applied to the problem of stability of vorton equilibrium states of cosmic string loops in an ordinary 4-dimensional background.
The first fundamental tensor
Earlier treatments of the classical dynamics of strings and higher p-branes were inclined to rely too much on gauge dependent auxiliary structures such as internal coordinates σ i on the d=p+1 dimensional worldsheet, which can be useful for various computational purposes but tend to obscure what is essential. The present notes offer an introductory overview of a more geometrically elegant approach [1] that is particularly useful for work in a background spacetime whose dimension n is 5 or more [2,3,4], but that I originally developed for the purpose of studying cosmic string loops and particularly the question of the stability of their vorton equilibrium states [5] in a background of dimension n=4. Following the strategy originally advocated by Stachel [6], the guiding principle of this approach [1] is to work as far as possible with a single kind of tensor index, which must of course be the one that is most fundamental, namely that of the n-dimensional coordinates, x µ , on the background spacetime with metric g µν .
The idea is to avoid unnecessary use of the internal coordinate indices, which are lowered and raised by contraction with the induced metric $\eta_{ij} = g_{\mu\nu}x^{\mu}_{\ ,i}x^{\nu}_{\ ,j}$ (using the notation $x^{\mu}_{\ ,i} = \partial x^{\mu}/\partial\sigma^{i}$) on the worldsheet, and with its contravariant inverse $\eta^{ij}$. This is achieved by working instead with the (first) fundamental tensor as given by projection back onto the background according to the prescription
$$\eta^{\mu\nu} = \eta^{ij}\,x^{\mu}_{\ ,i}\,x^{\nu}_{\ ,j}$$
(in the manner that is applicable to the contravariant version of any worldsheet tensor), so that $\eta^{\mu}{}_{\nu}$ will be the tangential projector. The complementary orthogonal projector is $\perp^{\mu}{}_{\nu} = g^{\mu}{}_{\nu} - \eta^{\mu}{}_{\nu}$. As well as having the properties $\eta^{\mu}{}_{\rho}\eta^{\rho}{}_{\nu} = \eta^{\mu}{}_{\nu}$ and $\perp^{\mu}{}_{\rho}\perp^{\rho}{}_{\nu} = \perp^{\mu}{}_{\nu}$, these projection tensors are evidently related by $\eta^{\mu}{}_{\rho}\perp^{\rho}{}_{\nu} = 0 = \perp^{\mu}{}_{\rho}\eta^{\rho}{}_{\nu}$.
The second fundamental tensor
In so far as we are concerned with tensor fields such as the frame vectors whose support is confined to the d-dimensional worldsheet, the effect of Riemannian covariant differentiation $\nabla_{\mu}$ along an arbitrary direction in the background spacetime will not be well defined; only the corresponding tangentially projected differentiation operation $\overline\nabla_{\mu} = \eta^{\nu}{}_{\mu}\nabla_{\nu}$ is meaningful for them, as for instance in the case of a scalar field $\varphi$, for which the tangentially projected gradient is given in terms of internal coordinate differentiation simply by $\overline\nabla^{\mu}\varphi = \eta^{ij}x^{\mu}_{\ ,i}\varphi_{,j}$. The action of this operator on the first fundamental tensor $\eta_{\mu\nu}$ itself gives the entity that we refer to [1] as the second fundamental tensor,
$$K_{\mu\nu}{}^{\rho} = \eta^{\sigma}{}_{\nu}\,\overline\nabla_{\mu}\,\eta_{\sigma}{}^{\rho}. \qquad (3)$$
As this second fundamental tensor, $K_{\mu\nu}{}^{\rho}$, will play an important role in the work that follows, it is worth lingering [1] over its essential properties. The expression (3) could of course be meaningfully applied not only to the fundamental projection tensor of a d-surface, but also to any (smooth) field of rank-d projection operators $\eta^{\mu}{}_{\nu}$ as specified by a field of arbitrarily orientated d-surface elements. What distinguishes the integrable case - in which the elements mesh together to form a well defined d-surface through the point under consideration - is the Weingarten identity
$$K_{[\mu\nu]}{}^{\rho} = 0 ,$$
whereby the tensor defined by (3) acquires this symmetry property, an integrability condition that is derivable [1] as a version of the well known Frobenius theorem.
As well as being symmetric, the tensor $K_{\mu\nu}{}^{\rho}$ is obviously tangential on the first two indices and also orthogonal on the last: $\perp^{\sigma}{}_{\mu}K_{\sigma\nu}{}^{\rho} = K_{\mu\nu}{}^{\sigma}\eta_{\sigma}{}^{\rho} = 0$. It fully determines the tangential derivatives of the first fundamental tensor $\eta^{\mu}{}_{\nu}$ by the formula
$$\overline\nabla_{\mu}\,\eta_{\nu\rho} = 2\,K_{\mu(\nu\rho)}$$
(using round brackets to denote symmetrisation), and it is characterisable by the condition that the orthogonal projection of the acceleration $\dot u^{\mu} = u^{\nu}\nabla_{\nu}u^{\mu}$ of any tangential unit vector field $u^{\mu}$ (with $u^{\mu}u_{\mu} = -1$) will be given by $u^{\mu}u^{\nu}K_{\mu\nu}{}^{\rho} = \perp^{\rho}{}_{\mu}\dot u^{\mu}$.
Extrinsic curvature vector and Conformation tensor
It is very practical for a great many purposes to introduce the extrinsic curvature vector $K^{\mu}$, defined [1] as the trace of the second fundamental tensor,
$$K^{\nu} = K_{\mu}{}^{\mu\nu} = \overline\nabla_{\mu}\,\eta^{\mu\nu}, \qquad (6)$$
which is automatically orthogonal to the worldsheet, $\eta^{\mu}{}_{\nu}K^{\nu} = 0$. It is useful for many specific purposes to work this out in terms of the intrinsic metric $\eta_{ij}$ and its determinant $|\eta|$. For the tangentially projected gradient of a scalar field $\varphi$ on the worldsheet, it suffices to use the simple expression $\overline\nabla^{\mu}\varphi = \eta^{ij}x^{\mu}_{\ ,i}\varphi_{,j}$. However for a tensorial field (unless one is using Minkowski coordinates in a flat spacetime) the gradient will also have contributions involving the background Riemann-Christoffel connection $\Gamma^{\nu}{}_{\mu\rho} = g^{\nu\sigma}\big(g_{\sigma(\mu,\rho)} - \tfrac{1}{2}g_{\mu\rho,\sigma}\big)$.
The curvature vector is thus obtained in explicit detail as
$$K^{\nu} = \frac{1}{\sqrt{|\eta|}}\,\partial_{i}\Big(\sqrt{|\eta|}\,\eta^{ij}x^{\nu}_{\ ,j}\Big) + \Gamma^{\nu}{}_{\mu\rho}\,\eta^{ij}x^{\mu}_{\ ,i}x^{\rho}_{\ ,j}. \qquad (7)$$
This expression is useful for specific computational purposes, but much of the literature on cosmic string dynamics has been made unnecessarily heavy by a tradition of working all the time with long strings of non tensorial terms such as those on the right of (7), rather than exploiting more succinct tensorial expressions such as $K^{\nu} = \overline\nabla_{\mu}\eta^{\mu\nu}$.
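As an elementary illustration of these definitions (a consistency check added here for the reader, not taken from [1]), consider the lowest dimensional case d = 1, a timelike worldline with unit tangent $u^{\mu}$ ($u^{\mu}u_{\mu} = -1$), for which the definitions above give

$$\eta^{\mu\nu} = -\,u^{\mu}u^{\nu}, \qquad K_{\mu\nu}{}^{\rho} = u_{\mu}u_{\nu}\,\dot u^{\rho}, \qquad \dot u^{\rho} = u^{\sigma}\nabla_{\sigma}u^{\rho},$$

so that the vanishing of the curvature vector (equivalently, of the whole second fundamental tensor) is just the statement that the worldline is a geodesic.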
As an alternative to the universally applicable tensorial approach advocated here, there is of course another more commonly used method of achieving succinctness in particular circumstances, which is to sacrifice gauge covariance by using specialised kinds of coordinate system. In particular, for the case of a string, i.e. for a 2-dimensional worldsheet, it is standard practice to use conformal coordinates $\sigma^{0}$ and $\sigma^{1}$, chosen so that the corresponding tangent vectors $\dot x^{\mu} = \partial x^{\mu}/\partial\sigma^{0}$ and $x^{\prime\mu} = \partial x^{\mu}/\partial\sigma^{1}$ satisfy $\dot x^{\mu}x^{\prime}_{\mu} = 0$ and $\dot x^{\mu}\dot x_{\mu} + x^{\prime\mu}x^{\prime}_{\mu} = 0$. The physical specification of the extrinsic curvature vector (6) for a timelike d-surface in a dynamic theory provides what can be taken as the equations of extrinsic motion of the d-surface [1], the simplest possibility being the "harmonic" condition $K^{\mu} = 0$ that is obtained (as shown below) from a surface measure variational principle such as that of the Dirac membrane model [7], or of the Goto-Nambu string model [8], whose dynamic equations in a flat background are therefore expressible with respect to a standard conformal gauge in the familiar form $x^{\prime\prime\mu} - \ddot x^{\mu} = 0$. There is a certain analogy between the Einstein vacuum equations, which impose the vanishing of the trace $R_{\mu\nu}$ of the background spacetime curvature $R_{\lambda\mu}{}^{\rho}{}_{\nu}$, and the Dirac-Goto-Nambu equations, which impose the vanishing of the trace $K^{\nu}$ of the second fundamental tensor $K_{\lambda\mu}{}^{\nu}$. Moreover, just as it is useful to separate out the Weyl tensor [9], i.e. the trace free part of the background Riemann curvature, which is the only part that remains when the Einstein vacuum equations are satisfied, so also, analogously, it is useful to separate out the trace free part of the second fundamental tensor, namely the extrinsic conformation tensor [1], which is the only part that remains when equations of motion of the Dirac-Goto-Nambu type are satisfied.
Explicitly, the trace free extrinsic conformation tensor $C_{\mu\nu}{}^{\rho}$ of a d-dimensional imbedding is defined [1] in terms of its first and second fundamental tensors as
$$C_{\mu\nu}{}^{\rho} = K_{\mu\nu}{}^{\rho} - \frac{1}{d}\,\eta_{\mu\nu}K^{\rho}, \qquad C_{\nu}{}^{\nu\rho} = 0 .$$
Like the Weyl tensor $W_{\lambda\mu}{}^{\rho}{}_{\nu}$ of the background metric (whose definition is given implicitly by (13) below), this conformation tensor has the noteworthy property of being invariant with respect to conformal modifications of the background metric:
$$g_{\mu\nu} \to e^{2\alpha}g_{\mu\nu} \quad\Rightarrow\quad C_{\mu\nu}{}^{\rho} \to C_{\mu\nu}{}^{\rho}.$$
This is useful [10] for work like that of Vilenkin [11] in a Robertson-Walker cosmological background, which can be obtained from a flat spacetime by a conformal transformation for which $e^{\alpha}$ is a time dependent Hubble expansion factor.
Codazzi, Gauss, and Schouten identities
As the higher order analogue of (3) we can go on to introduce the third fundamental tensor [1], obtained by tangentially projected differentiation of the second, which by construction is obviously symmetric between the second and third indices and tangential on all the first three indices. In a spacetime background that is flat (or of constant curvature, as is the case for the DeSitter universe model) this third fundamental tensor is fully symmetric over all the first three indices, by what is interpretable as the generalised Codazzi identity.
In a background with arbitrary Riemann curvature $\mathcal R_{\lambda\mu}{}^{\rho}{}_{\sigma}$ the generalised Codazzi identity acquires an extra term involving the surface projection of this background curvature [1]. A script symbol $\mathcal R$ is used here in order to distinguish the (n-dimensional) background Riemann curvature tensor from the intrinsic curvature tensor of the (d-dimensional) worldsheet, to which the ordinary symbol R has already been allocated. For many of the applications that follow it will be sufficient just to treat the background spacetime as flat, i.e. to take $\mathcal R_{\sigma\tau}{}^{\beta}{}_{\alpha} = 0$.
For n > 2, the background curvature tensor is decomposable (if present) in terms of the background Ricci tensor and its scalar trace, and of its trace free, conformally invariant Weyl part $W_{\mu\nu}{}^{\rho}{}_{\sigma}$ (which can be non zero only for n ≥ 4), in the well known [9] form (13). In terms of the tangential projection of this background curvature, the corresponding internal curvature tensor of the worldsheet is given by that projection supplemented by terms quadratic in the second fundamental tensor; this is the translation into the present scheme of what is well known in other schemes as the generalised Gauss identity.
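For the reader's convenience, the decomposition invoked above as (13) is presumably the standard Weyl splitting of the Riemann tensor; in a common convention, whose index placement may differ slightly from that of the original, it reads

$$\mathcal R_{\mu\nu\rho\sigma} = W_{\mu\nu\rho\sigma} + \frac{2}{n-2}\Big(g_{\mu[\rho}\mathcal R_{\sigma]\nu} - g_{\nu[\rho}\mathcal R_{\sigma]\mu}\Big) - \frac{2}{(n-1)(n-2)}\,\mathcal R\,g_{\mu[\rho}g_{\sigma]\nu},$$

with $\mathcal R_{\mu\nu}$ the background Ricci tensor and $\mathcal R$ its scalar trace; the Weyl part $W_{\mu\nu\rho\sigma}$ is trace free on every pair of indices.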
The less well known analogue (attributable [9] to Schouten) for the (trace free, conformally invariant) outer curvature is expressible [1] in terms of the relevant projection of the background Weyl tensor. In a background that is flat or conformally flat (for which it is necessary, and for n ≥ 4 sufficient, that the Weyl tensor should vanish), the vanishing of the extrinsic conformation tensor $C_{\mu\nu}{}^{\rho}$ will therefore be sufficient (independently of the behaviour of the extrinsic curvature vector $K^{\mu}$) for the vanishing of the outer curvature tensor $\Omega_{\mu\nu}{}^{\rho}{}_{\sigma}$, which is the condition for it to be possible to construct fields of vectors $\lambda^{\mu}$ orthogonal to the surface and such as to satisfy the generalised Fermi-Walker propagation condition.

2 Laws of motion for a regular brane complex
Definition of brane complex
The term p-brane has come [12,13] to mean a dynamic system localised on a timelike support surface of dimension d = p+1, in a spacetime background of dimension n > p. Thus a 0-brane means a "point particle", and a 1-brane means a "string", while a 2-brane means what is commonly called a "membrane". At the upper extreme an (n-1)-brane is what is commonly referred to as a "medium" (as exemplified by a simple fluid). The codimension-1 (hypersurface supported) case of an (n-2)-brane (as exemplified by a cosmological domain wall) is what may be referred to as a "hypermembrane", while the codimension-2 case of an (n-3)-brane is what may analogously be referred to as a "hyperstring".

Figure 1 - Nautical archetype of a regular brane complex in which a 3-brane (the wind) acts (by pressure discontinuity) on a 2-brane (the sail) hemmed by three 1-branes (bolt ropes) terminating on 0-branes (shackles) that are held in place by three more (free) 1-branes (external stay/sheet ropes).
A set of branes forms a "brane complex" if the support surface of each (d-1)-brane member is a smoothly imbedded d-dimensional timelike submanifold whose boundary, if any, is a disjoint union of support surfaces of lower dimensional members of the set. For the complex to qualify as regular [1] it is required that a p-brane member can act directly only on a (p-1)-brane member on its boundary or on a (p+1)-brane member on whose boundary it is itself located, though it may be passively influenced by higher dimensional background fields.
Direct mutual interaction between branes whose dimensions differ by 2 or more would usually lead to divergences, symptomising the breakdown of a strict (meaning thin limit) brane description. To cure that properly, a more elaborate treatment allowing for finite thickness would be needed, but it may suffice to use a thin limit approximation [15] whereby the divergence is absorbed [16,17] in a renormalisation.
In the case of a brane complex, the total action $\mathcal I$ will be given as a sum of contributions from the various (d-1)-branes of the complex, each of which has its own Lagrangian d-surface density scalar ${}^{(d)}\overline{\mathcal L}$, say. Each supporting d-surface will be specified by a mapping $\sigma \to x\{\sigma\}$ giving the local background coordinates $x^{\mu}$ ($\mu = 0, \ldots, n-1$) as functions of local internal coordinates $\sigma^{i}$ ($i = 0, \ldots, d-1$). The corresponding d-dimensional surface metric tensor ${}^{(d)}\eta_{ij}$, induced as the pull back of the n-dimensional background spacetime metric $g_{\mu\nu}$, determines the surface measure ${}^{(d)}\mathrm d\mathcal S$, in terms of which the total action will be expressible as
$$\mathcal I = \sum_{d}\int {}^{(d)}\overline{\mathcal L}\;\,{}^{(d)}\mathrm d\mathcal S .$$
Conserved current and the stress-energy tensor
As well as on its own internal (d-1)-brane surface fields and their derivatives, and on those of any attached d-brane, each contribution ${}^{(d)}\overline{\mathcal L}$ will also depend (passively) on the spacetime metric $g_{\mu\nu}$ and perhaps on other background fields, of which the most common example is a Maxwellian gauge potential $A_{\mu}$. From the corresponding variation of the action one can read out, for each (d-1)-brane, the electromagnetic surface current vector ${}^{(d)}\overline j{}^{\mu}$, and also (since the metric is itself a background field) the surface stress-energy tensor. For any d-dimensional support surface ${}^{(d)}\mathcal S$, Green's theorem converts the surface divergence of such a current into an integral over the boundary. The Maxwell gauge invariance condition (independence of the gauge parameter $\alpha$) is thus seen to be equivalent to the electric current conservation condition, which means that the source of charge injection into any particular (p-1)-brane is the sum of the currents flowing in from the p-branes to which it is attached.
Force and the stress balance equation
The condition of being "Lagrangian" means that L δ is comoving as needed to be meaningful for fields with support confined to a particular brane. However for background fields one can also define an "Eulerian" variation, E δ , with respect to some appropriately fixed reference system, in which the infinitesimal displacement of the brane complex is specified by a vector field ξ µ . The difference will be given by is the Lie differentiation operator, which will be given for the relevant background fields by the familiar formulae ξ-LA µ = ξ ρ ∇ ρ A µ +A ρ ∇ µ ξ ρ , and ξ-Lg µν = 2∇ (µ ξ ν) .
In a fixed Eulerian background, the background fields will have Lagrangian variations given just by their Lie derivatives with respect to the displacement $\xi^{\mu}$. Subject to the internal field equations, the action variation $\delta\mathcal I$ due to the displacement of the branes will therefore be expressible as a sum of surface integrals involving the total force density acting on each brane. This total force density includes the Faraday-Lorentz contribution ${}^{(p)}\overline f{}^{\rho} = F^{\rho\mu}\,{}^{(p)}\overline j{}_{\mu}$ from the background, while on each (p-1)-brane there is also the contact force exerted by attached p-branes, in which it is to be recalled that, on the (p+1)-dimensional support surface of each attached p-brane, ${}^{(d)}\lambda^{\mu}$ is the unit vector directed normally towards the bounding (p-1)-brane.
The tangential force balance equations will hold as identities when the internal field equations are satisfied (because a surface tangential displacement has no effect). The non-redundant information governing the extrinsic motion of a (d-1)-brane is given just by the orthogonal part. Integrating by parts, and using the fact that the surface gradient of the rank-(n-d) orthogonal projector ${}^{(p)}\perp^{\mu}{}_{\nu}$ is given in terms of the second fundamental tensor ${}^{(p)}K_{\mu\nu}{}^{\rho}$ of the d-surface, the extrinsic equations of motion are finally obtained in the form
$$\overline T{}^{\mu\nu}K_{\mu\nu}{}^{\rho} = \perp^{\rho}{}_{\mu}\,\overline f{}^{\mu},$$
where $\overline T{}^{\mu\nu}$ is the surface stress-energy tensor and $\overline f{}^{\mu}$ the total force density. It is to be remarked that this is valid not just for a conservative force such as the electromagnetic example considered above, but also for dissipative forces such as frictional drag [10] by a relatively moving background medium.
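As an elementary consistency check (not spelled out at this point in the text), note that for a brane of the Dirac-Goto-Nambu kind, whose surface Lagrangian is simply a constant, say $-m^{d}$, the surface stress tensor obtained by metric variation is proportional to the first fundamental tensor, so that in the absence of external forces the extrinsic equation of motion above collapses to the harmonic condition quoted in the first section:

$$\overline T{}^{\mu\nu} = -m^{d}\,\eta^{\mu\nu} \quad\Longrightarrow\quad \overline T{}^{\mu\nu}K_{\mu\nu}{}^{\rho} = -m^{d}\,K^{\rho} = 0 \;\Longleftrightarrow\; K^{\rho} = 0 .$$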
3 Canonical Liouville and symplectic currents
Canonical formalism for Branes
For the study of small perturbations, and particularly for the systematic derivation of conservation laws associated with symmetries, it is useful to employ a treatment of the canonical kind that was originally developped in the context of field theory (as a step towards quantisation) by Witten, Zuckerman, and others [18,19,20,21,22,23,24]. This section describes the generalisation of this procedure to brane mechanics in the manner initiated by Cartas-Fuentevilla [25,26] and developed in collaboration with Dani Steer [27]. After a general presentation, including a review of the relationships between the various (Lagrangian, Eulerian and other) relevant kinds of variation, the procedure is illustrated by application to a particular category that includes the case of branes of purely elastic type.
Consider a generic conservative p-brane model whose mechanical evolution is governed by an action integral of the form
$$\mathcal I = \int \overline{\mathcal L}\;\mathrm d^{\,p+1}\sigma$$
over a supporting worldsheet with internal co-ordinates $\sigma^{i}$ ($i = 0, 1, \ldots, p$) and induced metric $\eta_{ij} = g_{\mu\nu}x^{\mu}_{\ ,i}x^{\nu}_{\ ,j}$, in a background with coordinates $x^{\mu}$ ($\mu = 0, 1, \ldots, n-1$, $n \ge p+1$) and (flat or curved) space-time metric $g_{\mu\nu}$. The relevant Lagrangian scalar density $\overline{\mathcal L} = \|\eta\|^{1/2}L$ is given as a function of a set of field components $q^{A}$ - including the background coordinates - and of their surface derivatives $q^{A}_{\ ,i} = \partial_{i}q^{A} = \partial q^{A}/\partial\sigma^{i}$. The relevant field variables $q^{A}$ can be of internal or external kind, the most obvious example of the latter kind being the background coordinates $x^{\mu}$ themselves.
The generic action variation
$$\delta\overline{\mathcal L} = \overline{\mathcal L}_{A}\,\delta q^{A} + \overline p{}^{\,i}_{A}\,\delta q^{A}_{\ ,i}$$
specifies the partial derivative components $\overline{\mathcal L}_{A}$ and the corresponding generalised momentum components $\overline p{}^{\,i}_{A}$. The variation principle characterises dynamically admissible "on shell" configurations by the vanishing of the Eulerian derivative
$$\frac{\delta\overline{\mathcal L}}{\delta q^{A}} \equiv \overline{\mathcal L}_{A} - \partial_{i}\,\overline p{}^{\,i}_{A} = 0 .$$
In terms of this Eulerian derivative, the generic Lagrangian variation takes the form of the Eulerian derivative contracted with $\delta q^{A}$ plus a surface divergence, and there will be a corresponding pseudo-Hamiltonian scalar density. (The covariance of such a pseudo-Hamiltonian distinguishes it from the ordinary kind of Hamiltonian, which depends on the introduction of some preferred time foliation.) For an on-shell configuration, i.e. when the dynamical equations are satisfied, the Lagrangian variation will reduce to a pure surface divergence,
$$\delta\overline{\mathcal L} \;\simeq\; \partial_{i}\big(\overline p{}^{\,i}_{A}\,\delta q^{A}\big),$$
and there is a corresponding on-shell pseudo-Hamiltonian variation.
Symplectic structure
The generic first order variation of the Lagrangian can be written in terms of the generalised Liouville 1-form (on the configuration space cotangent bundle) constructed from the momenta and the field variations. Now consider a pair of successive independent variations $\acute\delta$, $\grave\delta$, which give a second order variation; using the commutation relation $\acute\delta\grave\delta = \grave\delta\acute\delta$ one obtains the symplectic 2-form (on the configuration space cotangent bundle) as the antisymmetrised double variation of the Liouville form. For an on-shell perturbation the Liouville form yields a surface current, while for a pair of on-shell perturbations the corresponding symplectic surface current is conserved. The foregoing surface current conservation law is expressible in shorthand by noting that the closed (since manifestly exact) symplectic 2-form (39) is specified in concise wedge product notation as the exterior derivative of the Liouville 1-form. Some authors prefer to use an even more concise notation system in which it is not just the relevant distinguishing (in our case acute and grave accent) indices that are omitted but even the wedge symbol $\wedge$ that indicates the antisymmetrised product relation. However such an extreme level of abbreviation is dangerous [25] in contexts in which symmetric products are also involved.
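As a reader's aid, the construction sketched in this subsection can be summarised compactly, under the assumption that the standard Witten-Zuckerman form of the currents [18-24] is the one intended, as

$$\vartheta^{i} = \overline p{}^{\,i}_{A}\,\delta q^{A}, \qquad \varpi^{i} = \acute\delta\,\overline p{}^{\,i}_{A}\,\grave\delta q^{A} - \grave\delta\,\overline p{}^{\,i}_{A}\,\acute\delta q^{A},$$

so that on shell $\delta\overline{\mathcal L} = \partial_{i}\vartheta^{i}$, whence $\partial_{i}\vartheta^{i} = 0$ for a symmetry generating variation (one with $\delta\overline{\mathcal L} = 0$) and $\partial_{i}\varpi^{i} = 0$ for any pair of on-shell perturbations.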
Translation into strictly tensorial form
To avoid the gauge dependence involved in the use of auxiliary structures such as local frames and internal surface coordinates, by working [28] just with quantities that are strictly tensorial with respect to the background space, one needs to replace the surface current densities, whose components $\vartheta^{i}$ and $\varpi^{i}$ depend on the choice of the internal coordinates $\sigma^{i}$, by vectorial quantities with strictly tensorial background coordinate components and with strictly scalar divergences. In terms of the surface projected covariant differentiation operator defined in terms of the fundamental tensor $\eta^{\mu\nu} = \eta^{ij}x^{\mu}_{\ ,i}x^{\nu}_{\ ,j}$ by $\overline\nabla_{\nu} = \eta^{\mu}{}_{\nu}\nabla_{\mu}$, one thus obtains a Liouville current conservation law of the form $\overline\nabla_{\mu}\Theta^{\mu} = 0$ for any symmetry generating perturbation, i.e. for any infinitesimal variation $\delta q^{A}$ such that $\delta\overline{\mathcal L} = 0$. Similarly a symplectic current conservation law of the same divergence form will hold for any pair of perturbations that are on-shell, i.e. such that $\delta(\delta\overline{\mathcal L}/\delta q^{A}) = 0$.
Application to hyperelastic case
In typical applications, the relevant set of configuration components $q^{A}$ will include a set of brane field components $\varphi^{\alpha}$ as well as the background coordinates $x^{\mu}$, so that in terms of the displacement vector $\xi^{\mu} = \delta x^{\mu}$ the Liouville current can be written with the original momentum components replaced by the corresponding background tensorial momentum variables, written $\pi^{\,\mu}_{\alpha}$ for the brane fields. The hyperelastic category [29] (generalising the case of an ordinary elastic solid, which includes the special case of an ordinary barotropic perfect fluid) consists of brane models in which - with respect to a suitably comoving internal reference system $\sigma^{i}$ - there are no independent surface fields at all, meaning that the $\varphi^{\alpha}$ and the $p^{\,i}_{\alpha}$ are absent, and in which the only relevant background field is the metric $g_{\mu\nu}$, which is specified as a function of the external coordinates $x^{\mu}$. In any such case, the generic variation of the Lagrangian is determined just by the surface stress momentum energy density tensor $\overline T{}^{\mu\nu}$ according to the standard prescription whereby $\overline T{}^{\mu\nu}$ is specified by partial derivation of the action density with respect to the metric.
In a fixed background (i.e. in the absence of any Eulerian variation of the metric) the Lagrangian variation of the metric will be given by ${}^{\rm L}\delta g_{\mu\nu} = \vec\xi\mathcal L g_{\mu\nu} = 2\nabla_{(\mu}\xi_{\nu)}$. Comparing this with the canonical prescription $\delta\overline{\mathcal L} = \overline{\mathcal L}_{\mu}\xi^{\mu} + \overline p{}^{\,i}_{\mu}\xi^{\mu}_{\ ,i}$, with $\xi^{\mu} = \delta x^{\mu}$, shows that the relevant partial derivatives will be given by corresponding (non-tensorial) formulae. It can thus be seen that in the hyperelastic case the canonical momentum tensor $\pi^{\mu}{}_{\nu}$ and the Liouville current $\Theta^{\nu}$ will be given just in terms of the surface stress tensor $\overline T{}^{\mu\nu}$ by very simple formulae. In order to proceed, we must consider the second order metric variation, whereby (following Friedman and Schutz [30]) the hyper-Cauchy tensor (generalised elasticity tensor) $\overline C{}^{\mu\nu\rho\sigma} = \overline C{}^{\rho\sigma\mu\nu}$ is specified [31] in terms of Lagrangian variations by a partial derivative relation, from which the symplectic current is then obtained.

4 Brane perturbation by gravitational radiation
Generic case
A background metric perturbation $\delta g_{\mu\nu} = h_{\mu\nu}$ will provide extra Lagrangian and stress contributions $\delta\overline{\mathcal L} = \tfrac12\overline T{}^{\rho\sigma}h_{\rho\sigma}$ and $\delta\overline T{}^{\mu\nu} = \overline C{}^{\mu\nu\rho\sigma}h_{\rho\sigma}$, whence a corresponding force increment $\delta f_{\mu} = \tfrac12\overline T{}^{\nu\sigma}\nabla_{\mu}h_{\nu\sigma} - \nabla_{\nu}\overline T{}^{\nu\sigma}h_{\sigma\mu}$. The effect of this is expressible as the inclusion of an extra term $f_{\rm G}^{\ \mu}$ on the right of the original force balance equation, as expressed in terms of the unperturbed values of the metric $g_{\mu\nu}$, stress tensor $\overline T{}^{\mu\nu}$, and force density $\overline f{}^{\mu}$, so as to obtain a perturbed force balance of the form
$$\overline T{}^{\mu\nu}K_{\mu\nu}{}^{\rho} = \perp^{\rho}{}_{\mu}\big(\overline f{}^{\mu} + f_{\rm G}^{\ \mu}\big).$$
Regularisation of self-interaction
To treat such self-interaction one must face the problem that the regularity condition (see Figure 1) is violated whenever a brane of dimension d = p+1 acts on itself via fields in a background of dimension n ≥ d+2. To cure this, a physically realistic regularisation involves replacing the infinitely thin worldsheet by a support of finite thickness. The divergent self-interaction fields such as $A_{\mu}$ and $h_{\mu\nu}$ are then replaced by regularised averages $\widehat A_{\mu}$ and $\widehat h_{\mu\nu}$ whose dominant contribution is proportional to the relevant source [16,17]. This means $\widehat A_{\mu} \propto j_{\mu}$ and $\widehat h_{\mu\nu} \propto (n-2)\overline T_{\mu\nu} - \overline T{}^{\sigma}{}_{\sigma}\,g_{\mu\nu}$, which for a Nambu-Goto hyperstring, p = n-3, gives $\widehat h_{\mu\nu} \propto (p+1)\overline T{}^{\perp}_{\mu\nu}$, with a proportionality coefficient that diverges as the thickness tends to zero. On such worldsheet confined fields, the ordinary gradient operator $\overline\nabla_{\nu}$ must be replaced by the corresponding regularised operator $\widehat\nabla_{\nu}$, so that for example the field $F_{\mu\nu} = 2\nabla_{[\mu}A_{\nu]}$ will have the regularised average $\widehat F_{\mu\nu} = 2\widehat\nabla_{[\mu}\widehat A_{\nu]}$, as needed for the electromagnetic self-interaction force density $\overline f{}^{\rho} = \widehat F{}^{\rho\mu}j_{\mu}$. The required result, giving zero gravitational contribution, $f_{\rm G}^{\ \mu} = 0$, for Nambu-Goto hyperstrings (including [33] the ordinary string case p=1 with n=4) has been shown [15] to be provided generally by the conveniently simple and easily memorable formula
$$\widehat\nabla_{\nu} = \overline\nabla_{\nu} + \tfrac12 K_{\nu}.$$
5 Vorton equilibrium states of elastic string loops
The category of simple elastic string models
For any string model the fundamental tensor of the 2-dimensional worldsheet will be expressible in terms of an orthonormal diad of timelike and spacelike vectors $u^{\nu}$, $\bar u^{\nu}$ as $\eta^{\mu}{}_{\nu} = -u^{\mu}u_{\nu} + \bar u^{\mu}\bar u_{\nu}$. There will generically be a preferred diad with respect to which the symmetric surface stress energy tensor will be expressible as
$$\overline T{}^{\mu\nu} = U\,u^{\mu}u^{\nu} - T\,\bar u^{\mu}\bar u^{\nu},$$
where T is the string tension and U is the surface energy density, which, in the elastic case, is determined as a function of T by an equation of state.
In addition to the extrinsic (transversely polarised) "wiggle" perturbations which, as in any string model, travel with a characteristic velocity $v = \sqrt{T/U}$, such a model has perturbation modes of only one other kind: these are sound type (longitudinal compression) "woggle" modes, which propagate relative to the locally preferred frame with speed given by the formula $v_{\rm L} = \sqrt{-\,\mathrm dT/\mathrm dU}$. A particularly important special case is that of models of the integrable transonic type [35], for which the "wiggle" and "woggle" speeds coincide, which occurs when the equation of state is specified simply by fixing the value of the product UT. The kind of model appropriate for representing such familiar technical applications as bow strings, or the strings of musical instruments, will generally be of subsonic type, meaning that the wiggle speed v is less than the sonic speed $v_{\rm L}$, while on the other hand it has been shown by Peter [36] that models of supersonic type will commonly be needed for the representation of cosmic strings of the conducting vacuum vortex type envisaged by Witten [37].
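As a one-line check of the transonic characterisation just given (added here for convenience): if the product UT is held fixed along the equation of state, then

$$U\,T = \text{const} \;\;\Rightarrow\;\; T\,\mathrm dU + U\,\mathrm dT = 0 \;\;\Rightarrow\;\; v_{\rm L}^{2} = -\frac{\mathrm dT}{\mathrm dU} = \frac{T}{U} = v^{2},$$

so the longitudinal ("woggle") and transverse ("wiggle") characteristic speeds indeed coincide.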
A model of any such elastic type is specifiable in variational form by a string Lagrangian L depending only on the magnitude of the gradient of some stream function ϕ (which in the Witten case represents the phase of a complex scalar field). This means that the string model is characterised by a single-variable equation of state giving L as a function of the scalar w = η^{ij} ϕ_{,i} ϕ_{,j}. It is useful [14,38] to introduce the corresponding adjoint formulation in terms of the quantity Λ = L + wκ, with κ = −2 dL/dw. When w < 0, one finds that the tension and energy density will be given by T = −L, U = −Λ, while when w > 0 they will be given by T = −Λ, U = −L. In all cases the phase gradient is proportional to a surface current, c^µ = x^µ_{,i} c^i, with c^i = κ η^{ij} ϕ_{,j} = −∂L/∂ϕ_{,i}, that has the property of being conserved, (√(−η) c^i)_{,i} = 0, whenever there is no external force, so that the equation of motion of the worldsheet reduces to a correspondingly simple form. When he originally introduced the concept of conducting cosmic strings [37], Witten suggested that a simple linear action formula, L = −m²(1 + δ_*² w), involving just a single extra parameter (namely a lengthscale δ_*), might be used as a good approximation, at least in the weak current limit for which w is sufficiently small. However it subsequently became clear that such a linear formula is inadequate even in the weak current limit, since it implies that wiggle propagation would always be subsonic, v² < v_L², whereas detailed examination of the relevant kind of vacuum vortex by Peter [36] revealed that the wiggle propagation in such a case would typically be supersonic, v² > v_L². As a more satisfactory replacement for Witten's direct linearity ansatz, it has been found [39,40] that, at the cost of introducing one more mass scale m_⋆, a reasonably good representation is obtainable by using an ansatz of logarithmic form L = −m² − (1/2) m_⋆² ln{1 + δ_⋆² w}.
Stationary string states in flat background
We shall conclude this overview by considering what can be said about stationary equilibrium states, as characterised, in a flat background, by a worldsheet that is tangent to a timelike unit static Killing vector k^µ satisfying ∇_µ k_ν = 0. In such a worldsheet there will also be an orthogonal (and therefore spacelike) unit tangent vector e^µ satisfying the invariance condition k^ν ∇_ν e^µ = 0. For such a worldsheet, the first fundamental tensor will be given by η^{µν} = −k^µ k^ν + e^µ e^ν, while in terms of the curvature vector, K^µ = e^ν ∇_ν e^µ, the second fundamental tensor will be given by K_{µν}{}^ρ = e_µ e_ν K^ρ.
Within the worldsheet, the preferred timelike eigenvector u^µ of the stress energy tensor, as characterised by the relation T^µ_ν u^ν = −U u^µ, will be expressible in terms of k^µ and e^µ in a form that defines the relative flow velocity v. Under these conditions, the free dynamical equation (5.1) can be seen to reduce to the simple form (U − v² T) K^ρ = 0.
For an infinitely long string this equation can of course be solved in a trivial manner by choosing a configuration that is straight, which means K^ρ = 0, in which case the value of v is unrestricted. However for a finite closed loop the curvature cannot vanish everywhere, and where K^ρ is non-zero the only way of satisfying the extrinsic equilibrium condition (5.2) is for the relative flow velocity to be the same as the relevant wiggle propagation speed: v = √(T/U), while to satisfy the intrinsic (current conservation) equilibrium condition it is trivially sufficient (and generically necessary) for the value of this speed to be uniform. Provided this centrifugal equilibrium condition is satisfied, there is no restriction on the curvature, which need not be uniform: thus the equilibrium configuration of the string loop need not be circular, but may have an arbitrary shape. After thus obtaining the generic condition for string loop equilibrium, the next problem is to find which of such vorton equilibrium states are stable. This question has so far been dealt with [5,41] only in the simple case of equilibrium configurations that are circular.
Stability criterion for circular vorton states
It is easy to see that the stability of a uniform circular equilibrium state of an elastic string loop in a flat background will depend just on the extrinsic (wiggle type) and longitudinal (sound type) perturbation speeds, v and v_L. Moreover it is fairly easy to show [5] that such a state will always be stable in the subsonic case, v² ≤ v_L², which is what is most likely to be relevant in a terrestrial engineering context. Even in the supersonic case, it has been shown [5] that monopole (n = 0) and dipole (n = 1) perturbation modes are always stable. However instability may occur for higher modes, n ≥ 2, for which, in a state with radius a, the eigenfrequency ω is given by the solution of an equation of the cubic form x³ + b_2 x² + b_1 x + b_0 = 0 for the quantity x = aω/v_+ + n, where v_+ = 2v/(1 + v²) is the relative velocity of orthogonally polarised forward propagating wiggles, and the coefficients of the cubic are given by b_2 = Γ − 2 − ξ, b_1 = −2Γ + (1 + ξ)(1 − n⁻²), b_0 = Γ(1 − n⁻²), using the notation ξ = Γ(1 − v_+²). The stability criterion, for all the roots to be real, is the positivity of a discriminant ∆. Figure 2 shows the zones of negativity (instability) for the lowest relevant mode numbers, n = 2, 3, ..., as obtained by Martin [41]. In the ultrarelativistic limit v → 1, v_L → 1 that is relevant for weak currents in conducting cosmic strings, one gets ξ → 0 and ∆ → 4n⁻²(Γ + 1 + n⁻¹)²(Γ + 1 − n⁻¹)², which is strictly positive (implying stability) almost always, the unstable exceptions being on the lines converging in the plot to the limit point v² = 1, v_L² = 1 with gradient given in terms of the corresponding mode number by 1/(2n − 1).
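For concreteness, the real-root condition for a cubic of this form can be checked numerically. The sketch below uses the standard discriminant of a monic cubic and treats Γ, ξ and n as already-known inputs; it only illustrates the criterion stated above and does not reproduce the published stability plot.

```python
def cubic_all_roots_real(b2, b1, b0):
    """Discriminant test for x^3 + b2*x^2 + b1*x + b0 = 0.

    The cubic has three real roots (counted with multiplicity)
    exactly when its discriminant is non-negative.
    """
    disc = (18 * b2 * b1 * b0 - 4 * b2**3 * b0
            + b2**2 * b1**2 - 4 * b1**3 - 27 * b0**2)
    return disc >= 0


def circular_vorton_mode_stable(gamma, v_plus, n):
    """Stability of mode n for a circular loop, using the coefficients
    quoted in the text (gamma plays the role of Gamma, v_plus of v_+)."""
    xi = gamma * (1.0 - v_plus**2)
    b2 = gamma - 2.0 - xi
    b1 = -2.0 * gamma + (1.0 + xi) * (1.0 - n**-2)
    b0 = gamma * (1.0 - n**-2)
    return cubic_all_roots_real(b2, b1, b0)
```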
The upshot is that although some circular vorton states are unstable, there are plenty more -the ones that would presumably be selected under natural conditions -that are stable, at least with respect to macroscopic string perturbations. It is however to be remarked that -since it deals only with the thin string limit -the kind of analysis described here can not resolve the (sensitively model dependent) issue of stability with respect to quantum effects or other processes involving the microscopic internal structure of the vacuum vortex or whatever else may constitute the string. | 2011-12-09T13:16:32.000Z | 2011-12-09T00:00:00.000 | {
"year": 2011,
"sha1": "c860370ccede4312bfdd364b7197c410befb21d6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c860370ccede4312bfdd364b7197c410befb21d6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
8991616 | pes2o/s2orc | v3-fos-license | Normalized Accessor Variety Combined with Conditional Random Fields in Chinese Word Segmentation
The word is the basic unit in natural language processing (NLP), as it is the lexical level upon which further processing rests. The lack of word delimiters such as spaces in Chinese texts makes Chinese word segmentation (CWS) an interesting yet challenging issue. This paper describes the in-depth research following our participation in the Fourth International Chinese Language Processing Bakeoff. We incorporate unsupervised segmentation into Conditional Random Fields (CRFs) for the purpose of dealing with unknown words. Normalization is carefully introduced in order to cater to the problem of small data size. Experiments on the CWS corpora from Bakeoff-4 present results comparable with state-of-the-art performance.
Introduction
Words are the basic linguistic units of natural language. However, Chinese texts are character based, not word based. Thus, the identification of lexical words or the delimitation of words in running texts is a prerequisite of NLP.
Chinese word segmentation can be cast as a simple and effective formulation of character sequence labeling. A prevailing technique for this kind of labeling task is Conditional Random Fields (CRFs) [1], following the current trend of applying machine learning as a core technology in the field of natural language processing. Based on the conditional dependency assumption, CRFs can achieve strong performance on known words (those that exist in both the testing and training data), yet further improvement for CWS systems is usually limited by the comparatively large fraction of unknown words (those that exist only in the testing data). (Footnote 1: The Fourth International Chinese Language Processing Bakeoff & the First CIPS Chinese Language Processing Evaluation (Bakeoff-4), at: http://www.chinalanguage.gov.cn/bakeoff08/bakeoff-08_basic.html)
Regarding this nontrivial issue, in this paper we provide a semi-supervised methodology that incorporates an unsupervised method into supervised segmentation, following the in-depth research after our participation in Bakeoff-4. Catering to the common case of limited training data, normalization is introduced in the unsupervised phase.
The rest of the paper is organized as follows: Section 2 describes the framework of our CWS system in detail. Section 3 discusses the unsupervised segmentation method based on a modified version of the target function. Section 4 presents and analyzes our experimental results. Finally, we conclude the work in Section 5.
Framework of CWS
Our framework of CWS utilizes Conditional Random Fields (CRFs) as the basic statistical model. The Tag set and features used to train CRFs are also introduced briefly in this section.
Conditional random fields
Conditional random fields (CRFs) for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position [2]. A CRF is an undirected graph G = (V, E), where V is the set of random variables Y = {Y_i | 1 ≤ i ≤ n}, one for each of the n tokens in an input sequence, and E = {(Y_{i−1}, Y_i) | 2 ≤ i ≤ n} is the set of (n − 1) edges forming a linear chain. Following [1], the conditional probability of the state sequence (s_1, s_2, ..., s_n) given the input sequence (o_1, o_2, ..., o_n) is computed from the weighted feature sums, where f_k is an arbitrary feature function and λ_k is the weight for each feature function; the weights can be optimized through iterative algorithms like GIS [3]. Recent research indicates that quasi-Newton methods such as L-BFGS [4] are more effective than GIS.
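As a concrete illustration of the conditional probability above, the following self-contained sketch scores one label sequence under a linear-chain model. The per-position and transition scores stand in for the weighted feature sums over λ_k f_k; the array shapes and values are illustrative rather than taken from the paper.

```python
import numpy as np

def crf_log_prob(emit, trans, states):
    """Log-probability of a state sequence under a linear-chain CRF.

    emit:   (n, K) array of per-position state scores (sum of weighted features)
    trans:  (K, K) array of transition scores, trans[a, b] for s_{i-1}=a -> s_i=b
    states: length-n list of state indices
    """
    n, K = emit.shape
    # unnormalized score of the chosen path
    score = emit[0, states[0]]
    for i in range(1, n):
        score += trans[states[i - 1], states[i]] + emit[i, states[i]]
    # forward recursion for the log partition function Z(o)
    alpha = emit[0].copy()
    for i in range(1, n):
        alpha = emit[i] + np.logaddexp.reduce(alpha[:, None] + trans, axis=0)
    log_z = np.logaddexp.reduce(alpha)
    return score - log_z
```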
Tag set
As justified in [5,6], a 6-tag set enables the CRF learning of character tagging to achieve better segmentation performance than other tag sets. So we adopt this tag set in our CWS framework, namely B, B2, B3, M, E and S, which respectively indicate the start of a word, the second position within a word, the third position within a word, other middle positions within a word, the end of a word, and a single-character word. An example is illustrated in Table 1.
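A small helper that maps a gold segmentation onto this 6-tag scheme might look as follows (a sketch; the tag names follow the list above, with S used for single-character words).

```python
def words_to_tags(words):
    """Convert a segmented sentence (list of words) into per-character
    B/B2/B3/M/E/S tags for CRF training."""
    chars, tags = [], []
    for word in words:
        n = len(word)
        if n == 1:
            word_tags = ["S"]
        elif n == 2:
            word_tags = ["B", "E"]
        elif n == 3:
            word_tags = ["B", "B2", "E"]
        else:
            word_tags = ["B", "B2", "B3"] + ["M"] * (n - 4) + ["E"]
        chars.extend(word)
        tags.extend(word_tags)
    return chars, tags
```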
Table 2. Feature templates.
Type: Feature
Unigram: C_n (n = -2, -1, 0, 1, 2)
Bigram: C_n C_{n+1} (n = -2, -1, 0, 1)
Date, Digit, Letter: T_{-1} T_0 T_1

Table 2 illustrates the features we used in our CWS systems, where C represents a character and the subscript n indicates its relative position taking the current character as the reference; Pun derives from a property of the current character: whether it is a punctuation mark; T describes the type of the character: numerical characters belong to class 1, characters whose meanings are date and time represent class 2, English letters represent class 3, punctuation marks represent class 4, while other characters represent class 5. In addition, the tag bi-gram feature is also employed.
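The templates in Table 2 can be instantiated per character position roughly as follows. This is only a sketch: the concrete character sets used by char_type for the date and punctuation classes, and the padding symbols, are illustrative choices rather than the paper's exact definitions.

```python
def char_type(c):
    # class 1: digits; class 2: date/time characters; class 3: letters;
    # class 4: punctuation; class 5: everything else
    if c.isdigit():
        return "1"
    if c in "年月日时分秒":
        return "2"
    if c.isalpha() and c.isascii():
        return "3"
    if c in "，。、？！：；“”（）《》":
        return "4"
    return "5"

def features_at(chars, i):
    padded = ["<S>", "<S>"] + list(chars) + ["</S>", "</S>"]
    j = i + 2  # index into the padded sequence
    feats = {}
    for n in range(-2, 3):                          # unigram C_n
        feats[f"C{n}"] = padded[j + n]
    for n in range(-2, 2):                          # bigram C_n C_{n+1}
        feats[f"C{n}C{n+1}"] = padded[j + n] + padded[j + n + 1]
    feats["Pun"] = char_type(padded[j]) == "4"      # punctuation flag
    feats["T-1T0T1"] = "".join(char_type(padded[j + k]) for k in (-1, 0, 1))
    return feats
```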
Unsupervised segmentation
Although the CRF model can tackle known words accurately based on the information learned from the training data, the segmentation of unknown words rests on reliable statistical information derived from a large amount of running text. Thus, we resort to an unsupervised segmentation method to deal with these unknown words. In general, unsupervised segmentation assumes no label information for training. It rests on statistical information over the whole corpus to identify potential words, each assigned a goodness score to indicate its credibility. In this section we will introduce an existing unsupervised segmentation criterion, whose segmentation results are encoded into additional features to facilitate supervised learning for CWS. To make it more reliable, a normalization strategy is involved.
Accessor variety
In Chinese text, each substring of a whole sentence can potentially form a word, but only some substrings carry clear meanings and thus form a correct word. Accessor variety (AV), proposed in [7], is used to evaluate how independent a string is from the rest of the text. The more independent it is, the higher the possibility that it is a potential word carrying a certain kind of meaning. The accessor variety (AV value) of a string s is defined as AV(s) = min{Lav(s), Rav(s)}, where Lav(s) is the left accessor variety of s, defined as the number of its distinct predecessors plus the number of distinct sentences in which s appears at the beginning, while Rav(s) is the right accessor variety of s, defined as the number of its distinct successors plus the number of distinct sentences in which s appears at the end.
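In code, the left and right accessor varieties can be collected from an unlabelled corpus with simple dictionaries of predecessor/successor sets, along the lines of the sketch below; the min combination follows the AV definition above, and the maximum substring length is a parameter.

```python
from collections import defaultdict

def accessor_variety(sentences, max_len=6):
    """Return {substring: AV value} where AV(s) = min(Lav(s), Rav(s))."""
    left, right = defaultdict(set), defaultdict(set)
    begin, end = defaultdict(set), defaultdict(set)
    for idx, sent in enumerate(sentences):
        n = len(sent)
        for i in range(n):
            for j in range(i + 1, min(i + max_len, n) + 1):
                s = sent[i:j]
                if i > 0:
                    left[s].add(sent[i - 1])   # distinct predecessor characters
                else:
                    begin[s].add(idx)          # s starts this sentence
                if j < n:
                    right[s].add(sent[j])      # distinct successor characters
                else:
                    end[s].add(idx)            # s ends this sentence
    av = {}
    for s in set(left) | set(right) | set(begin) | set(end):
        lav = len(left[s]) + len(begin[s])
        rav = len(right[s]) + len(end[s])
        av[s] = min(lav, rav)
    return av
```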
Unsupervised segmentation
Given the formula for calculating the AV value of a certain string within a sentence, the segmentation problem is then cast as an optimization problem to maximize the target function of the AV value over all word candidates in a sentence. For the sake of convenience, we use a segmentation to denote a segmented sentence, a segment to denote a continuous substring in the segmentation, and f to denote the target function. We use s to represent a string (e.g. a sentence), S to represent a segmentation of s, n to represent the number of characters in s, and m to denote the number of segments in S. The sentence s can be displayed as the concatenation of n characters, and S as the concatenation of m segments: s = c_1 c_2 c_3 ... c_i ... c_n and S = w_1 w_2 w_3 ... w_i ... w_m, where c_i stands for a character and w_i stands for a segment.
The target function f follows [8]. Given a target function f and a particular sentence s, we need to choose the segmentation that maximizes the value of f(S) over all possible segmentations. In the formulation of the per-segment function f(w) we consider two factors: one is the segment length, denoted as |w|, and the other is the AV value of a segment, denoted as AV(w). Then, f(w) can be formulated as a function of |w| and AV(w), so the target function can be regarded as a choice of normalization for AV(w) to balance the segment length and the AV value of each segment. Theoretically, the choice of f(w) is arbitrary; among the most representative types of functions (namely polynomial, exponential and logarithmic functions), we choose a polynomial function for f(w) (hereafter referred to as AV), since it proves to be the best in our CWS system; in its definition, c and d are integer parameters that are used to define the target function f(w), whose performance has been justified in [8].
As the training data is usually rather limited, there is a great chance that fluctuation exists in the AV value of a string consisting of an extreme number of characters; that is to say, there should be a disparity between dealing with strings with very few characters and those with many more characters when calculating AV values. Such fluctuation may deteriorate the reliability of the AV value: a single-character candidate, such as a stop word or interrogative marker, may receive a comparatively low AV value even though treating it as an isolated word is actually much better, while a multi-character potential word which carries no practical meaning is highly likely to obtain a relatively high AV value just because there is a high co-occurrence frequency among those characters. Unfortunately, both of these flaws inherent in formula (4) are overlooked in both [6] and [8], or at least not mentioned in detail. To deal with this special case, as well as to alleviate the fluctuation in AV values, we introduce a normalized version of the formulation function f_N(w) (hereafter referred to as NAV) for accessor variety, given by formula (5): a real-valued normalizer, named Norm, is involved in (4) to obtain (5). The modified formulation function f_N is based on the following consideration: on the one hand, when |w| is large, unless its accessor variety is relatively high, it should not be considered as a potential word, so a low value is assigned to the current segmentation choice; on the other hand, when |w| is small, unless its accessor variety is also relatively low, it still enjoys high favor, and the current segmentation choice receives a comparably high value accordingly. This measure coincides with that proposed in [8], with the advantage that no special treatment of single-character or multi-character candidates is needed.
With all the information above prepared, we can compute f(S) for a given sentence s. Since the value of each segment can be computed independently from the other segments in S, f(S) can be computed using a dynamic programming technique whose time complexity is linear in the sentence length. Let us use f_i to denote the optimal target function value for the sub-sentence c_1 c_2 ... c_i and w_{j...i} to denote the segment c_{j+1} c_{j+2} ... c_i (for j ≤ i). Then we have the dynamic equations f_0 = 0 and f_i = max_{j < i} { f_j + f(w_{j...i}) }.
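Since formula (4) itself is not reproduced above, the sketch below treats the per-segment score as a pluggable function; the example score |w|^c · AV(w)^d (and any Norm-shifted NAV variant plugged in its place) is only one plausible reading of the polynomial form, while the recursion is exactly the dynamic programme f_i = max_j { f_j + f(w_{j...i}) }.

```python
def best_segmentation(sent, av, score, max_len=6):
    """Dynamic programme over per-segment scores.

    sent:  the input sentence (string of characters)
    av:    dict of AV values, e.g. from accessor_variety()
    score: callable score(segment, av_value) -> float
    Returns the segmentation maximising the summed score.
    """
    n = len(sent)
    f = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    f[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            w = sent[j:i]
            cand = f[j] + score(w, av.get(w, 0))
            if cand > f[i]:
                f[i], back[i] = cand, j
    # recover the segments by following the back-pointers
    segs, i = [], n
    while i > 0:
        segs.append(sent[back[i]:i])
        i = back[i]
    return list(reversed(segs))

# One possible polynomial reading of f(w), with c = 1 and d = 2 as in the text:
def av_score(w, av_value, c=1, d=2):
    return (len(w) ** c) * (av_value ** d)
```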
AV feature
Having nailed down the definition of accessor variety and the target function, we can conduct the unsupervised segmentation. However, we now confront two choices for utilizing the AV feature: (1) using the unsupervised segmentation result (in the form of the 6-tag set mentioned in Section 2) as an auxiliary feature for each character within a sentence s when training CRFs (hereafter referred to as 'Auxiliary Seg'); (2) directly assigning the AV value calculated by formula (5) to a string under the best segmentation S of sentence s (hereafter referred to as 'NAV value'). In the latter case, we need to define a feature function to narrow down the value span of the AV feature to avoid the problem of data sparsity. Here, we adopt the same feature function as in [6], in which an integer t is used to logarithmize the score.
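The exact feature function from [6] is not shown above; one common way to "logarithmize" a score into a small integer bucket, which is all the snippet below is meant to illustrate, is to take a base-t logarithm of the NAV value.

```python
import math

def nav_feature(value, t=2):
    """Map a positive NAV score onto a coarse integer bucket (illustrative only)."""
    return int(math.log(value, t)) if value > 0 else 0
```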
Without any proof that either of the two methods of utilizing the AV feature is superior to the other, a controlled experiment is conducted in Section 4 to seek an explicit conclusion to this issue.
Evaluation results
This section reports the experimental results based on the CWS corpora from Bakeoff-4. The corpora consist of 5 data sets, namely CITYU, CKIP, CTB, NCC and SXU, on both closed and open tracks. The corpus from MSRA is simplified Chinese text while the other corpora are in traditional Chinese. The original label for the training data set is IOB-2. Here, we convert all the corpora to the 6-tag set introduced in Section 2.2.
Experiment setting
In the unsupervised method (both AV and NAV), the maximal segment length of a potential word is set to 6. The two parameters c and d in formulas (4) and (5) are set to 1 and 2 respectively, following the best setting achieved in our CWS system. Note that the AV values in the unsupervised segmentation phase are derived from both the training and testing corpora (in unsupervised segmentation, the training data is utilized as unlabeled data as well).
Two ways of utilizing AV value
To find out the better strategy for utilizing accessor variety, we conduct a controlled experiment on the closed tracks, that is, CWS with AV and CWS with NAV; the result is shown in Table 3. (Note: the parameter Norm in formula (5) for NAV is set to 2.5.) The final result indicates that the strategy with the 'NAV value' presents better performance. This may be explained by the error brought in by the 'Auxiliary Seg', which propagates through the whole sentence and thus misguides the CRF learner. As we can see from Table 4, NAV achieves comparatively higher performance when Norm is set to 2.5. Our experiment implies that when the parameter Norm is set within the span between 2 and 3, a relative performance improvement can be obtained. For the sake of convenience, the parameter Norm in formula (5) for NAV is set to 2.5 in the following experiments.
Performance of four systems
For the purpose of comparison, Table 5 lists the performance of four systems on the closed tracks, where 'baseline' denotes our CWS system participating in Bakeoff-4, which only utilizes the features defined in Table 2; '+AV' indicates that AV features are applied; '+NAV' indicates that normalized NAV features are involved; and 'best' indicates the topline achieved in Bakeoff-4. Close scrutiny of Table 5 indicates that '+AV' lifts the performance of the original CWS ('baseline') to a comparatively higher position, while '+NAV' performs best and is really comparable to the topline result. For the performance improvement of NAV, the normalization mechanism in formula (5) plays a key role. However, it is necessary to point out that the performance on CTB is slightly drawn down by the NAV feature compared to that of AV, yet it is still higher than the 'baseline' system. The value of 2.5 for Norm may not be a proper setting for this corpus, which can serve as a reasonable explanation for this abnormal phenomenon.
Performance of CWS open tracks
In this experiment group, we will report the performance of NAV on the open tracks.
In the open tracks, corpora from previous bakeoffs are involved to train the CRFs. Additionally, transformation-based error-driven learning (TBL) is also involved and used in the post-processing phase. Table 6 lists the corpora used to train the CRF and TBL learners in the open tracks. This experiment group aims at clarifying whether NAV can bring further performance improvement for CRFs in the open tracks. As a great amount of external resource is involved, the space for improvement left for NAV is really limited, which makes this a challenging task for NAV. Table 7 lists the results of NAV and four comparison systems on the CWS open tracks. With a stronger CRF model and an additional TBL learner, the performance of the 'baseline' system is boosted to a much higher level, as we can see from the comparison of Table 6 and Table 7. Still, performance improvement does occur under such circumstances, and the result brought by NAV (96.99) even surpasses the topline (96.97) on the CITYU data set. Thus, it demonstrates that accessor variety is also useful in the case of open tracks where a large amount of external resource is involved, and normalized accessor variety turns out to be more effective than the original AV value.
Conclusions
In this paper, we have proposed an effective method of incorporating an unsupervised segmentation method into the CRF model. To make the unsupervised strategy more reliable, a normalization strategy is involved. Our experiments justify that accessor variety used as the 'NAV value' presents better performance than the 'Auxiliary Seg' strategy. Although the core parameter Norm, if set differently, will bring about different results in the final evaluation, a creditable performance improvement can be obtained within a certain span. In the closed tracks of Bakeoff-4, the CRF model with the NAV method achieves performance comparable with the topline, while in the open tracks, NAV is still useful when a large amount of external resource is involved. Thus, NAV provides us with an effective way to further boost the performance of Chinese word segmentation. | 2015-06-05T01:59:53.000Z | 2009-09-01T00:00:00.000 | {
"year": 2009,
"sha1": "63f0ab9017817e8128e63fc72f6282831f47799e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "63f0ab9017817e8128e63fc72f6282831f47799e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225768034 | pes2o/s2orc | v3-fos-license | RETRACTED ARTICLE: Dynamic boundary of floating platform and its influence on the deepwater testing tube
Statement of Retraction We, the Editor and Publisher of the journal European Journal of Remote Sensing, have retracted the following articles that were published in the Special Issue titled “Remote Sensing in Water Management and Hydrology”: Marimuthu Karuppiah, Xiong Li & Shehzad Ashraf Chaudhry (2021) Guest editorial of the special issue “remote sensing in water management and hydrology”, European Journal of Remote Sensing, 54:sup2, 1-5, DOI: 10.1080/22797254.2021.1892335 Jian Sheng, Shiyi Jiang, Cunzhu Li, Quanfeng Liu & Hongyan Zhang (2021) Fluid-induced high seismicity in Songliao Basin of China, European Journal of Remote Sensing, 54:sup2, 6-10, DOI: 10.1080/22797254.2020.1720525 Guohua Wang, Jun Tan & Lingui Wang (2021) Numerical simulation of temperature field and temperature stress of thermal jet for water measurement, European Journal of Remote Sensing, 54:sup2, 11-20, DOI: 10.1080/22797254.2020.1743956 Le Wang, Guancheng Jiang & Xianmin Zhang (2021) Modeling and molecular simulation of natural gas hydrate stabilizers, European Journal of Remote Sensing, 54:sup2, 21-32, DOI: 10.1080/22797254.2020.1738901 Tianyi Chen, Lu Bao, Liu Bao Zhu, Yu Tian, Qing Xu & Yuandong Hu (2021) The diversity of birds in typical urban lake-wetlands and its response to the landscape heterogeneity in the buffer zone based on GIS and field investigation in Daqing, China, European Journal of Remote Sensing, 54:sup2, 33-41, DOI: 10.1080/22797254.2020.1738902 Zhiyong Wang (2021) Research on desert water management and desert control, European Journal of Remote Sensing, 54:sup2, 42-54, DOI: 10.1080/22797254.2020.1736953 Ji-Tao Li & Yong-Quan Liang (2021) Research on mesoscale eddy-tracking algorithm of Kalman filtering under density clustering on time scale, European Journal of Remote Sensing, 54:sup2, 55-64, DOI: 10.1080/22797254.2020.1740894 Wei Wang, R. 
Dinesh Jackson Samuel & Ching-Hsien Hsu (2021) Prediction architecture of deep learning assisted short long term neural network for advanced traffic critical prediction system using remote sensing data, European Journal of Remote Sensing, 54:sup2, 65-76, DOI: 10.1080/22797254.2020.1755998 Yan Chen, Ming Tan, Jiahua Wan, Thomas Weise & Zhize Wu (2021) Effectiveness evaluation of the coupled LIDs from the watershed scale based on remote sensing image processing and SWMM simulation, European Journal of Remote Sensing, 54:sup2, 77-91, DOI: 10.1080/22797254.2020.1758962 Ke Deng & Ming Chen (2021) Blasting excavation and stability control technology for ultra-high steep rock slope of hydropower engineering in China: a review, European Journal of Remote Sensing, 54:sup2, 92-106, DOI: 10.1080/22797254.2020.1752811 Yufa He, Xiaoqiang Guo, Jun Liu, Hongliang Zhao, Guorong Wang & Zhao Shu (2021) Dynamic boundary of floating platform and its influence on the deepwater testing tube, European Journal of Remote Sensing, 54:sup2, 107-116, DOI: 10.1080/22797254.2020.1762246 Kai Peng, Yunfeng Zhang, Wenfeng Gao & Zhen Lu (2021) Evaluation of human activity intensity in geological environment problems of Ji’nan City, European Journal of Remote Sensing, 54:sup2, 117-121, DOI: 10.1080/22797254.2020.1771214 Wei Zhu, XiaoSi Su & Qiang Liu (2021) Analysis of the relationships between the thermophysical properties of rocks in the Dandong Area of China, European Journal of Remote Sensing, 54:sup2, 122-131, DOI: 10.1080/22797254.2020.1763205 Yu Liu, Wen Hu, Shanwei Wang & Lingyun Sun (2021) Eco-environmental effects of urban expansion in Xinjiang and the corresponding mechanisms, European Journal of Remote Sensing, 54:sup2, 132-144, DOI: 10.1080/22797254.2020.1803768 Peng Qin & Zhihui Zhang (2021) Evolution of wetland landscape disturbance in Jiaozhou Gulf between 1973 and 2018 based on remote sensing, European Journal of Remote Sensing, 54:sup2, 145-154, DOI: 10.1080/22797254.2020.1758963 Mingyi Jin & Hongyan Zhang (2021) Investigating urban land dynamic change and its spatial determinants in Harbin city, China, European Journal of Remote Sensing, 54:sup2, 155-166, DOI: 10.1080/22797254.2020.1758964 Balaji L. & Muthukannan M. 
(2021) Investigation into valuation of land using remote sensing and GIS in Madurai, Tamilnadu, India, European Journal of Remote Sensing, 54:sup2, 167-175, DOI: 10.1080/22797254.2020.1772118 Xiaoyan Shi, Jianghui Song, Haijiang Wang & Xin Lv (2021) Monitoring soil salinization in Manas River Basin, Northwestern China based on multi-spectral index group, European Journal of Remote Sensing, 54:sup2, 176-188, DOI: 10.1080/22797254.2020.1762247 GN Vivekananda, R Swathi & AVLN Sujith (2021) Multi-temporal image analysis for LULC classification and change detection, European Journal of Remote Sensing, 54:sup2, 189-199, DOI: 10.1080/22797254.2020.1771215 Yiting Wang, Xianghui Liu & Weijie Hu (2021) The research on landscape restoration design of watercourse in mountainous city based on comprehensive management of water environment, European Journal of Remote Sensing, 54:sup2, 200-210, DOI: 10.1080/22797254.2020.1763206 Bao Qian, Cong Tang, Yu Yang & Xiao Xiao (2021) Pollution characteristics and risk assessment of heavy metals in the surface sediments of Dongting Lake water system during normal water period, European Journal of Remote Sensing, 54:sup2, 211-221, DOI: 10.1080/22797254.2020.1763207 Jin Zuo, Lei Meng, Chen Li, Heng Zhang, Yun Zeng & Jing Dong (2021) Construction of community life circle database based on high-resolution remote sensing technology and multi-source data fusion, European Journal of Remote Sensing, 54:sup2, 222-237, DOI: 10.1080/22797254.2020.1763208 Zilong Wang, Lu Yang, Ping Cheng, Youyi Yu, Zhigang Zhang & Hong Li (2021) Adsorption, degradation and leaching migration characteristics of chlorothalonil in different soils, European Journal of Remote Sensing, 54:sup2, 238-247, DOI: 10.1080/22797254.2020.1771216 R. Vijaya Geetha & S. Kalaivani (2021) A feature based change detection approach using multi-scale orientation for multi-temporal SAR images, European Journal of Remote Sensing, 54:sup2, 248-264, DOI: 10.1080/22797254.2020.1759457 LianJun Chen, BalaAnand Muthu & Sivaparthipan cb (2021) Estimating snow depth Inversion Model Assisted Vector Analysis based on temperature brightness for North Xinjiang region of China, European Journal of Remote Sensing, 54:sup2, 265-274, DOI: 10.1080/22797254.2020.1771217 Yajuan Zhang, Cuixia Li & Shuai Yao (2021) Spatiotemporal evolution characteristics of China’s cold chain logistics resources and agricultural product using remote sensing perspective, European Journal of Remote Sensing, 54:sup2, 275-283, DOI: 10.1080/22797254.2020.1765202 Guangping Liu, Jingmei Wei, BalaAnand Muthu & R. 
Dinesh Jackson Samuel (2021) Chlorophyll-a concentration in the hailing bay using remote sensing assisted sparse statistical modelling, European Journal of Remote Sensing, 54:sup2, 284-295, DOI: 10.1080/22797254.2020.1771774 Yishu Qiu, Zhenmin Zhu, Heping Huang & Zhenhua Bing (2021) Study on the evolution of B&Bs spatial distribution based on exploratory spatial data analysis (ESDA) and its influencing factors—with Yangtze River Delta as an example, European Journal of Remote Sensing, 54:sup2, 296-308, DOI: 10.1080/22797254.2020.1785950 Liang Li & Kangning Xiong (2021) Study on peak cluster-depression rocky desertification landscape evolution and human activity-influence in South of China, European Journal of Remote Sensing, 54:sup2, 309-317, DOI: 10.1080/22797254.2020.1777588 Juan Xu, Mengsheng Yang, Chaoping Hou, Ziliang Lu & Dan Liu (2021) Distribution of rural tourism development in geographical space: a case study of 323 traditional villages in Shaanxi, China, European Journal of Remote Sensing, 54:sup2, 318-333, DOI: 10.1080/22797254.2020.1788993 Lin Guo, Xiaojing Guo, Binghua Wu, Po Yang, Yafei Kou, Na Li & Hui Tang (2021) Geo-environmental suitability assessment for tunnel in sub-deep layer in Zhengzhou, European Journal of Remote Sensing, 54:sup2, 334-340, DOI: 10.1080/22797254.2020.1788994 Hui Zhou, Cheng Zhu, Li Wu, Chaogui Zheng, Xiaoling Sun, Qingchun Guo & Shuguang Lu (2021) Organic carbon isotope record since the Late Glacial period from peat in the North Bank of the Yangtze River, China, European Journal of Remote Sensing, 54:sup2, 341-347, DOI: 10.1080/22797254.2020.1795728 Chengyuan Hao, Linlin Song & Wei Zhao (2021) HYSPLIT-based demarcation of regions affected by water vapors from the South China Sea and the Bay of Bengal, European Journal of Remote Sensing, 54:sup2, 348-355, DOI: 10.1080/22797254.2020.1795730 Wei Chong, Zhang Lin-Jing, Wu Qing, Cao Lian-Hai, Zhang Lu, Yao Lun-Guang, Zhu Yun-Xian & Yang Feng (2021) Estimation of landscape pattern change on stream flow using SWAT-VRR, European Journal of Remote Sensing, 54:sup2, 356-362, DOI: 10.1080/22797254.2020.1790994 Kepeng Feng & Juncang Tian (2021) Forecasting reference evapotranspiration using data mining and limited climatic data, European Journal of Remote Sensing, 54:sup2, 363-371, DOI: 10.1080/22797254.2020.1801355 Kepeng Feng, Yang Hong, Juncang Tian, Xiangyu Luo, Guoqiang Tang & Guangyuan Kan (2021) Evaluating applicability of multi-source precipitation datasets for runoff simulation of small watersheds: a case study in the United States, European Journal of Remote Sensing, 54:sup2, 372-382, DOI: 10.1080/22797254.2020.1819169 Xiaowei Xu, Yinrong Chen, Junfeng Zhang, Yu Chen, Prathik Anandhan & Adhiyaman Manickam (2021) A novel approach for scene classification from remote sensing images using deep learning methods, European Journal of Remote Sensing, 54:sup2, 383-395, DOI: 10.1080/22797254.2020.1790995 Shanshan Hu, Zhaogang Fu, R. 
Dinesh Jackson Samuel & Prathik Anandhan (2021) Application of active remote sensing in confirmation rights and identification of mortgage supply-demand subjects of rural land in Guangdong Province, European Journal of Remote Sensing, 54:sup2, 396-404, DOI: 10.1080/22797254.2020.1790996 Chen Qiwei, Xiong Kangning & Zhao Rong (2021) Assessment on erosion risk based on GIS in typical Karst region of Southwest China, European Journal of Remote Sensing, 54:sup2, 405-416, DOI: 10.1080/22797254.2020.1793688 Zhengping Zhu, Bole Gao, Renfang Pan, Rong Li, Yang Li & Tianjun Huang (2021) A research on seismic forward modeling of hydrothermal dolomite:An example from Maokou formation in Wolonghe structure, eastern Sichuan Basin, SW China, European Journal of Remote Sensing, 54:sup2, 417-428, DOI: 10.1080/22797254.2020.1811160 Shaofeng Guo, Jianmin Zheng, Guohua Qiao & Xudong Wang (2021) A preliminary study on the Earth’s evolution and condensation, European Journal of Remote Sensing, 54:sup2, 429-437, DOI: 10.1080/22797254.2020.1830309 Yu Gao, Ying Zhang & Hedjar Alsulaiman (2021) Spatial structure system of land use along urban rail transit based on GIS spatial clustering, European Journal of Remote Sensing, 54:sup2, 438-445, DOI: 10.1080/22797254.2020.1801356 Xia Mu, Sihai Li, Haiyang Zhan & Zhuoran Yao (2021) On-orbit calibration of sun sensor’s central point error for triad, European Journal of Remote Sensing, 54:sup2, 446-457, DOI: 10.1080/22797254.2020.1814164 Following publication, the publisher identified concerns regarding the editorial handling of the special issue and the peer review process. Following an investigation by the Taylor & Francis Publishing Ethics & Integrity team in full cooperation with the Editor-in-Chief, it was confirmed that the articles included in this Special Issue were not peer-reviewed appropriately, in line with the Journal’s peer review standards and policy. As the stringency of the peer review process is core to the integrity of the publication process, the Editor and Publisher have decided to retract all of the articles within the above-named Special Issue. The journal has not confirmed if the authors were aware of this compromised peer review process. The journal is committed to correcting the scientific record and will fully cooperate with any institutional investigations into this matter. The authors have been informed of this decision. We have been informed in our decision-making by our editorial policies and the COPE guidelines. The retracted articles will remain online to maintain the scholarly record, but they will be digitally watermarked on each page as ‘Retracted’.
Introduction
Besides the wind, wave and ocean current, the test string system also has to bear the influence of the motions of the floating platform, such as the drift, swing and heave of the drilling boat or platform. So, it is of great practical significance to find out the influence of platform motion on the dynamic behavior of the test tube.
Static behavior analyses of highly deformed risers were carried out successively by Chucheepsakul and Monprapussorn (2000), Karunakaran et al. (1999) and Santillan and Virgin (2011). The vortex-induced vibration responses of marine risers were investigated by Xue et al. (2015) and Gao and Low (2016) through numerical simulation and theoretical analysis. Ge et al. (2019) presented a method for the fatigue analysis of marine risers. Guo et al. (2001), Meng and Guo (2012), Zhang et al. (2015) and Païdoussis et al. (2007) discussed the coupled effects of internal and external flow on the dynamic behavior of the testing tube by establishing dynamic models. The focus in the above studies was put on the lateral vibration of the riser under internal and external flow excitation. The influence of platform motion on the lower pipe string remained unclear.
A matrix iteration method based on a two-dimensional pipe-beam model was used by Zhu (1988) to solve the large deflection of marine risers and to investigate the effects of top tension, transverse drift of the drilling boat and wave load on the pipe string. They found that the classical solution based on the small displacement hypothesis causes great deviation in the dynamic analysis of the riser of a deep-water floating production system. The finite element method was used by Xie et al. (2011) to investigate the dynamic behavior and the variation of stress with time at different positions. In their computational model, the heave motion load, which is compensated by the compensation system of the hook, and the horizontal displacement of the drilling vessel were used respectively as the force boundary condition and the displacement boundary condition of the top of the test tube. A nonlinear finite element analysis model of a deep-water testing tube system was established by Liu et al. (2014) to determine the offset warning limits of the deep-water operation platform, the test work window and the safe operation boundary of pipe disconnection. Based on the conceptual design of FDPSO-TLD, the dynamic model of the TLD (tension deck) system was established by Lei et al. (2015) to investigate the axial dynamic response of the riser caused by ship heave. Zhang and Li (2015) studied the influence of the phugoid motion of the floating ship on the axial tension. OrcaFlex software was used by Gong et al. (2014) to study the dynamic superposition effect, including surface wave, ocean current, pipeline transport ship motion, and the collision between the pipeline and the pipe support roller. Wang et al. (2015) proposed a dynamic analysis method for a marine riser under the coupling action of forced excitation and parametric excitation, which are respectively induced by transverse waves and the rise-fall motion of the floating boat.
A new method to meet the requirements of riser stability and bottom tension allowance was developed by Yang et al. (2015) to investigate the influence of the real axial force and the effective axial force on the lifting mechanical properties. Liu et al. (2016) and Dai et al. (2014) proposed a drift dynamics model for the deep-water drilling platform and the riser system and studied the coupling dynamics characteristics of the deep-water drilling platform and riser system.
A comprehensive analysis of the motion of the platform and its influence on the mechanical behavior of the testing tube is still lacking. Therefore, the purpose of this paper is to analyze the various possible motions of the platform and their mathematical description, based on which the influences of platform motion on the dynamic behavior of the testing tube are investigated.
Dynamic boundary analysis of floating platform
Under the action of ocean environment loads (wind, wave and current), the floating platform mainly produces six kinds of motion responses: surging, swaying, heave, pitch, rolling and flat rolling (yaw). Among them, the effect of flat rolling on the floating platform is the smallest and can be ignored. Pitching and rolling are related to the swing characteristics of the floating platform. Surging and swaying belong to the motion in the horizontal direction, which directly affects the positioning of the floating platform. In general, the assumption that the wind, wave and ocean currents act in the same plane is made to analyze the limiting conditions of the pipe structure. In the analysis of the influence of the platform motion on the testing string, the main considerations are the pitching and heaving motions of the floating platform.
Motion limit of floating platform
The allowable motion limit, which is related to the motion compensation device and the operator's proficiency, is the basic requirement for the platform designer and is also the basis for the evaluation of the platform motion performance. In fact, the motion limit is related not only to the seabed condition, but also to the water depth and the motion period. The deeper the water, the smaller is the allowable range of motion. The smaller the period, the smaller is the allowable range of motion. The allowable motion limits of the semi-submersible platform and drilling ship of a well in Liwan in the South China Sea are listed in Table 1.
Heave motion analysis of floating platform
The floating platform is considered as a single-degree-of-freedom system; the sketch of the force analysis of the floating platform in heave motion is shown in Figure 1, in which S_w is the cross-sectional area of the platform at sea level (m^2), Z_0 is the draft depth of the platform in the stationary state (m), and G is the gravity of the platform (N).
As the floating platform is at rest in the water, its gravity and buoyancy are equal, G = mg = ρ g S_w Z_0 (1), where m is the mass of the platform (kg), ρ is the seawater density (kg/m^3) and g is the gravitational acceleration (m/s^2). The equation of vertical vibration is then m a = F_b' − G (2), where F_b' and a are respectively the instantaneous buoyancy (N) and the acceleration of the platform (m/s^2).
Equation (2) can be written in the differential form m z̈ + ρ g S_w z = 0 (4), where z is the heave displacement from the equilibrium position; equation (4) is a harmonic vibration equation of natural vibration with angular frequency ω (rad/s) and period T (s).
As the effect of seawater on the platform is considered, Equation ( 6) can be further written as
where k is the added mass coefficient depending on the shape of the lower part of the platform; for a circular section k = 1, and for a rectangular section k is selected from Table 2. The amplitude of the heave motion of the floating platform under wave force can be determined according to the method proposed in the literature (Guo et al., 2001), where u_w is the amplitude of the wave motion (m).
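Although equations (4)-(7) are not reproduced above, the restoring force of a surface-piercing body of waterplane area S_w leads to the familiar heave oscillator; the sketch below computes the corresponding natural frequency and period under the assumption that the added mass enters as (1 + k)m, which is how the coefficient k of Table 2 is normally used.

```python
import math

def heave_natural_period(m, s_w, k=0.0, rho=1025.0, g=9.81):
    """Natural heave frequency and period of a floating platform.

    m   : platform mass (kg)
    s_w : waterplane (cross-sectional) area at sea level (m^2)
    k   : added-mass coefficient (0 ignores the seawater effect)
    rho : seawater density (kg/m^3)
    Assumes the restoring force is -rho*g*s_w*z, i.e. the linear oscillator
    (1 + k)*m*z'' + rho*g*s_w*z = 0.
    """
    omega = math.sqrt(rho * g * s_w / ((1.0 + k) * m))
    return omega, 2.0 * math.pi / omega
```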
Nonlinear dynamic response of testing tube under platform motion
Establishment of finite element model
The 4½-inch Q-125 test tube of a well in Liwan in the South China Sea is taken as the studied object. In actual operation, as the wellhead on the mud line is fixed by a hanger as shown in Figure 2, it can be treated as a fixed constraint. So the dynamic response of the test tube in deep water is mainly affected by the marine environment and platform motion, and its numerical model is established in this section.
(2) Displacement and load boundary conditions
Pipe elements are used to discretize the testing tube shown in Figure 3, with the static liquid column pressure in the annulus between the test tube and the riser and the gas pressure inside the test tube taken into account. According to the actual condition, the wellhead on the mud line is simplified to a fixed constraint and the platform motion is taken as the dynamic boundary of the upper end of the testing tube. The wave period of deep water is taken as 9 s and the amplitude of the platform heave motion is assumed to be 1 m after the heave displacement is compensated by a compensation system. For the horizontal motion of the platform, 2% of the water depth is used to represent the average offset. According to these analyses, the dynamic boundary of the floating platform and the loads are shown in Table 3.
Effect of average offset of floating platform
The floating platform has an average deviation under the ocean environment load, which is mainly due to the effect of the steady current load and is the main part of the horizontal motion of the platform. To investigate the influence of the average deviation of the platform on the testing tube, the offset value is taken as 1%~6% of the water depth.
Figure 4 shows that the stress in the testing tube varies linearly with water depth, except for the sudden change in the tube section close to the mud-line wellhead. With the platform offset increasing, the stress level in the test tube increases overall. This phenomenon is similar to the influence of the top tension on the strength of the riser. The reason for this is that when the offset is
increased, the test string is stretched, and the hook load will be increased accordingly, resulting in increased stress in the test tube. The effect of average offset on the transverse displacement of the test tube is shown in Figure 5, which indicates a linear relation between the displacement and water depth.
In fact, an excessive offset will have a greater effect on the deep-water test operation. For instance, with an excessive deflection angle of the tube section near the mud line, it is difficult to raise and land the test string and to disconnect the underwater test tree in an emergency condition. In severe cases, the test tree may be stuck and unable to escape. So, in general, the deflection angle of the deep-water test string should not exceed 2 degrees. Because of the relatively small size of the drill string, its maximum deflection angle can reach about 9 degrees.
Combined effects of heave motion and mean offset
The dynamic behavior of the testing tube under the combined influence of mean offset and platform heave motion with a 9 s cycle is investigated. The dynamic boundary of the platform and the load parameters are shown in Table 4. The time-history responses of the stress, displacement, velocity and acceleration of the top, the middle position and the position of the mud line are shown in Figures 6-13. As can be seen from the figures, the stress and displacement amplitudes of the deep-water test tube decrease gradually from top to bottom, whereas the alternating amplitudes of the upper and lower ends are larger than those of the middle point. This is related to the top boundary of the test tube, the constraint of the mud line and the distribution of the damping. The velocity and acceleration decrease gradually from the top to the bottom, evidently because of the damping effect. The dynamic response of the test string is random under the condition of platform heave and mean deviation. Moreover, the wave motion, as well as the response of the platform, is stochastic. So, the response of the testing tube under the condition of heave motion and average deviation is also a stationary stochastic process.
Stress of the test tube under the limit displacements of the platform
As the floating platform heaves to the longitudinal limit positions y = ±1 m and offsets to the horizontal limit position x = ±40.5151 m (mean deviation plus slow drift amplitude, namely 2% × water depth + 0.3024√1450), the Mises stress in the testing tube is shown in Figure 14. It can be found in the figure that as the floating platform sinks to the limit position, the maximum stresses (131.754 MPa and 176.565 MPa) of the test tube appear at the bottom end connected to the hanger, whereas, with the floating platform rising to the limit position, the maximum stresses (228.616 MPa and 263.194 MPa) appear at the top, which is the position of the hook on the platform deck. The main reason for this phenomenon is that the falling motion counteracts the axial force produced by the offset of the platform, while the axial tension load of the testing tube increases due to the superposition of the rising motion and the offset of the platform.
Conclusion
The dynamic boundary of the floating platform has been analyzed and its influence on the dynamic response of the deep-water testing tube has been investigated. The following conclusions are drawn from the results obtained:
A larger offset of the platform causes a greater hook load and Mises stress in the testing tube. Moreover, an excessive offset, resulting in a large deflection angle at the lower end of the test tube, will lead to difficulty in raising and landing the testing tube. The average offset value should not exceed 6% of the water depth.
The coupling analysis of heave motion and mean offset shows that the response of the testing tube is a stationary stochastic process. The stress analysis of the test tube under the limit displacements of the platform shows that, as the floating platform sinks to the limit position, the maximum stresses of the test tube appear at the bottom end connected to the hanger, whereas as it rises to the limit position the maximum stresses appear at the top.
Figure 1. Force analysis of floating platform in heave motion direction.
Figure 2. Schematic diagram of simplified wellhead on mud line.
Figure 3. Element partition of test tube.
Figure 4. Effect of platform offset on the stress of test tube.
Figure 5. Effect of platform offset on the displacement of test tube.
Figure 6. Longitudinal displacement time-history response of upper node.
Figure 7. Longitudinal Mises stress time-history response of upper node.
Figure 8. Longitudinal acceleration time-history response of upper node.
Figure 9. Longitudinal velocity time-history response of the top node.
Figure 10. Longitudinal displacement time-history response of middle node.
Figure 11. Longitudinal velocity time-history response of the middle node.
Figure 12. Longitudinal acceleration time-history response of middle node.
Figure 13. Longitudinal Mises stress time-history response of middle node.
Figure 14. Mises stress distribution of test tube as floating platform is in limit position.
Table 1. Allowable motion limits of semi-submersible platform and drilling ship.
Table 2. Added mass coefficient of platform with rectangular lower body.
Table 4. Dynamic boundary of the platform and load parameters.
Table 4 (excerpt). Pressure in testing tube (from the mud-line wellhead to the platform): 28.63~25.11 MPa (measured); density of pipe: 7850 kg/m^3. | 2020-07-02T10:18:14.287Z | 2020-06-29T00:00:00.000 | {
"year": 2021,
"sha1": "5646ad4a18efc47d57736f59de945cf5b3ca9ab7",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/22797254.2020.1762246?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3f10d3915082f64207cfec5b9268a1d6f751d36e",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
235280052 | pes2o/s2orc | v3-fos-license | Ballistic Target Signal Separation Based on Differential Evolution Algorithm
During the mid-course flight of a ballistic target, it is very important to accurately identify the target, and separating the target micro-Doppler curves is the key to accurate identification. Aiming at this problem, this paper proposes and improves a differential evolution algorithm to separate the micro-Doppler curves of the scattering points. According to the results of the simulation experiment, the signals are separated well, which verifies the effectiveness of the proposed algorithm.
Introduction
In order to cope with the threat of ballistic missiles, all countries are rushing to study the construction of ballistic missile defense systems [1]. Ballistic target recognition technology is one of the core technical problems that need to be solved in ballistic missile defense. However, in recent years, the penetration technology of ballistic missiles has developed greatly. With the increasing maturity of sub-guided multi-warhead and missile decoy technology [2], ballistic target recognition technology based on traditional feature quantities has been unable to adapt to the demands of modern high-tech warfare. In 2000, Professor V. C. Chen extended the concept of micro-motion to the field of radar observation and pointed out that the micro-motion feature of a target is an inherent attribute of the target [3]. Because this characteristic is difficult to imitate, and different warheads and decoys have obviously different motion forms, it can be used to identify ballistic missiles.
Since precession produces micro-Doppler modulation of the radar echo, the micro-Doppler of the radar echo can be analyzed, and the characteristic parameters of the target can then be obtained for identification [4]-[7]. In order to study the characteristics of the target, it is necessary to separate the echo signals of the group targets and extract the micro-motion signals of the sub-targets separately. Literature [8] uses a segmented Viterbi algorithm to separate the compensated time-frequency curves. Literature [9] combined the Viterbi algorithm with adaptive field-of-view cluster matching and obtained the optimal matching path of the target micro-Doppler curve, which realized signal separation. Literature [10] uses the EMD algorithm to decompose the micro-Doppler characteristics of aircraft to achieve target resolution. Literature [11] uses the estimated period to segment the echo signal, and uses the support domain obtained from the strong energy region of the signal to perform time-frequency joint filtering on the echo, thereby separating different echoes. Literature [12] proposed a method of using sliding windows to separate the micro-motion curves, but this method did not address the separation of ballistic group targets with different motion laws. Literature [13] proposed a learning algorithm using C-means clustering to improve the ICA mixing matrix and then using sparse decomposition to separate the source signals, but this method relies on an accurate estimate of the mixing matrix; when the mixing matrix is inaccurate, the separation algorithm cannot play its due role. Based on the above research status, this paper proposes using a differential evolution algorithm to achieve signal separation by seeking the optimal solution. Aiming at the problems of the traditional differential evolution algorithm, which easily falls into local optima and premature convergence, its related strategies are improved and the JADE algorithm is employed. According to the results of the simulation experiment, signal separation is realized, which lays a good foundation for extracting the target feature parameters in the next step.
Precession model of ballistic target
The center of mass of the ballistic target is O. As shown in Figure 1, the reference coordinate system O-XYZ is established with O as the origin. The ballistic target performs conical rotation about the Z axis with coning angular velocity ω_c, and spins about its own symmetry axis with spin angular velocity ω_s. The angle between the cone axis and the spin axis is the precession angle. At the beginning, the azimuth and elevation angles of the radar line of sight LOS in the reference coordinate system O-XYZ are specified. The body coordinate system of the ballistic target is O-xyz; rotating by the initial Euler angles around the z axis, the x axis and the y axis in turn transforms the body frame into the reference coordinate system. The distance from the center of mass O of the target to the vertex A of the cone is l_1, and the distance to the center of the bottom surface of the cone is l_2.
With r_0 denoting the initial position vector of the scattering point in the body coordinate system, the position vector of the scattering point in the reference coordinate system at time t is obtained, as given by equation (1), by applying the coning matrix T_c, the spin matrix T_s and the Euler rotation matrix R_int to r_0 [14]. In the formula, T_c and T_s are the rotation matrices associated with the coning and spin motions of the ballistic target, and R_int is the Euler rotation matrix, which is determined by the initial Euler angles.
According to the literature [15] and the Rodrigues formula [16], the expressions for the T_c, T_s and R_int matrices can be obtained; in these expressions I is the identity matrix and the hatted quantities are the skew-symmetric matrices of ω_c and ω_s, respectively. Constructing the corresponding skew-symmetric matrix [17] and using equation (5), the distance between the scattering point and the radar at time t follows, where R_0 is the position vector of the warhead's rotation center from the radar and n_LOS is the direction of the radar's line of sight.
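A minimal sketch of how the precessing scattering point can be propagated with the Rodrigues formula is given below. Since the exact form of equation (1) is not legible in the excerpt, the composition order T_c @ T_s @ R_int used here is an assumption, and the axes, angular rates and r_0 are placeholders.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix v_hat such that v_hat @ x = v x x (cross product)."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def rodrigues(axis, angle):
    """Rotation matrix about a unit axis by 'angle' (Rodrigues formula)."""
    k = skew(axis / np.linalg.norm(axis))
    return np.eye(3) + np.sin(angle) * k + (1.0 - np.cos(angle)) * (k @ k)

def scatterer_position(t, r0, cone_axis, spin_axis, w_c, w_s, r_int=np.eye(3)):
    """Position of a scattering point at time t, assuming r(t) = T_c T_s R_int r0."""
    t_c = rodrigues(cone_axis, w_c * t)   # coning about the cone axis
    t_s = rodrigues(spin_axis, w_s * t)   # spin about the symmetry axis
    return t_c @ t_s @ r_int @ r0
```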
According to formula , the echo signal of the target can be obtained, and the fast Fourier transform is used, and then the envelope is obliquely removed, and a high-resolution range image can be obtained [17] In the formula: p T is the pulse width, is the frequency modulation, c f is the radar carrier frequency, and R is the radial distance between the scattering center and the rotation center.
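To make the geometry of Eq. (1) concrete, the short Python sketch below builds the coning and spin rotations with the Rodrigues formula and projects a scatterer's position onto the radar line of sight to obtain its micro-range history. All numerical values (rates, precession angle, scatterer position) are illustrative assumptions, not the parameters of the paper, and R_int is taken as the identity for simplicity.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about unit vector `axis`
    (Rodrigues formula: R = I + sin(a)*K + (1 - cos(a))*K^2)."""
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Illustrative parameters: coning about Z, spin about the symmetry axis tilted by theta.
omega_c, omega_s = 2 * np.pi * 2.0, 2 * np.pi * 5.0   # rad/s, coning and spin rates (assumed)
theta = np.deg2rad(10.0)                               # precession angle (assumed)
spin_axis = np.array([np.sin(theta), 0.0, np.cos(theta)])
r0 = np.array([0.0, 0.2, 0.9])                         # scatterer position in the body frame, m (assumed)
n_los = np.array([-1.0, -3.0, 5.0]) / np.sqrt(35.0)    # radar line-of-sight direction

t = np.linspace(0.0, 2.0, 1000)
micro_range = np.empty_like(t)
for i, ti in enumerate(t):
    Tc = rodrigues([0.0, 0.0, 1.0], omega_c * ti)      # coning rotation T_c
    Ts = rodrigues(spin_axis, omega_s * ti)            # spin rotation T_s
    r = Tc @ Ts @ r0                                   # Eq. (1) with R_int = I
    micro_range[i] = r @ n_los                         # micro-range along the LOS
# The micro-Doppler follows as f_mD = (2 / wavelength) * d(micro_range)/dt.
```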
Micro-motion information signal separation
3.1 Introduction and principle of the differential evolution algorithm
Differential Evolution (DE) is a heuristic random search algorithm based on differences within a population. It was proposed by R. Storn and K. Price for fitting Chebyshev polynomials and is mainly used for real-valued optimization problems. DE is a population-based, adaptive global optimization method belonging to the family of evolutionary algorithms. Because of its simple structure, ease of implementation, fast convergence, and strong robustness, it is widely used in data mining, pattern recognition, digital filter design, artificial neural networks, electromagnetics, and other fields [18]. The DE algorithm encodes population individuals as floating-point vectors. During optimization, two individuals are first selected from the parent population and their vector difference is computed; this difference vector is then scaled and added to a third individual to generate a mutant individual. Next, the parent individual and the corresponding mutant are crossed to generate a new trial (offspring) individual. Finally, a selection step compares the parent and the trial individual, and the better one is kept for the next generation. Figure 2 shows the flow chart of the DE algorithm [19]. Suppose the optimization model is

min f(x_1, x_2, ..., x_D)
Differential evolution algorithm process
s.t.  x_j^L ≤ x_j ≤ x_j^U,  j = 1, 2, ..., D,
where D is the dimension of the solution space, and x_j^L and x_j^U are the lower and upper bounds of the value range of the j-th component x_j.
Initial population
The individuals of the initial population are generated randomly as
x_{i,j}(0) = x_j^L + rand(0, 1) · (x_j^U − x_j^L),
where i indexes the individuals of the population, j = 1, 2, ..., D, and rand(0, 1) is a random number uniformly distributed in the interval (0, 1).
Mutation
The most significant difference between the differential evolution algorithm and the genetic algorithm is that in DE the mutation of an individual is realized through a differential strategy. The most commonly used strategy (DE/rand/1) is
v_i(g + 1) = x_{r1}(g) + F · (x_{r2}(g) − x_{r3}(g)),  r1 ≠ r2 ≠ r3 ≠ i,
where F is the scaling factor and x_i(g) is the i-th individual of the g-th generation population.
That is, by randomly selecting two different individuals in the population, the vector difference is scaled and then the vector is synthesized with the individual to be mutated.
In the evolution process, the validity of the newly generated solution must be guaranteed, so it must be judged whether the generated solution meets the boundary conditions, if not, it needs to be regenerated (the generation scheme is the same as the initial population).
Crossover
The g-th generation population and its mutant intermediates are crossed component-wise:
u_{i,j}(g + 1) = v_{i,j}(g + 1) if rand(0, 1) ≤ CR or j = j_rand, otherwise x_{i,j}(g),
where CR is the crossover probability and j_rand is a random integer drawn from [1, 2, ..., D], which guarantees that at least one component of the trial vector comes from the mutant.
Selection
A greedy criterion selects the individuals of the next generation: the trial vector replaces its parent only if it achieves an equal or better objective value,
x_i(g + 1) = u_i(g + 1) if f(u_i(g + 1)) ≤ f(x_i(g)), otherwise x_i(g).
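The initialization, mutation, crossover, and greedy-selection steps above can be condensed into a short routine. The following Python sketch is a minimal DE/rand/1/bin implementation for illustration only, not the implementation used in this paper; the population size, scaling factor, and crossover probability are assumed values, and out-of-bounds mutants are clipped to the box rather than regenerated as prescribed above.

```python
import numpy as np

def differential_evolution(f, bounds, NP=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: initialization, mutation, crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T               # bounds: sequence of (low, high) per dimension
    D = lo.size
    pop = lo + rng.random((NP, D)) * (hi - lo)          # random initial population
    fit = np.array([f(x) for x in pop])
    for g in range(generations):
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])        # mutation (DE/rand/1)
            v = np.clip(v, lo, hi)                       # clipping; the paper regenerates instead
            jrand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[jrand] = True                           # binomial crossover, force one mutant gene
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:                             # greedy selection
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = differential_evolution(lambda x: np.sum(x**2), [(-5.0, 5.0)] * 5)
```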
Improvement of DE algorithm
As the number of iterations increases, the differences between individuals in the DE population gradually shrink and the convergence speed drops, which makes the DE algorithm prone to premature convergence to a local optimum. Improvements to the classical DE algorithm are therefore needed to increase its search ability and convergence speed and to overcome premature convergence; for this purpose the JADE algorithm is adopted. According to the dimension D of the problem, each candidate solution is represented by a D-dimensional vector, each dimension having its own upper and lower bounds, and a fixed number NP of such vectors forms the population.
The JADE algorithm is an improvement of the DE algorithm, so the basic logic remains unchanged. The main differences are as follows. 1. A new mutation strategy is used:
v_i(g) = x_i(g) + F_i · (x_best(g) − x_i(g)) + F_i · (x_{r1}(g) − x_{r2}(g)),
where x_best(g) is the best individual of the current population. 2. Self-adaptation of F: the F_i of each individual in each generation is a random number drawn from a Cauchy distribution with location parameter μ_F and scale parameter 0.1, and μ_F is updated after each generation as
μ_F = (1 − c) · μ_F + c · mean(S_F).
3. Self-adaptation of CR: the CR_i of each individual is drawn from a normal distribution with mean μ_CR and standard deviation 0.1, and μ_CR is updated as [20]
μ_CR = (1 − c) · μ_CR + c · mean_A(S_CR),
where the initial value of μ_CR is 0.5 and c is a constant, generally with 1/c ∈ [5, 20]. Here mean_A denotes the arithmetic mean, and S_CR (respectively S_F) is the set of CR_i (respectively F_i) values for which the offspring replaced its parent in the current generation.
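A minimal sketch of the JADE parameter self-adaptation described above is given below. The mutation strategy appears as a comment; the Lehmer mean used for μ_F follows the standard JADE formulation and is an assumption where the text above is ambiguous, and the S_CR / S_F values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_CR, mu_F, c = 0.5, 0.5, 0.1            # initial means and adaptation constant (1/c in [5, 20])

def sample_F(mu_F):
    """Draw F_i from Cauchy(mu_F, 0.1); redraw non-positive values and truncate to 1."""
    while True:
        F = mu_F + 0.1 * np.tan(np.pi * (rng.random() - 0.5))   # inverse-CDF Cauchy sample
        if F > 0.0:
            return min(F, 1.0)

def sample_CR(mu_CR):
    """Draw CR_i from N(mu_CR, 0.1), clipped to [0, 1]."""
    return float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))

def lehmer_mean(values):
    v = np.asarray(values, float)
    return (v**2).sum() / v.sum()

# JADE mutation (current-to-best/1):
#   v_i = x_i + F_i*(x_best - x_i) + F_i*(x_r1 - x_r2)

# After one generation, S_CR and S_F hold the CR_i / F_i of offspring that replaced their parents.
S_CR, S_F = [0.4, 0.6, 0.55], [0.7, 0.5, 0.9]             # illustrative placeholder values
mu_CR = (1 - c) * mu_CR + c * np.mean(S_CR)               # arithmetic mean for CR
mu_F  = (1 - c) * mu_F  + c * lehmer_mean(S_F)            # Lehmer mean for F (standard JADE choice)
```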
Simulation
The cone-target motion parameters (the distances l_1 and l_2, the precession angle, and the coning and spin rates) take fixed values in the simulation. The radar is located at (100, 300, −500) km in the reference coordinate system, and the radar line of sight is n_1 = (−1, −3, 5)/√35. Using the range expression and related parameters obtained in the previous section, the motion of the cone target is simulated and a radar echo of 2 s duration is generated, giving Figure 3. Figure 4 shows the result of applying the JADE algorithm to separate the micro-Doppler signals of the cone target. As can be seen in Figure 4, the micro-Doppler curves of the scattering points are well separated.
Conclusion
The radar micro-Doppler curve of a ballistic target is the superposition of the contributions of multiple scattering points, so for subsequent research the micro-Doppler curves of the scattering points must first be separated. This paper applies the DE algorithm, improves it into the JADE algorithm to address its shortcomings, and thereby realizes the separation of the signals. The simulation experiments verify the effectiveness of the algorithm and show good signal separation. In practice, the scattering characteristics of real targets are more complicated, and during flight a target is accompanied by many warhead fragments and decoys. A next step is therefore to study the micro-motion characteristics of group targets with more complex scattering points.
At the same time, after the signals of the ballistic targets are separated, the next step is feature extraction and recognition of the targets. By deriving mathematical expressions for the relevant parameters, a neural network algorithm can then be introduced to perform feature extraction, classification, and recognition of the targets. | 2021-06-03T00:51:02.704Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "30f361d6728806b410c98ac9fb8ad99a1778e7be",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1883/1/012005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "30f361d6728806b410c98ac9fb8ad99a1778e7be",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
246982248 | pes2o/s2orc | v3-fos-license | Extended Kalman Filter deconvolution for extracting accurate seismic reflectivity
Deconvolution attempts to compensate for the distortions affecting a recorded seismogram, increasing its bandwidth and extracting the subsurface reflectivity from the seismic trace. The estimated reflectivity must have the highest possible reliability and resolution because of its subsequent use in the pre-stack seismic processing sequence and in seismic inversion. We implemented predictive deconvolution, homomorphic Phase Inversion deconvolution, and Extended Kalman Filter deconvolution. Their application to synthetic traces yielded reflectivities whose comparison with well-log data allowed the reliability of the methods to be compared. Applying the algorithms to an offshore record provided results whose comparison made it possible to analyze the impact of the deconvolution assumptions on the performance of each method.
Introduction
The distortion, signal weakening, and the loss of resolution affect the wavelet during propagation, masking the recorded seismograms' information. Mathematically, the seismic trace results from the convolution of Earth's reflectivity profile with the signature of energy released by the source (Yilmaz, 2008). Nevertheless, deconvolution is a linear operator that compensates for the distortion of a recorded signal, increases the seismic data bandwidth, and extracts the Earth's reflectivity. The reflectivity profile requires the highest reliability and resolution possible because it is the capital input of subsequent pre-stack steps of a data processing sequence and seismic inversion procedures. The most used deconvolution in the oil industry is the double inverse of Wiener-Levinson in time WLDI (Robinson and Treitel, 2000) and frequency FDD (Claerbout, 1985). In the case of a free noise seismogram and known stationary minimumphase wavelet, the deterministic deconvolution supplies a highly trustworthy reflectivity. The wavelet is estimable in offshore data but not on onshore one, and in both cases, it is still non-stationary and noise-contained. The free noise assumption is tough to honor due to the impossibility of getting signals utterly free of noise. On the other side, the predictive deconvolution seeks the prediction error representing the reflectivity function. When the prediction distance is one sample, the prediction error filter becomes the optimal zero-lag inverse filter, appropriate in the often fair minimum phase approximation. Even though predictive deconvolution has been a handy tool for several years, it is ineffective under any infringement of the three underlying assumptions. Besides, there is the white spectrum reflectivity assumption. Nevertheless, when the rock layering is periodic, its reflectivity sequence is not random, and the processing flow must resort to alternative methods. Even though the extensive use of statistical procedures, there is no comprehensive response to the three anterior suppositions (Ziolkowski, 1991).
The Homomorphic deconvolution -HOMD (Ulrych, 1971) and the Phase Inversion deconvolution -PID (Lichman and Northwood, 1995) estimate the amplitude spectra of wavelet and reflectivity in the Cepstrum domain where these spectra must not overlap. Both deconvolutions have to fulfill the stationary and the noise-free wavelet assumptions but not the random reflectivity and the minimum phase ones (Arya and Holden, 1978). Crump (1974) designed the Kalman Filter matrixes for deconvolution, and later, Mahalanabis et al. (1983) improved the storage and updating of the matrix by estimating both the smoothed forward and backward prediction residuals of the trace, turning the algorithm computationally more efficient. Despite the above, the high computational cost remains. Recently, Deng et al. (2016) presented a Kalman Filter approach where the reverse wavelet slides over the reflectivity function instead of slides the reverse-reflectivity over the wavelet, as the conventional Kalman approach does. As a result, the number of parameters can diminish until one, and its selection should balance resolution and noise. The Kalman Filter for deconvolution substantially extends the Wiener filtering to accommodate time-varying processes, without supposing assumptions, except noise with a normal distribution of mean zero. In this research, we designed and implemented in Matlab an Extended Kalman Filter to Adaptative deconvolution -EKFD of seismic data based on the approximation of the linear system through the extension of the discrete Kalman Filter (Julier and Uhlmann, 1997). Besides, we implemented in Matlab the deconvolution methods of the double inverse of Wiener-Levinson, Phase Inversion, and Extended Kalman Filter. Finally, the comparison of its outputs allows us to know the impact of the suppositions of deconvolution on the performances of considered methods.
Theory
If a wavelet w(t) remains constant during its propagation, the reflected signal is the superposition of delayed wavelets, with their amplitudes scaled according to the reflectivity r(t) encountered along the path and the degree of geometrical divergence. According to the convolution model, a seismic trace x(t) contaminated with noise n(t) is

x(t) = w(t) * r(t) + n(t),   (1)

where the * symbol represents the convolution operator. The assumptions of an isotropic medium with horizontal, parallel layers and of a plane wavelet incident normally on the interfaces are necessary to construct the convolutional model. The absence of noise, a known stationary minimum-phase wavelet, and a random reflectivity are the assumptions required to solve equation 1. Deconvolution attempts to remove the wavelet from the seismic trace to retrieve the earth reflectivity; under the above restrictions, deterministic deconvolution solves equation 1.
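As a small illustration of equation 1, the following Python sketch builds a noisy synthetic trace by convolving a sparse reflectivity series with a wavelet. The Ricker wavelet, reflector positions, and noise level are arbitrary assumptions, not the values used later in this paper.

```python
import numpy as np

def ricker(f0, dt, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f0 (Hz), standing in for the source signature."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001                                   # 1 ms sampling
r = np.zeros(1000)                           # sparse reflectivity series (illustrative)
r[[120, 300, 310, 550, 800]] = [0.4, -0.3, 0.25, -0.2, 0.35]
w = ricker(30.0, dt)
noise = 0.02 * np.random.default_rng(0).standard_normal(r.size)
x = np.convolve(r, w, mode="same") + noise   # Eq. (1): x(t) = w(t) * r(t) + n(t)
```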
Of course, it is impossible in onshore projects to determine without any uncertainty the wavelet from explosive sources, and vibrators and in offshore from air guns, making unfeasible the deterministic convolution.
WLDI is a widely used stochastic approach to solving equation 1. It builds an optimal filter by minimizing the mean square error ε between the desired signal d(t) and the output of the filter f(t) applied to the recorded trace y(t),

ε = Σ_t [ d(t) − Σ_k f(k) y(t − k) ]².

The minimizing condition, ∂ε/∂f(t) = 0 for all t = 1 ⋯ N, provides the following set of N coupled normal equations (Robinson and Treitel, 2000):

Σ_k f(k) A(k − t) = C(t),  t = 1 ⋯ N,   (4)

where the vector C represents the cross-correlation between the desired signal d and the trace y, and A(k − t) is the Toeplitz matrix representing the autocorrelation of y. A recursive (Levinson) approach solves the system of equations 4, i.e., provides the filter f that extracts the reflectivity. WLDI supposes a random reflectivity, which implies that the trace autocorrelation is a scaled version of the wavelet autocorrelation. In addition, WLDI assumes the absence of noise, a stationary minimum-phase wavelet, a chosen filter length, and a pre-whitening factor that guarantees the stability of the algorithm. However, some researchers (Arya and Holden, 1978; Jurkevics and Wiggins, 1984) showed that WLDI is unreliable when these assumptions are not met.
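A compact way to realize the Wiener-Levinson idea numerically is to solve the Toeplitz normal equations for a zero-lag (spiking) filter, as sketched below. This is an illustrative implementation, not the authors' code: the filter length and pre-whitening value are assumptions, and SciPy's Toeplitz solver stands in for the Levinson recursion.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(x, n_filter=80, prewhitening=0.001):
    """Zero-lag (spiking) Wiener filter: solve the Toeplitz normal equations A f = g,
    where A is built from the trace autocorrelation and g is the desired cross-correlation,
    here a zero-lag spike (valid under the minimum-phase / white-reflectivity assumptions)."""
    full = np.correlate(x, x, mode="full")
    r = full[full.size // 2: full.size // 2 + n_filter].copy()   # one-sided autocorrelation
    r[0] *= (1.0 + prewhitening)                                 # stabilize the inversion
    g = np.zeros(n_filter)
    g[0] = 1.0                                                   # desired output: spike at lag 0 (scale arbitrary)
    f = solve_toeplitz(r, g)                                     # Toeplitz solve (Levinson-type)
    return np.convolve(x, f, mode="same")                        # estimated reflectivity
```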
In the noise-free case, taking the Napierian (natural) logarithm of the Fourier transform of equation 1 gives

ln X(ω) = ln R(ω) + ln W(ω),   (5)

where X(ω), R(ω), and W(ω) are the amplitude spectra of x(t), r(t), and w(t), respectively.
Using equation 5, Ulrych (1971) attempted to separate R(ω) and W(ω) by transforming equation 5 back to time with the inverse Fourier transform:

x̂(t) = F⁻¹{ln X(ω)} = F⁻¹{ln R(ω)} + F⁻¹{ln W(ω)} = r̂(t) + ŵ(t).   (6)

Since W(ω) is a smoother function than R(ω), the two amplitude contributions are separable in this so-called cepstrum domain, although their phase spectra are not (Lichman, 1999). A low-pass filter retrieves the wavelet contribution, whereas a high-pass filter recovers the reflectivity contribution, and the separation is maximal for a minimum-phase wavelet. This HOMD approach requires neither a random reflectivity nor a minimum-phase wavelet, but it assumes that R(ω) and W(ω) do not overlap in the cepstrum domain (Arya and Holden, 1978). On the other hand, the recovery of the wavelet phase spectrum is not a well-established procedure and depends largely on the processor (Lichman and Northwood, 1995).
The PID (Lichman and Northwood, 1995) is a homomorphic deconvolution that retrieves the wavelet phase spectrum φ_w(ω) from its amplitude spectrum using the following Hilbert transform relationship:

φ_w(ω) = −(1/π) P ∫ ln W(ω') / (ω' − ω) dω'.   (7)

In equation 7, P denotes the Cauchy principal value. However, neither HOMD nor PID can fully separate the spectra in the presence of low-frequency noise or when the reflectivity contains low-frequency components.
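The cepstral separation underlying HOMD and PID can be illustrated with a few lines of Python: the log-amplitude spectrum is mapped to the cepstrum, where a simple quefrency cutoff splits the smooth wavelet contribution from the rough reflectivity contribution. This is only a crude sketch, the cutoff value is an assumption, and the phase spectrum is deliberately ignored, as discussed above.

```python
import numpy as np

def homomorphic_split(x, cutoff=20):
    """Split the amplitude spectrum of trace x into a smooth (wavelet-like) part and a
    rough (reflectivity-like) part by liftering the real cepstrum at a quefrency cutoff."""
    X = np.fft.rfft(x)
    log_amp = np.log(np.abs(X) + 1e-12)
    ceps = np.fft.irfft(log_amp)                   # real cepstrum of the amplitude spectrum
    low = ceps.copy()
    low[cutoff:-cutoff] = 0.0                      # low-pass lifter  -> wavelet contribution
    high = ceps - low                              # high-pass lifter -> reflectivity contribution
    W_amp = np.exp(np.fft.rfft(low).real)          # wavelet amplitude spectrum estimate
    R_amp = np.exp(np.fft.rfft(high).real)         # reflectivity amplitude spectrum estimate
    return W_amp, R_amp
```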
Kalman Filter
The Kalman Filter (Kalman, 1960) optimally controls and estimates linear system models driven by white noise. It achieves the best estimate of a hidden variable immersed in a measurement, based on the information supplied by the sensors, the control action, and the system state at the previous instant. Analytically, the Kalman Filter assumptions are: A) the measurement noise v_k has a normal distribution with zero mean, 〈v_k〉 = 0, and diagonal covariance matrix R_k = E[v_k v_k^T]; B) the process noise ω_k has a normal distribution with zero mean, 〈ω_k〉 = 0, and diagonal covariance matrix Q_k = E[ω_k ω_k^T]; C) the measurement and process noises are independent, i.e., Cov(v_k, ω_k) = 0.
A set of variables characterizes the system at time k and defines the state x_k. Equation 10 relates the states x_k and x_{k−1} at instants k and k − 1,

x_k = Φ_k x_{k−1} + Γ_k u_k + ω_k,   (10)

where Φ_k is the state transition matrix, Γ_k is the control action matrix, and u_k is the control action on the system.
Equation 11 relates the system state x_k to the sensor measurements z_k at instant k through the measurement matrix H_k and the random noise v_k:

z_k = H_k x_k + v_k.   (11)
In the first (prediction) phase, the Kalman Filter obtains a first estimate x̂_k⁻ of the current system state from the previously corrected state x̂_{k−1} using equation 10,

x̂_k⁻ = Φ_k x̂_{k−1} + Γ_k u_k.   (12)

Equation 13 relates the covariance matrix P_k⁻ of the estimated state to the covariance matrix P_{k−1} of the corrected state, the process noise covariance matrix Q_k, the state transition matrix Φ_k, and its transpose Φ_k^T:

P_k⁻ = Φ_k P_{k−1} Φ_k^T + Q_k.   (13)
The second (update) step calculates the Kalman gain matrix K_k = Cov(x_k, z_k)/Cov(z_k, z_k) to diminish the uncertainty; expressed in terms of the system matrices,

K_k = P_k⁻ H_k^T (H_k P_k⁻ H_k^T + R_k)⁻¹.

The corrected state becomes

x̂_k = x̂_k⁻ + K_k (z_k − H_k x̂_k⁻),

and the corrected state covariance matrix is

P_k = (I − K_k H_k) P_k⁻.
Extended Kalman Filter
To overcome the fact that non-linear systems do not meet the Kalman assumptions, Julier and Uhlmann (1997) proposed the Extended Kalman Filter approximation. In this approach, the transition and measurement relations become non-linear functions, x_k = f(x_{k−1}, u_k) + ω_k and z_k = h(x_k) + v_k, and equation 13 transforms into

P_k⁻ = F_k P_{k−1} F_k^T + Q_k,

where F_k is the Jacobian matrix of the state transition system, constructed from the first-order partial derivatives of the state transition equation 10 evaluated at ω_k = 0, F_k = ∂f/∂x evaluated at (x̂_{k−1}, u_k). The Extended Kalman Filter gain is now

K_k = P_k⁻ H_k^T (H_k P_k⁻ H_k^T + R_k)⁻¹,

where H_k is the Jacobian matrix of the state-measurement system, constructed from the first-order partial derivatives of the measurement equation 11 evaluated at v_k = 0, H_k = ∂h/∂x evaluated at x̂_k⁻. The corrected state of the system and its covariance matrix are

x̂_k = x̂_k⁻ + K_k (z_k − h(x̂_k⁻)),    P_k = (I − K_k H_k) P_k⁻.

The above Extended Kalman Filter expressions correspond to a first-order approximation. Their reliability depends strongly on the degree of non-linearity of the functions and on the assumption of small variations within each time interval. The EKFD approach to deconvolution is essentially a predictive deconvolution that can handle time-varying processes (Arya and Holden, 1978).
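For reference, the prediction and update relations above condense into the short routine below. It is a generic first-order EKF step, not the EKFD implementation of this paper; the user supplies the transition and measurement functions and their Jacobians, and the control input is omitted for brevity.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a first-order Extended Kalman Filter.
    f, h: state-transition and measurement functions; F_jac, H_jac: their Jacobians."""
    # Prediction
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```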
Methodology
Three different Matlab codes implemented the deconvolutions WLDI, PID, and EKFD. They extracted reflectivity from a synthetic seismogram constructed by convolving a 62 Hz causal Sinc wavelet with a welllog reflectivity. The first tests focused on the impact of the assumptions of the noise in the signal and the reflectivity randomness, and another one on the effect of using non-stationarity wavelet. To evaluate the misfit caused by the noise in the seismic trace WLDI, PID, and EKFD deconvolved synthetic traces with different S/N ratios, equating the standard deviation of whitenoise with the standard deviation of the seismogram.
To measure the effect of randomness, WLDI, PID, and EKFD extracted the reflectivity of a seismogram generated by convolving the wavelet with a non-random reflectivity profile, and the misfit between both reflectivities was estimated. A final test considered synthetic traces built by convolution with a non-stationary wavelet, quantifying the errors of WLDI, PID, and EKFD during deconvolution. In addition, Ricker, Sinc, and damped-sine wavelets were used as input models to EKFD to estimate their associated errors.
The guiding function chosen is the complete synthetic seismogram, with q(n) = σ·3^(−n) and v(m) = σ·3^(−m), where σ is the standard deviation of the trace and 0 ≤ n, m ≤ 20. In a final step, thirteen state transition functions in EKFD provided output errors that determined their impact.
In the final step, applying the three algorithms to a real shot gather provided images whose quality measures their performances. The common shot gather has 564 traces with 5 seconds record length with a 1 ms sampling rate. The pre-processing of the shot-gather includes amplitude recovery, refraction statics, and attenuation of the direct wave. The evaluation of the results took into account frequency content, reflector continuity, and time-resolution. Notably, the errors associated with the parameters input to the EKFD comprise the wavelet model, guide function, states transition function, processing noise factor q, and noise measurement factor v, and indicated their selection. Figure 1A shows the causal 60 Hz Sinc wavelet, and Figure 1B shows the well-log reflectivity profile that is 60% random. In contrast, Figure 1C contains the synthetic seismogram supplied by the convolution between the two anterior. Figure 1D depicts the quasinormal distribution of the reflectivity coefficients with 0.0015 mean nearby to zero. The sampling interval is 1 ms for all charts. Figure 2A shows the searched reflectivity profile; meanwhile, Figures 2B, 2C, 2D, and 2E contain the reflectivities estimated by KEFD using Spike, Ricker, Sine-damped, and Sinc wavelet, respectively. On the other hand, Figures 2F, 2G, 2H, and 2I depict the errors caused by each anterior wavelet in KEFD. A red box encloses part of the reflectivity profiles to improve the visualization of the result comparison. When using a spike as an input model, the EKFD works like an identity operator because the input ( Figure 1C) equals the output ( Figure 2B), achieving the highest error shown in Figure 2F. In this extreme case, the use of a spike wavelet makes EKFD out-off-use. On the other extreme, when using a 60 Hz Sinc as an input model, the reflectivity furnished by EKFD in Figure 2E is almost equal to the one observed in the well. In such circumstances, EKFD becomes a deterministic deconvolution with the lowest error contained in Figure 2I. The seismogram in Figure 2C (Ricker), Figure 2D (damped Sine) and Figure 2E (Sinc) look nearly identical to Figure 2A. In these cases, the input models are near similar to the source. Figures 2F, 2G, 2H, and 2I show that the error decreases from 100% to 0.42%. Albeit the input model and the source wavelet seem similar, they do not equal exactly. The evaluation of the parameters of the process noise (q) and the measurement noise (v) indicated a percentage of error below 1% when the ratio q/v < 27 achieved the lowest one errors when q(0)=σ the standard deviation of the trace. When v > q, the results are poor because the measurement noise is higher than the process noise. On the other hand, the minor impact of the 13 transition functions in the EKFD allows us to discard it as a determinant factor. Finally, using the trace as a guide function causes an error comparable to that obtained when not using a guide function, implying the guide function's unimportant role.
Results and Discussion
Henceforth, the parameters of the EKFD are a 40 Hz Sine damped function as the input model, length of 150 ms, q equals the standard deviation of the trace and v = q/27. One of deconvolution's main assumptions is the absence of noise in the trace. To assess it, the application of WLDI, PID, and EKFD to noisy-traces with the signal to noise ratio varying from 1 to 20 provided their respective errors. Figure 3 shows that the estimation errors for all methods decrease when the signal-to-noise ratio increases. As expected, the misfit or is high if the noise is comparable with the signal. PIF and EKF get the most negligible errors achieving values lower than 10% when S/N is over 3.0, while WLDI gets the worst over 10%. But in all situations, EKFD always gets the best performance when the trace contains noise. The next test evaluated the response of WLDI, PID, and EKFD when they did not meet the randomness assumption. The reflectivity test in Figure 4B indicated that it does not have a normal distribution, verifying its lack of randomness; hence, the trace autocorrelation is not at the same scale that the wavelet autocorrelation. Figure 4A shows the non-random reflectivity used, while Figures 4B, 4C, and 4D exhibit the reflectivities estimated by WLDI, PID, and EKFD, respectively.
Simultaneously, Figures 4F, 4G, 4H, and 4I contain the errors associated with each method. Comparing the WLD I deconvolution and the red box's trace used, the unfortunate result has an average error of 39.9%, and an evident low-frequency content. On the other hand, PID and EKFD achieve results with minor average errors of 2.18%, and 0.21%, indicating no random trace character. Finally, to evaluate the impact of a stationary wave assumption, we build a trace decreasing the frequencies of the Sinc wavelet in-depth, starting from 100 Hz up to 20 HZ. Figure 5 shows the predicted reflectivity when the non-stationary wavelet interacts with the well-log reflectivity. Figures 5B, 5C, and 5D show the results of applying WLDI, PID, and EKFD to this trace. In the same picture, Figures 5E, 5F, and 5G contain the errors associated with each method. Figure 5B shows how WLDI cannot extract the rightful reflectivity according to wavelet deepens and pointed out by Figure 5E, where the error increases in depth up to 24.8%. On the contrary, The PID and EKFD recover the reflectivity achieving similar results, Figure 5C and 5D. The corresponding low errors of 4.83% and 3.22% point out the reliability of these two methods. In conclusion, the tests found that WLDI is very sensitive to the stationarity wavelet and to the reflectivity randomness; while, PID and EKFD are insensible to those assumptions. EKFD gets the best results, and although it requires seven input parameters, only the input model and the q/v relation are relevant in the deconvolution and related to the trace.
In a subsequent analysis (Figure 6), through WLDI, PID, and EKFD estimated the reflectivity profile resulting from the convolution between the nonrandom well-log reflectivity noise-contaminated, and a slightly non-stationary wavelet. Figure 6A shows the reflectivity obtained by WLDI, and focusing on the red box indicates that it does not recover the amplitudes correctly. The misfit occurs throughout the profile ( Figure 6D), achieving values in parts of the pattern close to the real ones, with an average error of 10.2%. WLDI is the most used deconvolution in the petroleum industry, with an unreliable result considering that such an outcome is the input to the seismic inversion. Figure 6B contains the reflectivity estimated by the PID. Although it is hard to note considerable differences, Figure 6E shows noticeable discrepancies with the real one at first sight. The average error of 4.08 indicates the PID as a reliable deconvolution.
It is worth noting that the absence of low-frequency components in the trace, which favors the PID's performance. As known, low-frequency components hinder the separation of the signals in the Cepstrum. Figure 6C shows the reflectivity estimated by EKFD with the best performance achieved. It is supported by the correlation of 0.96 between the real and estimated reflectivity, with a mean error of 0.39% ( Figure 6F). Figure 7A shows part of a common-shot with 564 hydrophones before the deconvolution, with reflectors 1, 2 and 3 to analyze. In Figure 7B, the WLDI deconvolution with a 100 ms time window does not throw optimal results, destroying the lateral continuity of reflectors 1, 2 and 3. Figure 7C contains the common-shot gather after PID, as a result of its application the obtained image looks more focused, maintaining the lateral continuity of the reflectors. Compared with the image achieved by WLDI, the PID image contains higher frequencies in the data. Finally, the result of applying EKFD to the record, in Figure 7D, shows an image with similar coherence and quality provided by PID. The input model for EKFD was a 10 Hz Sinc, corresponding to the dominant frequency of the common-shot gather. The similarity in the quality of the images provided by PID and EKFD is because the marine registry contains a large bandwidth, avoiding the strong restriction of the PID. The zoom to an area of the record marked by the red box reinforces the previous conclusions concerning the three deconvolution methods considered in Figures 8A, 8B, 8C, and 8D. On the other hand, the images provided by PID and EKFD have high-frequency seismic events not generated by spectral whitening, representing registered seismic reflectors. Figure 9 shows the frequency spectra of the shot gathers before and after applying the deconvolutions. Figure 9A shows the amplitude spectrum of the gather without deconvolution. Figures 9B, 9C and 9D show the spectra after applying WLDI, PID and EKFD, with increased high frequency content. Spectra in Figures 9A and 9B have remnants of the shotgun wavelet, characterized by low frequency components of strong energy. Although WLDI achieves the widest frequency bandwidth, it also increases the power of unreliable components with frequency above 100 Hz. The PID and EKFD spectra look similar with a reliable increase of bandwidth between 20 to 120 Hz. The dominant frequencies in the Amplitude spectra of Figure 9 are consistent with those observed in Figure 8. | 2022-02-20T16:20:05.758Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "e78afb00b0ccf7f253764351dff16bd6aab94c1e",
"oa_license": "CCBY",
"oa_url": "https://revistas.uis.edu.co/index.php/revistaboletindegeologia/article/download/11624/654",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3910e68863013ab1a89ab6279354bfcac1b6b68c",
"s2fieldsofstudy": [
"Geology",
"Engineering"
],
"extfieldsofstudy": []
} |
3050456 | pes2o/s2orc | v3-fos-license | Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging
We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7Hz during ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and the estimation of the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. ©2015 Optical Society of America OCIS codes: (170.3880) Medical and biological imaging; (170.4460) Ophthalmic optics and devices; (170.6935) Tissue characterization. References and links 1. C. F. Burgoyne, J. C. Downs, A. J. Bellezza, J. K. Suh, and R. T. Hart, “The optic nerve head as a biomechanical structure: a new paradigm for understanding the role of IOP-related stress and strain in the pathophysiology of glaucomatous optic nerve head damage,” Prog. Retin. Eye Res. 24(1), 39–73 (2005). 2. I. A. Sigal and C. R. Ethier, “Biomechanics of the optic nerve head,” Exp. Eye Res. 88(4), 799–807 (2009). 3. I. A. Sigal, J. G. Flanagan, I. Tertinegg, and C. R. Ethier, “Modeling individual-specific human optic nerve head biomechanics. Part I: IOP-induced deformations and influence of geometry,” Biomech. Model. Mechanobiol. 8(2), 85–98 (2009). 4. J. Albon, P. P. Purslow, W. S. Karwatowski, and D. L. Easty, “Age related compliance of the lamina cribrosa in human eyes,” Br. J. Ophthalmol. 84(3), 318–323 (2000). 5. A. J. Bellezza, C. J. Rintalan, H. W. Thompson, J. C. Downs, R. T. Hart, and C. F. Burgoyne, “Deformation of the lamina cribrosa and anterior scleral canal wall in early experimental glaucoma,” Invest. Ophthalmol. Vis. Sci. 44(2), 623–637 (2003). 6. M. R. Lesk, A. S. Hafez, and D. Descovich, “Relationship between central corneal thickness and changes of optic nerve head topography and blood flow after intraocular pressure reduction in open-angle glaucoma and ocular hypertension,” Arch. Ophthalmol. 124(11), 1568–1572 (2006). 7. W. H. Morgan, B. C. Chauhan, D. Y. Yu, S. J. Cringle, V. A. Alder, and P. H. House, “Optic disc movement with variations in intraocular and cerebrospinal fluid pressure,” Invest. Ophthalmol. Vis. Sci. 43(10), 3236–3242 (2002). 8. J. Albon, P. P. Purslow, W. S. S. Karwatowski, and D. L. Easty, “Age related compliance of the lamina cribrosa in human eyes,” Br. J. Ophthalmol. 84(3), 318–323 (2000). 9. A. J. Bellezza, C. J. Rintalan, H. W. Thompson, J. C. Downs, R. T. Hart, and C. F. Burgoyne, “Deformation of the lamina cribrosa and anterior scleral canal wall in early experimental glaucoma,” Invest. Ophthalmol.
Vis. Sci. 44(2), 623–637 (2003). 10. M. R. Lesk, A. S. Hafez, and D. Descovich, “Relationship between central corneal thickness and changes of optic nerve head topography and blood flow after intraocular pressure reduction in open-angle glaucoma and ocular hypertension,” Arch. Ophthalmol. 124(11), 1568–1572 (2006). 11. M. R. Lesk, G. L. Spaeth, A. Azuara-Blanco, S. V. Araujo, L. J. Katz, A. K. Terebuh, R. P. Wilson, M. R. Moster, and C. M. Schmidt, “Reversal of optic disc cupping after glaucoma surgery analyzed with a scanning laser tomograph,” Ophthalmology 106(5), 1013–1018 (1999). 12. N. S. Levy and E. E. Crapps, “Displacement of optic nerve head in response to short-term intraocular pressure elevation in human eyes,” Arch. Ophthalmol. 102(5), 782–786 (1984). 13. W. H. Morgan, B. C. Chauhan, D. Y. Yu, S. J. Cringle, V. A. Alder, and P. H. House, “Optic disc movement with variations in intraocular and cerebrospinal fluid pressure,” Invest. Ophthalmol. Vis. Sci. 43(10), 3236–3242 (2002). 14. I. A. Sigal and C. R. Ethier, “Biomechanics of the optic nerve head,” Exp. Eye Res. 88(4), 799–807 (2009). 15. I. A. Sigal, J. G. Flanagan, I. Tertinegg, and C. R. Ethier, “Modeling individual-specific human optic nerve head biomechanics. Part I: IOP-induced deformations and influence of geometry,” Biomech. Model. Mechanobiol. 8(2), 85–98 (2009). 16. D. B. Yan, F. M. Coloma, A. Metheetrairut, G. E. Trope, J. G. Heathcote, and C. R. Ethier, “Deformation of the lamina cribrosa by elevated intraocular pressure,” Br. J. Ophthalmol. 78(8), 643–648 (1994). 17. R. C. Zeimer and Y. Ogura, “The Relation between Glaucomatous Damage and Optic Nerve Head Mechanical Compliance,” Arch. Ophthalmol. 107(8), 1232–1234 (1989). 18. I. A. Sigal, J. G. Flanagan, and C. R. Ethier, “Factors influencing optic nerve head biomechanics,” Invest. Ophthalmol. Vis. Sci. 46(11), 4189–4199 (2005). 19. A. I. Dastiridou, H. Ginis, M. Tsilimbaris, N. Karyotakis, E. Detorakis, C. Siganos, P. Cholevas, E. E. Tsironi, and I. G. Pallikaris, “Ocular rigidity, ocular pulse amplitude, and pulsatile ocular blood flow: the effect of axial length,” Invest. Ophthalmol. Vis. Sci. 54(3), 2087–2092 (2013). 20. E. Friedman, M. Ivry, E. Ebert, R. Glynn, E. Gragoudas, and J. Seddon, “Increased scleral rigidity and agerelated macular degeneration,” Ophthalmology 96(1), 104–108 (1989). 21. I. G. Pallikaris, G. D. Kymionis, H. S. Ginis, G. A. Kounis, E. Christodoulakis, and M. K. Tsilimbaris, “Ocular rigidity in patients with age-related macular degeneration,” Am. J. Ophthalmol. 141(4), 611–615 (2006). 22. J. Friedenwald, “Contribution to the theory and practice of tonometry,” Am. J. Ophthalmol. 20(10), 985–1024
Introduction
The development of non-invasive methods to estimate ocular rigidity (OR) will have profound implications for research into ocular disease. Importantly, glaucoma remains a major cause of blindness due to formidable challenges in both its diagnosis and treatment, and its pathogenesis is poorly understood. Reducing intraocular pressure (IOP) is the most widely used clinical method for halting the progression of open angle glaucoma (OAG). However, the link between IOP and development of OAG is not straightforward [1][2][3][4][5][6][7]. Considerable recent evidence from experimental studies in primates and from mathematical modeling suggests that ocular biomechanics may play a major role in glaucoma pathogenesis [8][9][10][11][12][13][14][15][16][17]. According to finite element modeling, major determinants of optic nerve head stress and strain leading to glaucoma damage include IOP, but also scleral elasticity as well as other biomechanical factors. In fact, scleral elasticity is considered to be the most important determinant of optic nerve head stress and strain, more important than IOP [18] and it is clear that additional factors, such as ocular biomechanics, must play an important role.
Additionally, several investigations into age-related macular degeneration (AMD) have led to both mechanical and ischemic theories of pathophysiology related to OR, particularly in neovascular AMD [20,21], but it remains unknown as to whether changing rigidity plays a role in the pathophysiology of this disease. Reduced scleral rigidity is also an important feature of pathological myopia [19].
The rigidity of the eye can be derived from Friedenwald's empirical function, which relates the change in IOP produced by a modification of the ocular volume V according to

d(IOP)/dV = k · IOP,  i.e.,  ln(IOP₂/IOP₁) = k (V₂ − V₁),   (1)

where k is the OR [22], accounting for the combined mechanical properties of the retina, choroid and sclera. For a given volume change, more rigid eyes will have a correspondingly larger increase in IOP, and vice versa for less rigid eyes. Since the sclera is responsible for the majority of the stiffness of the ocular globe, Eq. (1) can also be derived through a simplification of the collagen-like stress-strain behaviour exhibited by the sclera and by considering the eye to be a thin-shelled sphere [23,24]. This formula allows the overall ocular rigidity to be computed from combined measurements of IOP and ocular volume change. The ocular volume fluctuates because of the pulsatile vascular filling. Since 90% of the blood flow into the eye is through the choroid [25], we propose to model the fluctuations of ocular volume by estimating the total choroidal volume change over time.
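Assuming the logarithmic (Friedenwald) form of Eq. (1), which is consistent with the 1/µL units of the rigidity coefficient reported later, OR can be computed from the DCT pressures and the pulsatile volume change with a few lines of code. The numbers below are only of the same order as the cohort averages and are not measurements.

```python
import numpy as np

def ocular_rigidity(iop_diastolic_mmHg, opa_mmHg, dV_uL):
    """Ocular rigidity k (1/microliter) from Friedenwald's relation ln(P2/P1) = k * (V2 - V1):
    the diastolic-to-systolic pressure rise (OPA) is produced by the pulsatile choroidal
    volume change dV."""
    p1 = iop_diastolic_mmHg
    p2 = iop_diastolic_mmHg + opa_mmHg
    return np.log(p2 / p1) / dV_uL

# Illustrative values only (assumed, not patient data).
k = ocular_rigidity(iop_diastolic_mmHg=15.0, opa_mmHg=2.5, dV_uL=7.8)   # ~0.02 1/uL
```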
Although to date investigations into the elastic properties of the eye have produced values for this rigidity constant, they have either been on post-mortem testing [4,12,16], they make use of invasive cannulation in order to control the volume change [19], estimate indirectly the ocular volume change [26,27] or use measurements that intrinsically depend on ocular rigidity to estimate OR [27,28]. For clinical applications, ocular volume change (and thus OR) needs to be measured non-invasively, but no technology is available to measure them in a clinical environment.
Recent advances in optical coherence tomography (OCT), specifically with Enhanced Depth Imaging (EDI), have improved the signal to noise ratio in deeper tissues to the point that the choroidal-scleral interface (CSI) can now be distinguished. Segmentation methods exist to delineate this interface from high quality images [29][30][31][32], but the fast acquisition required to assess the changes of choroidal volume with pulsatile blood flow limits the imaging signal to noise ratio (SNR), thereby complicating the segmentation.
In this paper we propose a novel method for automated choroid segmentation in sequential FD-OCT images that is relatively robust to noise and low image quality, and which allows us to estimate the volumetric changes of the eye due to choroidal pulsations. These measurements, in combination with intraocular pressure measurements and biometry, allow the first non-invasive, direct estimation of OR.
Method
In order to track the pulsatile volume changes due to choroidal filling, B-scans have to be acquired faster than the heart rate, which renders the CSI nearly indistinguishable from noise. Our method computes the area between the posterior part of the RPE layer and the CSI to extrapolate choroidal volume changes based on a simple model detailed below. We have found that previous choroidal segmentation methods are not robust enough for our application; hence a new approach is required.
We combined a robust contour-detection method with a graph search based on a novel weighting scheme to develop a segmentation algorithm that boosts the reliability of CSI delineation, as described below.
Data collection
Images of the choroid were acquired using a FD-OCT (Spectralis OCT Plus, Heidelberg Engineering, Germany) system whose software was modified to provide time series where each frame results from the average of an adjustable number of B-scans. The acquisition was set to high-speed mode (496 pixels per A-scan x 768 A-scans images), enhanced depth imaging (EDI), using a class 1 laser at 870nm, and 30° wide (8-9 mm on the retina, optimized for each subject). With these settings, B-scans can be acquired at a maximum of 40Hz, at 3.9 μm axial and 11 μm of lateral resolution, and at 400 images the memory buffer is filled. The number of B-scans per frame is determined by the required time resolution of the series but it also impacts the quality of individual frames. Averaging 5 scans per image (8 Hz sampling) was an acceptable trade-off. Measurements were centred on the fovea and the azimuthal angle was chosen to maximize the visibility and continuity of the CSI for each subject. While a full 400-frame movie was acquired the subject's heart rate was measured with a finger oximeter.
The system is equipped with an eye-tracker to keep the scanning beam in place, but this feature also introduces pauses into the acquisition, producing fluctuations both in the number of averaged B-scans per frame and in the acquisition rate. Since the resulting time series are unequally spaced, the image's time-stamp was used when computing the frequency spectrum. Only frames with Spectralis' quality parameter above 20 were kept in the time series.
Immediately after imaging, the intraocular pressure is measured with a Pascal Dynamic Contour Tonometer, which is not dependant on corneal rigidity [33]. The average of three measurements having a Quality Index (Ziemer proprietary algorithm) not below 3 is computed. This provides two values, the intraocular pressure (IOP) at diastole and the ocular pulse amplitude (OPA), which represents the difference between diastolic and systolic pressure.
Preprocessing
Depending on eye size, a variable portion of the retina may be visible in each movie. Since we are only interested in estimating the volumetric changes of the eye due to choroidal filling (extrapolated from 2D images) we discard A-scans near the optic disk, where the choroid is absent ( Fig. 1(A)). Every image in the time-series is then aligned to the first one, using Matlab (The MathWorks, Natick, MA) imregister function with mean squared error metric and one- plus-one evolutionary optimizer. The registration is limited to rigid transformations (i.e. no shear or dilation) to prevent biasing the measurements with artificial image distortions. Each frame is then analyzed independently, provided the Spectralis' metric of Quality is not lower than 20, and at least two raw scans are averaged to create the frame used for further analysis.
To enhance the likelihood of correctly delineating layers of interest, despite variations in individual retinal shapes, the algorithm identifies several retina layers sequentially, in an anterior to posterior direction. We first normalize each A-scan to its maximum intensity, remove noise with a 5x5 pixels Wiener filter, and apply a Canny edge detector with thresholds of 0.01 and 0.3 The Gaussian filter size of the edge detector (σ = 4) is chosen large enough to ensure the top layer, the retina-vitreous interface (RVI), is continuous. This layer is segmented by joining the topmost 8-connected edge segments that are wider than 50 A-scans with cubic splines (Fig. 1(B), green line).
The next two segmented layers, the anterior and posterior interfaces of the retinal pigment epithelium (RPE), have been segmented using a previously published strategy [31], where a few modifications have been added to improve robustness. We profit from the positive intensity gradient that separates the neuroretina and the RPE, and the negative gradient between RPE/Bruchs membrane (BM) and the choroid to delimit these two interfaces more robustly by finding them together. After computing the gradient of the raw image and smoothing with a Gaussian kernel (σ = 3pix horizontally and σ = 0.5pix vertically), we search for its positive maximum in each A-scan between 39µm and 780µm below the RVI. The resulting points are connected with a local 2nd degree polynomial least squares weighted fit, to render the RPE ( Fig. 1(B), red line). Analogously, the negative gradient minima found between 39µm and 117µm below the anterior RPE, are connected to give the posterior RPE ( Fig. 1
(B), blue line)
Since we use a graph search method to segment the CSI, each image is 'flattened' with respect to the RPE in order to eliminate erroneous paths introduced by the curvature or tilt of the image. This is done by shifting and zero-padding each A-scan until the pixels that describe the posterior RPE are vertically aligned [31] (Fig. 1(C)).
CSI segmentation
We developed a segmentation method for the CSI to match the particular requirements of our application. In EDI-OCT scans, the CSI is a remarkably heterogeneous boundary consisting of fragments of blood vessel cross-sections, which cannot be segmented with usual edge detection approaches. Graph search edge detection is especially well suited for this problem as has already been shown [31]. Briefly, this approach associates pixels that loosely describe the target interface to nodes in a graph, and minimizes the path across the nodes based on weights assigned to each connection. The reliability of the method depends strongly on the choice of nodes and the weights. Previous implementations of graph search to segment the CSI do not suit the present application where subtle changes are essential for accurate measurements of ocular rigidity. We propose a novel approach that combines graph search with a robust contour detection method, which additionally profits from information gathered in time to boost the reliability of the segmentation.
Node locations
In earlier work, graph nodes were found using variations of image intensity along each A-scan [31]. Tian et al. looked for local maxima and minima of intensity, and used their valley pixels (the local minima) as nodes. Our implementation of this approach is unsuitable for this work for two reasons. Firstly, with this approach local minima are placed in the center of visible blood vessels rather than their bottom border, where the real CSI lies. Secondly, due to high noise, node detection is unreliable. Our approach provides a much improved weight function that favours nodes located at regions of higher local contrast and also profits from time-series information.
We find nodes using the smoothed first and second gradient of image intensity along each A-scan. The input in both cases is the A-scans of a preprocessed image, which has been smoothed with a span of 10 pixels. Pixels in which the first derivative exceeds a positive threshold of 0.7 identify the dark-to-light transition characteristic of the deepest interface of the choroid blood vessels. Additionally, those pixels whose second derivative absolute values are smaller than a near-zero threshold (10 −16 ) mark the inflection point of intensity on the lower extremity of the transition. A binary image meeting both conditions undergoes a sequence of morphological operations that reduce the detected regions to isolated pixels. First, regions are cleaned to eliminate single-pixel regions. Then, regions are filled to eliminate holes, prior to being skeletonized in order to retain only the central pixels of each connected region. Next, all pixels in every other column are eliminated to reduce the number of nodes the graph search must include. Images are shrunk to ensure any remaining regions are single pixels. Then extended-minima transformation is computed for the original preprocessed intensity image with a threshold of 10 pixels, to find the rough central shadow of large blood vessels, and any nodes that intersect these shadows are eliminated. Finally, those pixels at a depth greater than 150 pixels (585 µm) from the BM are also discarded (green dashed line in Fig. 2 E), since the CSI is unlikely to go this deep. The remaining pixels pinpoint the nodes fed to the graph search (yellow pixels in Fig. 2 E).
Graph search
The graph is constructed by connecting each node to all other nodes in the neighbourhood delimited by C_max columns to the right and R_max rows above and below it. C_max must be sufficiently large to ensure the resulting graph is connected, even across dim regions with sparse nodes [31]. Connections between each pair of nodes a and b are assigned weights built from the terms described below. The first term, w_Euclid, is the squared Euclidean distance (Δx² + Δy²), where Δx and Δy are the horizontal and vertical distances, respectively, between a and b. This term encourages the algorithm to delineate the path by connecting closely spaced nodes. To prevent abrupt vertical fluctuations, Tian et al. incorporate a sigmoid-weighted penalty term w_Vert [31], in which H is the Heaviside function, w_v is a constant parameter controlling the relative weight of the term, and α governs the growth rate of the sigmoid. This term adds an extra penalty to connections whose vertical extent exceeds the threshold T_v, which are unlikely in a real interface.
Indeed, the CSI is most likely smooth, but the minimal distance condition does not guarantee a smooth interface. In fact, the shortest path between any two nodes is a straight line that avoids intermediate nodes, even if they are close together. Actually, the hard thresholds C max and R max prevent the graph search from just tracing a straight line between start and end nodes, because they limit the maximum length of edge segments. Unfortunately, these parameters cannot be reduced arbitrarily since they have to be long enough to overcome gaps produced by missing nodes. Therefore, in order to improve the edge smoothness while preventing gaps, we added a horizontal weight term which along with w Vert provides soft thresholds (T V and T H ) that favor paths made of short segments.
Due to the inhomogeneous nature of the CSI, combined with the low SNR resulting from high-speed acquisition, the node locations retrieved with the method above are not reliable enough for the graph search to delineate the right interface. To reinforce the reliability of the segmentation we compute a boundary probability which we use to compute a connection weight w Affin that favors paths through the most likely boundary.
An excellent choice for this is a multi-scale, multi-orientation approach such as the contour-detection algorithm by Arbelaez et al. [34]. This method computes the posterior likelihood X² that each pixel (i,j) in the image belongs to a boundary of scale σ at orientation θ. The computation uses the histograms g(I) and h(I) of the pixel intensities in the two halves of a disk of radius σ, centered on (i,j), and divided along its diameter at an angle θ (Fig. 2(B) and 2(D)), according to the formula

X²_{σ,θ}(i,j) = (1/2) Σ_I [g(I) − h(I)]² / [g(I) + h(I)].

The sum above runs over all intensity bins of the histograms. In contrast to the original method, we computed the boundary probability P_b by combining the X²_{σ,θ} maps over all scales and orientations, where a constant K ensures the normalization of P_b. As an example, Fig. 2(C) shows the color-coded P_b corresponding to Fig. 2(A). Even with EDI, the OCT signal from deep sections of the eye is often weak and inhomogeneous due to the presence of attenuating structures above them, including blood vessels. In order to improve the reliability of P_b, we used the adaptive image enhancement proposed by Mari et al. to increase the contrast in the region extending from 5 pixels below the RPE to the bottom of the image (Fig. 2(B)) [29,35]. The weighting of the graph search, the node locations, and the connectivity matrix are fundamentally different from previous approaches that used such compensation [29].
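The half-disc χ² computation can be sketched as follows. This is an illustrative re-implementation rather than the code used in the paper; the number of histogram bins is an assumption, and the pixel (i, j) is assumed to lie far enough from the image borders.

```python
import numpy as np

def half_disc_chi2(img, i, j, radius, theta, bins=32):
    """Chi-squared distance between intensity histograms of the two halves of a disc
    of radius `radius` centred on pixel (i, j) and split along orientation `theta`."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disc = x**2 + y**2 <= radius**2
    upper = disc & (x * np.sin(theta) - y * np.cos(theta) > 0)   # one half of the disc
    lower = disc & ~upper                                        # the other half
    patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
    edges = np.linspace(patch.min(), patch.max() + 1e-9, bins + 1)
    g, _ = np.histogram(patch[upper], bins=edges, density=True)
    h, _ = np.histogram(patch[lower], bins=edges, density=True)
    return 0.5 * np.sum((g - h) ** 2 / (g + h + 1e-12))
```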
Since the computation of P b is time intensive, we restricted it to a region no deeper than 150 pixels below the RPE, as depicted by the green dashed line in (Fig. 2(E)), where the CSI is most likely found.
Finally, the term w Affin is computed as where A is the line integral of P b along the segment that connects a and b, and w A is the relative weight of the term. w Affin penalizes connections between nodes with low probability of belonging to an edge. This term is of crucial importance for the low SNR conditions of our application, where the reliability of nodes is low. Fig. 2. A) The uncompensated sub-RPE region. B) The same region, compensated and contrast enhanced, to which the oriented gradient algorithm will be applied. Overlaid is an example disk of radius 30 pixels. The red and green regions correspond to their respectively coloured histograms (D). C) The oriented gradient image, composed of the combination of the X 2 images of different scales and orientations, as described. The heat map shows pixels which are very likely to lie on a boundary. Even weak boundaries can be detected while excluding noisy regions with this method. E) Overlay of the oriented gradient image (heatmap), node locations (yellow x's), and the CSI found using these two inputs to the graph search (redline), onto the flattened B-scan. The green dashed line shows the limit of 585 µm below the Bruchs beyond which nodes are discarded. F) The original B-scan overlaid with the RPE (blue), CSI (yellow) and the mean RPE-CSI distance or CT (red dotted line). This distance CT is what is tracked from frame to frame.
For the start and end nodes, two 'virtual' nodes are added before the first and after the last columns, and are connected to the nodes inside the image as per the restrictions on Cmax. The graph search then uses Dijkstra's algorithm [36] to find the minimum-weight path between these virtual nodes. The resulting path is finally interpolated and smoothed to render the CSI boundary.
Due to high noise, delineation errors may occur in some frames, yielding unrealistic CSI profiles. Assuming that the CSI should not undergo a significant change in shape during the cardiac cycle, we use the contours computed on all frames to correct these outliers. First, we compute the mean CSI curve over all frames and measure the correlation of each individual frame's CSI with the mean curve. We then recompute the frames whose CSI correlation falls below the mean correlation value, those in which the total area (in pixels) enclosed between the posterior RPE and the CSI differs by more than 3000 pixels from the median, and those in which the depth of the first or last pixel of the CSI deviates by more than 15 pixels from its respective mean. For this second computation, an additional weight term penalizing departure from the mean CSI is included, in which δy is the distance in pixels between node b and the height of the mean CSI in the corresponding column, and ε is the allowable deviation from the mean CSI. Additionally, the start and end nodes are assigned the coordinates of the mean CSI at the image edges, and their weights can now be computed like those of the other nodes. Only those frames in which the CSI correlation to the mean improved are updated, and frames that still do not meet the above criteria are excluded from further analysis.
Computation of ocular rigidity
Once the RPE and CSI delineations are determined, the mean RPE-CSI distance across A-scans is computed in each frame, giving a time series of sub-macular choroidal thickness (CT), shown as a black line in Fig. 3(A) (bottom). As expected, the frequency-spectrum analysis of most time series revealed high-frequency components coincident with the first and second harmonics of the heart rate frequency F_H (see Fig. 3(B), black line), which we measured independently from the oximeter signal (Fig. 3(A) and 3(B), top). This correlation shows that the CT fluctuations in time are, at least in part, due to the pulsatile blood flow. For spectral analysis we used the Lomb-Scargle periodogram [37] instead of the usual Fourier transform because the CT time series are unequally sampled; this is due to images with a Spectralis quality metric below 20 being omitted from the time series and to pauses in the acquisition caused by the eye tracker. In order to extract the CT fluctuations associated mainly with the heart rate and to discard respiration, head movement, saccades, and segmentation noise, we filter out frequency components below ½ F_H, above 3 F_H, and those with amplitudes below 10% of the maximum peak within this range (Fig. 3(B), red line). The inverse Fourier transform was used to retrieve the filtered signal (Fig. 3(A), red line).
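The band extraction described above can be approximated as in the sketch below: a Lomb-Scargle periodogram is evaluated on the unequally sampled CT series between ½ F_H and 3 F_H, weak components are discarded, and the retained band is reconstructed by least-squares fitting of sinusoids, which here stands in for the inverse transform. The threshold and frequency-grid choices are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import lombscargle

def cardiac_band_component(t, ct, f_heart, n_freq=200):
    """Keep spectral content of an unequally sampled CT series between 0.5*f_heart and
    3*f_heart (Hz) and reconstruct that band by least-squares fitting of sines/cosines."""
    t = np.asarray(t, float)
    ct0 = np.asarray(ct, float) - np.mean(ct)
    freqs = np.linspace(0.5 * f_heart, 3.0 * f_heart, n_freq)          # Hz
    power = lombscargle(t, ct0, 2 * np.pi * freqs, normalize=True)     # angular frequencies
    keep = freqs[power > 0.1 * power.max()]                            # discard weak components
    if keep.size == 0:
        keep = freqs[[power.argmax()]]
    # Design matrix of sines/cosines at the retained frequencies.
    A = np.hstack([np.column_stack((np.cos(2 * np.pi * f * t),
                                    np.sin(2 * np.pi * f * t))) for f in keep])
    coeffs, *_ = np.linalg.lstsq(A, ct0, rcond=None)
    return A @ coeffs                                                  # band-limited CT fluctuation
```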
The pulsatile fluctuation of CT is obtained using a windowed peak-to-valley algorithm, which ignores peaks and valleys that are spaced less than 1/6 of the heart period (T_H = 1/F_H) apart in time, or that are smaller than 10% of the maximum peak. All sequential peak-to-valley gaps greater than the vertical resolution of the Spectralis OCT (4 µm for the settings used) are averaged to yield the final ΔCT.
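A minimal sketch of this peak-to-valley step is given below, assuming for illustration that the filtered CT series has been resampled to a uniform rate (the real series is unevenly sampled); scipy.signal.find_peaks is used for extrema detection and all names are illustrative.

import numpy as np
from scipy.signal import find_peaks

def pulsatile_delta_ct(ct_filtered, fs, f_heart, vert_res=4.0, rel_floor=0.10):
    # ct_filtered: band-filtered CT series (µm), uniformly sampled at fs Hz.
    # Peaks/valleys closer than T_H/6 are merged; gaps below the 4 µm vertical
    # resolution, or below 10% of the maximum, are discarded, as in the text.
    min_gap = max(1, int((1.0 / f_heart) / 6.0 * fs))    # samples per T_H/6
    peaks, _ = find_peaks(ct_filtered, distance=min_gap)
    valleys, _ = find_peaks(-ct_filtered, distance=min_gap)

    extrema = np.sort(np.concatenate([peaks, valleys]))
    gaps = np.abs(np.diff(ct_filtered[extrema]))          # sequential peak-to-valley gaps
    gaps = gaps[gaps >= vert_res]
    if gaps.size:
        gaps = gaps[gaps >= rel_floor * gaps.max()]

    return gaps.mean() if gaps.size else 0.0              # final ΔCT in µm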
The ocular volume change ΔV = V − V0 is derived from ΔCT using a first-order approximation of a spherical eye model, in which the choroid is modeled as the volume enclosed between two spheres shifted by ΔCT.
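For illustration only, a concentric-shell first-order estimate is sketched below; note that this differs from the shifted-sphere geometry described above, ignores the fact that ΔCT is measured over the macula only, and therefore will not reproduce the reported ΔV values. Taking the eye radius as half an assumed axial length is also our own assumption.

import numpy as np

def delta_volume_ul(delta_ct_um, axial_length_mm=23.5):
    # Concentric-shell approximation: ΔV ≈ 4·π·R²·ΔCT, with R = AL/2.
    # 1 mm^3 equals 1 µL, so the result is returned directly in µL.
    r_mm = axial_length_mm / 2.0
    delta_ct_mm = delta_ct_um * 1e-3
    return 4.0 * np.pi * r_mm**2 * delta_ct_mm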
Subjects
In order to test our method, we enrolled 45 subjects from the Ophthalmic Clinic. Since the aim of the study was to test the ability of our approach to measure OR, the sole eligibility criterion was that the ocular media had to be clear enough to allow the choroid-sclera interface to be distinguished in the OCT images. The study protocol adhered to the standards outlined in the Declaration of Helsinki. All participants were informed of the nature and objective of the study, the procedures to be performed and the associated risks, and gave their informed consent. The local research ethics committee approved the protocol.
Results
This work introduces a novel method for assessing ocular rigidity non-invasively from OCT time series and standard biometric measurements. One important component of the method is an algorithm capable of segmenting the CSI from low-SNR images. In the following subsections we describe experiments that demonstrate the improvements provided by this novel CSI segmentation, as well as OR measurements on our cohort of subjects that validate the methodology.
Evaluation of CSI Segmentation
In contrast to some segmentation problems, there is no gold standard for the choroid contour to compare with; therefore we need an alternative approach to validate the new algorithm. Ophthalmologists and other eye specialists typically assess OCT images depicting the choroid qualitatively, and hence they are best suited to evaluate the quality of the CSI segmentation. We presented a set of 25 OCT images to 5 independent specialists, who used a tablet equipped with a stylus to delineate the choroidal-scleral interface on top of the OCT images. We retrieved the manual contours and compared them with the automated traces yielded by our method and by Tian's [31] (for example, see Fig. 4(A) and 4(B)). For each image, we calculated the average manually segmented CSI from the 5 independent traces and compared the performance of each specialist and of both automated methods against this mean trace. Histograms of deviation from the mean trace for every A-scan of all images are shown as violin plots in Fig. 4(C). These histograms illustrate the improved accuracy of our method, which is virtually indistinguishable from the specialists'.
The image shown in Fig. 4(A) illustrates one of the main problems of Tian's method mentioned above. The nodes are mostly located in the middle of the vessels, rather than on their bottom edge, which causes the overall offset observed in Fig. 4(C). Furthermore, noise can crowd the image with intensity minima, which causes a purely Euclidean shortest path to lie below the CSI (see Fig. 4(B)).
Measurements of ocular rigidity
In order to test our approach, we measured the ocular rigidity in a group of subjects, as described above. The mean CT was 263.8 (SD = 78.4) µm. The mean magnitude of thickness change at the macula, ΔCT, was 16.7 (SD = 10.9) µm. The pulse volume ΔV was 7.8 (SD = 4.9) µL, and this was used to estimate a pulsatile ocular blood flow (POBF) of 595.6 (SD = 495.1) µL/min by multiplying by the heart rate (HR) measured while the OCT images were acquired. Finally, the mean OR constant in the tested set was 0.028 (SD = 0.022) 1/µL.
A positive correlation was observed between OR and OPA (Fig. 5(A)). Although small, this correlation is consistent with the reasoning that larger pressure pulses are expected for a more rigid scleral shell. More importantly, we observed a statistically significant negative correlation between OR and the axial length (AL) (see Fig. 5(A)), which agrees with a recent study that used cannulation to artificially modify the eye volume [19]. Finally, four subjects were measured repeatedly to assess the reproducibility of the method. The results plotted in Fig. 5(B) show that the OR assessment is reproducible across the whole range of rigidity values observed in this study. In fact, these measurements yield an intraclass correlation coefficient of 0.96, with a lower confidence bound (at α = 0.05) of 0.63. Altogether, these results validate our method for measuring ocular rigidity.
Discussion
The Friedenwald equation is the currently accepted conceptual framework for investigating the pressure-volume relationship in the eye. It synthesizes the pressure-volume curve in a single value, the ocular rigidity constant, which simplifies the study of relationships to other variables [19,26,27,38]. It is important, however, to ensure that the quantities involved in this formula represent the real physical changes occurring in the eye. Given that the primary physiological cause of pressure fluctuations in the eye is the pulsatile choroidal volume change, the method we present offers the most representative determination of ocular volume change through non-invasive direct quantification, rather than through indirect variables such as fundus pulse amplitude (FPA) [27] or laser Doppler flowmetry of pulsatile choroidal blood flow [11]. As opposed to FPA, our method directly measures the expansion of the choroid produced by blood inflow.
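For orientation, the sketch below evaluates one common exponential form of Friedenwald's relation, P = P0·exp(K·ΔV), with the kind of quantities this method provides. Whether the authors use exactly this form, the natural or base-10 logarithm, or the diastolic versus mean IOP is not stated in this section, so the snippet is an assumption for illustration only.

import math

def ocular_rigidity(iop_mmHg, opa_mmHg, delta_v_ul):
    # Assumptions: the tonometer reading is taken as the diastolic pressure,
    # systolic pressure is IOP + OPA, and the natural logarithm is used.
    # Friedenwald's original coefficient used log10, which scales K by ~0.434.
    p_dia = iop_mmHg
    p_sys = iop_mmHg + opa_mmHg
    return math.log(p_sys / p_dia) / delta_v_ul     # units of 1/µL

# Example with values of the order reported in the Results:
# ocular_rigidity(15.0, 3.0, 7.8) is approximately 0.023 1/µL.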
The method used to calculate ocular volume change from the change of choroidal thickness at the macula is a first-order approximation. The fluctuation ΔCT is found in a relatively small (~9 mm) section at the fundus, which amounts to between 15 and 20% of the average circumferential length of the choroid of the eyes included in this study, but accounts for a greater percentage of total choroidal blood flow since the macular region has the greatest perfusion. Choroidal thickness has been shown to vary not only in the temporal-nasal direction [39], but more generally in all directions [40]. However, the mean CT found using our automated segmentation (263.8 µm) agrees closely with other studies centered on the fovea that used manual CSI segmentation [39,40]. Additionally, the pulse volume ΔV and POBF agree with estimated values obtained both using commercial devices that assume a given ocular rigidity [41] and using the slope of pressure-volume curves and OPA [19]. Together these results provide good evidence not only that the choroid is properly segmented, but also that the dynamic change in volume is computed correctly. Importantly, as opposed to some other approaches to measuring choroidal blood flow, the results obtained with our method based on OCT segmentation are neither sensitive to nor biased by axial head movement.
Alternative equations that refine the volume estimation beyond a first-order approximation in ΔCT can be used, but the final OR results would only differ by a common factor, and comparisons among populations or the study of trends would remain legitimate. More noteworthy is that this first-order approximation leads to OR values that are not only of the right order of magnitude, but also in the same range as values reported in earlier in vivo studies [19,21,24,27]. A future refinement of the choroid volume estimation may be to use wide-angle OCT to collect data from a broader area of the fundus, or even volume reconstructions. Nevertheless, such volumes must be acquired significantly faster than the heart rate, which is not possible using technology currently in widespread clinical use. Of note, the amount of time and processing required to analyze time series of volume reconstructions would increase by approximately two orders of magnitude.
From in vivo studies, Dastiridou et al. estimated ocular rigidity by cannulating the anterior chamber of patients undergoing cataract surgery and recording the pressure change for a known infused volume [19]. Their results suggest a negative correlation between axial length and rigidity, which we also observe in our results, providing further evidence for the validity of our method.
Conclusion
We have presented a novel approach to measure choroidal blood flow and ocular rigidity. To the best of our knowledge, this is the first non-invasive method that allows calculation of the true OR parameter as defined by Friedenwald, as it is based on directly quantifying ocular volume changes rather than estimating them from FPA or a Doppler flowmetry signal. The method relies on measuring IOP and OPA using dynamic contour tonometry, as this technology enables the most precise estimation. We strongly believe the combination of deep-penetration dynamic OCT imaging and the powerful automated image segmentation we present is seminal to further understanding of key biomechanical determinants of ocular disease, and will become clinically invaluable as these measuring devices become more accurate.
"year": 2015,
"sha1": "ac341d40847465ec48331497b0f4d73d41d4eb88",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.6.001694",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c8696e644f7b63d636ea899151dda647c4ddbda9",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Long-term usage of a commercial mHealth app: A "multiple-lives" perspective
Background Emerging evidence suggests that individuals use mHealth apps in multiple disjointed ways in the real world: individuals, for example, may engage, take breaks, and re-engage with these apps. To our knowledge, very few studies have adopted this 'multiple-lives' perspective to analyze long-term usage of a physical activity (PA) app. This study aimed to examine the duration of use, as well as the frequency, length, and timing of streaks (uninterrupted periods of use) and breaks (uninterrupted periods of non-use) within a popular commercial PA app called Carrot Rewards over 12 months. We also examined sociodemographic correlates of usage. Method This retrospective observational study analyzed data from 41,207 Carrot Rewards users participating in the "Steps" walking program from June/July 2016 to June/July 2017. We measured four usage indicators: duration of use, frequency and length of streaks and breaks, time to first break, and time to resume second streak. We also extracted information regarding participants' age, gender, province, and proxy indicators of socioeconomic status derived from census data. We used descriptive statistics to summarize usage patterns and Kaplan-Meier curves to illustrate the time to first break and time to resume second streak. We used linear regressions and Cox Proportional Hazard regression models to examine sociodemographic correlates of usage. Results Over 60% of the participants used Carrot Rewards for ≥6 months and 29% used it for 12 months (mean = 32.59 ± 18.435 weeks). The frequency of streaks and breaks ranged from 1 to 9 (mean = 1.61 ± 1.04 times). The mean streak and break length were 20.22 ± 18.26 and 16.14 ± 15.74 weeks, respectively. The median time to first break was 18 weeks across gender groups and provinces; the median time for participants to resume the second streak was between 12 and 32 weeks. Being female, older, and living in a community with greater post-secondary education levels were associated with increased usage. Conclusion This study provides empirical evidence that long-term mHealth app usage is possible. In this context, it was common for users to take breaks and re-engage with Carrot Rewards. When designing and evaluating PA apps, therefore, interventionists should consider the 'multiple-lives' perspective described here, as well as the impact of gender and age.
Introduction
Participation in regular physical activity (PA) reduces the risk of noncommunicable diseases by improving muscular and cardiorespiratory fitness, functional health, and mental health (1,2). To obtain these health benefits, it is recommended that adults (aged 18-64 years) participate in at least 150 min/week of moderate to vigorous PA (3). Population-based survey data suggests that over a quarter of adults do not meet PA recommendations globally (4). Finding effective and low-cost strategies to increase PA at a population level remains a public health priority (4,5).
The proliferation of smartphones, including the software applications (apps) that run on these devices, offers a promising opportunity for promoting PA at a population level and at a low cost. As of 2021, the global number of smartphone subscriptions (8.6 billion) exceeded the number of people on the planet (7.8 billion) (6). There are more than 3.4 million apps available for download on the Google Play store and 2.2 million on the Apple App Store (7).
Nowadays, people are accustomed to using diverse apps to facilitate their daily lives and support engagement in healthy behaviors such as PA. The popularity of health and fitness apps has been growing in the past few years (8). As of the first quarter of 2021, >50,000 Health apps are available in the Google Play store (9). Data from survey studies (10)(11)(12)(13) report that at least 40% of smartphone users have downloaded health and fitness apps-monitoring daily PA is the top reason for using health apps (10,12,13).
The number of app-based PA interventions has grown considerably in the past decade. These interventions demonstrate small and short-term improvements in PA outcomes (e.g., 750 steps/day over 12 weeks) (14)(15)(16)(17)(18)(19)(20). The field generally agrees that the modest and largely unsustained intervention effects may in part be due to insufficient app usage (21,22). Usage refers to the actual use of the apps (22), a behavioral aspect of engagement (23). It is commonly measured based on frequency (e.g., number of logins), intensity (e.g., number of app features used), duration (e.g., number of days between first and last login) and type of use (e.g., reading a post or taking quizzes) (22). To improve the sustainability of app-based PA interventions, researchers and app designers must understand individuals' usage patterns and identify how best to engage participants.
Existing studies demonstrate that sustaining usage of app-based PA interventions has been challenging (21,24,25). A recent review (26) found that 43% of users dropped out of app-based health interventions. Other studies (25)(26)(27)(28)(29)(30) also frequently observe that users abandon PA apps after a few weeks or months. However, this body of literature has two notable gaps. First, the evidence is generated primarily in controlled settings with short-term follow-up (20, 28). That means long-term usage of app-based PA interventions in the real world is not yet well understood. Second, most studies that examined PA app usage have adopted a "single lifetime" perspective (31-33), which assumes users are unlikely to return once they have been absent for a defined period (e.g., 2 weeks). Therefore, previous studies focused on measuring usage at a single time-point, such as identifying the timing when users have lost interest in the app and stopped using it (i.e., non-usage attrition) (34).
Notably, researchers in the field of eHealth and wearable technology have proposed a "multiple-lives" perspective (31)(32)(33). Emerging evidence suggests that individuals use wearable technologies or PA tracking apps in multiple disjointed intervals in the real world (31)(32)(33). This means that individuals may engage with an app in streaks [i.e., an uninterrupted series of use days (33)], take breaks [i.e., an uninterrupted series of non-use days (33)], and then re-engage. To our knowledge, only one study (35) has adopted this "multiple-lives" perspective to analyze usage patterns in an incentive-based PA app. Lim et al. (35) examined app usage data from 140,000 individuals who participated in Singapore's National Step Challenge over 7 months. The study found that >80% of the participants took more than one break (range: 2.8-10.6), indicating that it is common for users to take breaks and re-engage with an app.
Although several studies (31)(32)(33)(35) have verified the 'multiple-lives' usage pattern in PA app users, they focused on identifying the frequency and length of streaks and breaks. While informative, this information is insufficient to inform the design of future apps and engagement strategies. An essential next step is to understand the timing of when streaks and breaks occur and their correlates. To date, these aspects have not yet been examined. Data from the Carrot Rewards app (Carrot app) provides an opportunity to explore this research question. The Carrot app is one of the very few PA apps that was implemented and rigorously evaluated in a real-world setting (36)(37)(38)(39). It is a mobile app that allows users to complete health questionnaires and track steps in exchange for reward points. The Carrot app has demonstrated effectiveness in improving mean weekly daily step counts in a 12-month quasi-experimental study involving over 39,000 users. A positive relationship was observed between the duration of app use and PA outcomes. The intervention effects were more evident for participants who engaged with the app for at least 6 months (36).
In this study, we leveraged the rich usage data from the Carrot app to further investigate how users used the app in a real-world setting over 12 months using a "multiple-lives" perspective. The objectives of this study were two-fold. First, we examined the duration of use and the frequency, length, and timing of streaks and breaks within the Carrot app over 12 months. Second, we examined sociodemographic correlates of usage.
Methods
The Carrot Rewards app: "Steps" program

Information regarding the theoretical background, evolution and effectiveness of the Carrot app has been published previously (36)(37)(38). Briefly, the Carrot app was created by a private company with support from the Public Health Agency of Canada (39). It combined gamification elements (e.g., points, goals, challenges, collaboration and competition) and principles from behavioral economics to engage users and promote physical activity. Users tracked their daily steps and took quizzes on diet, fitness, and personal finance topics to earn loyalty reward points redeemable for consumer goods through programs such as Cineplex's Scene (i.e., movies/cinema), Aeroplan (i.e., air travel), and Petro-Points (i.e., gas/petrol). The app was made freely available to British Columbia (BC) and Newfoundland and Labrador (NL) residents on the Apple iTunes and Google Play app stores on 3 March and 13 June 2016, respectively. Once enrolled in the Steps program, users were instructed to carry their smartphones or wear their Fitbit devices during a two-week baseline period to assess habitual PA behavior and set an individualized daily step goal. After the baseline period, users could begin to earn daily incentives ($0.04 CAD) for reaching their step target. After 4 weeks of earning daily rewards, users could then enter a "Step Up Challenge" to earn a $0.40 CAD bonus for reaching their daily goal 10 or more non-consecutive times in 14 days. For users who completed a "Step Up Challenge," a new higher daily step goal was provided. For unsuccessful users, the previous goal persisted. Participants could earn a maximum of $25.00 CAD in reward points over 12 months. Carrot Rewards was discontinued in June 2019 due to a lack of funding (40).
Data extraction
We analyzed retrospective data collected from 41,207 Carrot "Steps" program users who enabled the "Steps" walking program (i.e., allowing the app to access their step data) from 13 June to 10 July 2016, and followed them for 12 months. App usage data were automatically recorded daily while using the Steps program. We aggregated daily steps into weekly mean daily steps for each study week. We chose to use average steps by week because our focus was not to examine whether or how streak/break behaviors varied between days but to examine usage patterns over a year. In our previous publications (36,37), we learned that it was difficult to identify patterns using daily step or usage data. Aggregating the daily steps into weeks allows us to consider usage behaviors in terms of fewer, but higher-level, meaningful patterns. Another reason is that the number of days of available app data ranged from 1 to 7 days each week. There were missing data within a week that would have to be interpolated, which would be similar to taking the average based on the number of days of available data in the week. When registering with the Steps program, users provided informed consent for using their data for research purposes. As part of the privacy policy, users were also informed that reporting of data collected in the app would only be done at the aggregate or de-identified level. The University of British Columbia Behavioral Research Ethics Board approved this study (H17-02814).
Measures

Usage
We defined usage as daily step count data being logged. Users could log their daily step count data in the Carrot app by synchronizing with the "built-in" smartphone accelerometer. This "sync" occurred each time users opened the app, and for each synchronization the app automatically retrieved daily step data from users' smartphones or other wearable devices for the past 14 days. In this study, we are interested in whether users used the app on a given day, which is similar to the concept of a non-wear day in wearable technology studies. Therefore, we categorized a week as a "non-use week" (weekly mean daily step counts = 0) or an "active week" (weekly mean daily step counts >0). Prior studies have used different cut-points to define use and non-use periods: Short et al. used 30, 60, and 90 days, whereas Meyers et al. (33) used ≥2 days. We operationalized a streak as a period in which the app recorded non-zero step counts for two or more consecutive weeks without any breaks, and a break as a period with zero steps for two or more consecutive weeks; a code sketch of this operationalization is given below. We used the two-week cut-point because it has been commonly used to determine non-usage attrition in eHealth interventions in the PA literature (30,41,42).
3. Time to first break is the number of weeks between the first study week and the first break.
4. Time to resume second streak is the number of weeks between the first break and the week participants re-engaged with the app with at least 2 weeks of non-zero step counts.
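A minimal Python sketch of the streak/break operationalization on a 52-week series of weekly mean daily steps (variable names are illustrative):

import numpy as np

def streaks_and_breaks(weekly_steps, min_len=2):
    # A streak is >= min_len consecutive active weeks (non-zero mean daily steps);
    # a break is >= min_len consecutive non-use weeks (zero steps).
    # Returns two lists of (start_week, length) tuples.
    active = np.asarray(weekly_steps) > 0
    runs = []
    run_start, run_val = 0, active[0]
    for i, val in enumerate(active[1:], start=1):
        if val != run_val:
            runs.append((run_start, i - run_start, run_val))
            run_start, run_val = i, val
    runs.append((run_start, len(active) - run_start, run_val))

    streaks, breaks = [], []
    for start, length, is_active in runs:
        if length >= min_len:
            (streaks if is_active else breaks).append((start, length))
    return streaks, breaks

# Example: 18 active weeks, a 10-week break, then 24 active weeks.
# weekly = [5200]*18 + [0]*10 + [6100]*24
# streaks_and_breaks(weekly) -> ([(0, 18), (28, 24)], [(18, 10)])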
We focused on the timing of the first break and of resuming the second streak because previous studies suggest that engagement during the early days or weeks of an intervention appears to predict longer-term adherence to the app (32, 35).
Sociodemographic correlates and baseline step count
When registering for the app, participants self-reported their age (years), gender (female, male or other/not specified) and province (BC or NL). We inferred participants' median personal income, the percentage of the population with post-secondary education, and the percentage of the population identified as visible minorities in their communities by linking user postal codes with census data (i.e., the 2011 National Household Survey) at the local health area level (89 in BC) and the regional health authority level (4 in NL).
Statistical analysis
Statistical analysis was performed using R 3.3.0 (Mavericks build 7202) and RStudio version 1.0.136. For descriptive statistics, we calculated total counts and percentages for categorical variables, and means and standard deviations for continuous variables. We plotted illustrative graphs of usage status (active or non-use) for each study week in several random subsamples to explore potential usage patterns. Based on visual assessment, we first placed users who used the app for all 52 weeks in one group and those who had never used the app in another group. We then ranked the remaining users by their number of active weeks and divided them evenly into four groups. The six usage groups were: committed users (52 weeks), frequent users (41-51 weeks), regular users (24-40 weeks), occasional users (11-23 weeks), limited users (1-10 weeks), and non-users (0 weeks); these cut-points are sketched in code below.
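A simple helper implementing these cut-points might look as follows (function name is illustrative):

def usage_group(active_weeks):
    # Six usage groups defined by the number of active weeks over 52 weeks.
    if active_weeks == 0:
        return "non-user"
    if active_weeks <= 10:
        return "limited"
    if active_weeks <= 23:
        return "occasional"
    if active_weeks <= 40:
        return "regular"
    if active_weeks <= 51:
        return "frequent"
    return "committed"   # 52 weeks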
We plotted Kaplan-Meier (KM) curves to illustrate the time to the first break and the time to resume the second streak for the total sample and by gender and province. We plotted the KM curves by gender as it was a significant predictor of health app usage in previous studies (43)(44)(45) and by province because the two provinces are different in terms of geographic locations, population, weather, and the sociodemographic variables measured in this study.
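A sketch of the KM step using the lifelines package is shown below; the column names and the stratification by gender are illustrative assumptions about how the analysis dataset could be organized.

from lifelines import KaplanMeierFitter

def km_time_to_first_break(df):
    # df is assumed to contain `weeks_to_first_break` (duration), `had_break`
    # (1 = break observed, 0 = censored at end of follow-up) and `gender`.
    kmf = KaplanMeierFitter()
    for gender, grp in df.groupby("gender"):
        kmf.fit(grp["weeks_to_first_break"],
                event_observed=grp["had_break"], label=str(gender))
        print(gender, "median weeks to first break:", kmf.median_survival_time_)
    return kmf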
Correlates of duration of use
We fitted two linear regression models that treated duration of use and number of streaks as continuous outcomes. Both models included sociodemographic variables and mean baseline daily step count as predictors. The estimated effects, 95% confidence intervals and Chi-square significance tests were determined for each predictor variable. Given that the sample size for the fitted models was large, the magnitudes of the coefficients (i.e., how far they deviated from the null values: 0 for regression estimates and 1 for hazard ratios) and the range of the confidence intervals were considered when determining practical relevance.
Correlates of time to first break and time to resume second streak
We fitted two Cox Proportional Hazard regression models. The first analysis modeled the risk of having the first break at a given week. The time variable was the number of weeks until the first break. The event variable was coded 1 if the first break occurred or 0 if it did not occur. The second analysis modeled the probability of resuming the second streak at any given week after the first break. The time variable was the number of weeks from the first break until the participants resumed usage with a non-zero step count for at least two consecutive weeks. The event variable was coded 1 if resumption occurred or 0 if not. In both analyses, we included the demographic variables and baseline steps as predictors. The hazard ratios [HR], 95% confidence intervals and Chi-square significance tests were determined for each predictor variable. Given the sample size was large, the magnitude of the hazard ratios was considered for determining practical relevance. Due to multiple statistical tests, we did a conservative Bonferroni adjustment to the significance level of 0.05. We set the significance level at p < 0.001.
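A corresponding sketch of the first Cox model, again using lifelines with illustrative column names:

from lifelines import CoxPHFitter

def cox_time_to_first_break(df):
    # df is assumed to contain `weeks_to_first_break` (duration), `had_break`
    # (1 = first break occurred, 0 = censored) and the covariates described in
    # the text (age, gender, province, baseline steps, census-derived proxies).
    cph = CoxPHFitter()
    cph.fit(df, duration_col="weeks_to_first_break", event_col="had_break")
    cph.print_summary()          # hazard ratios (exp(coef)) with 95% CIs
    return cph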
Duration of use
The final analytical sample included 41,207 users with a mean age of 35.2 ± 11.7 years. Among the users, 67% identified as female and 75% lived in BC; at the community level, the mean median personal annual income was $29,503 ± $3,997, 55% of the population had post-secondary education, and 26% identified as a visible minority. The mean baseline daily step count was 5,537 ± 2,691. Participants used the app for an average of 32.59 ± 18.435 weeks. We illustrate participants' usage in Figure 1: 29% were committed users who used the app for all 52 weeks, 17% were frequent users (41-51 weeks), 18% regular users (24-40 weeks), 18% occasional users (11-23 weeks), 16% limited users (1-10 weeks), and 2% non-users (0 weeks).
Frequency and length of streaks
The frequency and length of streaks are presented in Table 1. Approximately 98% (40,447/41,207) of the users experienced at least 1 streak. Of those, 60% of the participants had 1 streak and 40% had ≥2 streaks. Participants who had fewer streaks used the app for a longer time. The frequency of streaks ranged from 1 to 9 times (mean = 1.61 ± 1.04 times). The mean streak length was 20.22 ± 18.26 weeks, with a range between 2 and 34 weeks. The mean streak length for each subsequent streak ranged between 10 and 33 weeks for users who had 1 to 3 streaks; between 5 and 10 weeks among users who had 4 to 9 streaks.
Time to the first break
The frequency of breaks ranged from 1 to 9 times (mean = 1.61 ± 1.04 times). The mean break length was 16.14 ± 15.74 weeks, with a range between 2 and 35 weeks (Table 1). Of users who experienced a first break, the estimated median time to the first break (i.e., the time after which 50% of users had experienced their first break) was 18 weeks for the total sample, with no observable difference between gender groups or provinces (18 weeks for females, 17 weeks for males, and 18 weeks for the other/not specified gender group; Figure 2). As participants began using the app in June/July, their first breaks occurred around November/December (winter months). The average daily temperature was 5.7/−3.5°C for BC and 2.8/−6.0°C for NL (46).
Time to resume the second streak
Of users who experienced the first break, 58% (16,506/28,271) resumed the second streak, and 42% did not return. The median time for participants to return for the second streak was 15 weeks for the total sample, 15 weeks for females, 12 weeks for males and 23 weeks for participants in the other/not specified gender group. The median time to resume the second streak was 11 weeks for BC participants and 32 weeks for NL participants (Figure 3). As participants took their first breaks around November/December, the median time for resuming the second streak fell in February for BC and August for NL. The difference in the timing to resume the second streak between the two provinces appeared to depend on weather. For BC, the coldest months are between December (−6°C) and February (−1.2°C). For NL, the coldest months typically begin in December (−6°C) and last until early May (−0.5°C), with late-lying snow patches persisting until July/August in some areas (46).
Correlates of the duration of use
Age, the percentage of the population with post-secondary education, and the percentage of the population identifying as a visible minority in the community were significant correlates of the duration of use. Gender, baseline steps and province had a significant but minimal effect on the average duration of app use. Median personal income was not significantly associated with the duration of use (Figure 4). Duration of use increased by 1.2 weeks (95% CI: 1.03, 1.34) for every 10-year increase in age, 13.6 weeks (95% CI: 9.74, 17.49) for any 10% increase in the percentage of the population with post-secondary education in the community, and 2.5 weeks (95% CI: 1.18, 3.91) for any 10% increase in the percentage of the population identifying as a visible minority in the community (Table 2).
Correlates of the frequency of streaks
Baseline steps, gender, median personal income, and province had a significant but minimal effect on the number of streaks. Age, the percentage of the population with post-secondary education in the community, and the percentage of the population identifying as a visible minority in the community were not significantly associated with the number of streaks (Figure 5). Users who were female (vs. male), were NL residents, had a higher median personal income, or had higher baseline steps had 0.02-0.12 fewer streaks (Table 2).
Correlates of having the first break
Age, gender, and the percentage of population with post-secondary education in the community were significant correlates of having the first break. Baseline steps and the percentage of population identifying as a visible minority in the community had a significant but minimal effect on the risk of having the first break. Median personal income and provinces were not significantly associated with the risk of having the first break ( Figure 6). Compared to females, males had an average 12% higher risk of having the first break (95% CI: 1.09-1.15). The risk of having the first break reduced by 8% (95% CI: 0.91, 0.93) for every 10-year increase in age, 5% (95% CI: 0.92, 0.97) for any 10% increase in the percentage of population with post-secondary education in the community ( Table 3).
Correlates of resuming the second streak
All the sociodemographic factors were significant correlates of likelihood of returning for the second streak, except for baseline steps and the percentage of population identifying as a visible minority in the community (Figure 7). Compared to females, males were 6% more likely to resume the second streak (95% CI: 1.03-1.10); participants in the other/not specified gender category were 17% less likely to return (95% CI: 0.71, 0.96). NL participants were 23% less likely to return to the app (95% CI 0.73-0.80) compared to BC participants. The likelihood of returning for the second streak increased by 5% (95% CI 1.03-1.06) for a 10-year increase in age, 5% (95% CI 1.02-1.09) for a 10% increase in percentage of population with post-secondary education in the community, and reduced by 13% (95% CI 0.80-0.93) for any $10,000 increase in median personal income (Table 3).
Discussion
While sustaining the usage of app-based PA interventions is challenging, this study provides empirical evidence demonstrating that long-term usage is feasible. Over 60% of the participants used the Carrot app for more than 6 months, and 29% used it for 12 months. Our findings compare favorably with the 10,000 Steps Australia program, in which 0.09% (21/22,142) of users used the app for 12 months (30) and 50% of users used the app for <10 weeks (28,30), and with Singapore's National Step Challenge, in which 9% (12,310/139,885) of participants used the app for 6 months and 7% for 12 months (35).

We attribute these favorable findings to a synergy of several features of the Carrot Rewards Steps program. First, the Carrot app used gamified features including points, goals, challenges, a peer leaderboard, teamwork and competition (40). Systematic reviews and a recent randomized controlled trial have found that gamified features enhance health app usage (29, 47-49). Second, immediate rewards leverage a behavioral economics principle called "present bias", which states that people tend to place more worth on, and be more satisfied with, immediate rewards than delayed ones. Third, the app required little cognitive effort to use: once registered, steps were seamlessly tracked and rewarded using the built-in accelerometer so long as the data was synced at least once every 2 weeks. Based on technology adoption theories (50,51) and empirical studies (52), both the instant rewards and the ease of use were likely drivers of users' intention to use the Carrot app, which in turn supported sustained use.

Our findings support the "multiple-lives" usage pattern that emerges from wearable technology usage research (31,33,35). We observed two types of usage patterns: 60% of the participants had a "single lifetime" pattern (one streak) and 38% had a "multiple-lives" pattern (two or more streaks). A multiple-lives usage pattern means that users used, disengaged from, and re-engaged with the app multiple times; in our study, this cycle repeated between 2 and 9 times. This finding has practical implications for designing and evaluating app-based PA interventions in the real world. Future studies should leverage the capability of mobile apps to collect real-time usage data and use such data to conduct streak-and-break analyses, identify critical time points (i.e., weeks before the first break, or during the first break) and develop a bundle of engagement strategies to regain users' attention before they abandon the app (e.g., peer support, game-like and flashy features, feature upgrades, bonus incentives, new/fresh app aesthetics) (53-56).
Among participants who had a "multiple-lives" pattern, their streak-and-break behaviors appeared to be affected by seasonality. Participants began using the app in June/July, had the first break around 18 weeks (November/December) and returned for the second streak after 23 weeks (March/April) for BC participants or 32 weeks (July/August) for NL participants.
This finding suggests that researchers and designers of app-based PA interventions should consider seasonal variations in the analysis and modeling of app usage behaviors and in the development and introduction of engagement strategies. In addition to seasonality, other factors may have influenced participants' streak-and-break decisions, such as loss of interest, life interruptions (e.g., pregnancy, a new job situation) or no longer needing the app because of successful habit formation (12,57). Attig et al. (57) found that users who stopped using an app due to loss of interest or successful habit formation were less likely to resume usage. Future studies should incorporate periodic user surveys and ecological momentary assessments to explore the reasons for abandoning, taking breaks and re-engaging with the app, and the contexts in which users make these decisions (22).
Our results revealed a significant gender difference in app usage. Compared to males, females were more likely to use the app for a longer duration and had a lower risk of having the first break. However, females were less likely to return after they took a break. These findings suggest that females are more likely to sustain their initial app adoption and abandonment decisions. Our results are supported by prior research identifying gender differences in health-related internet and app usage (11,43,58). Women may be more likely to adopt an app because of its concept, while men may be more focused on the functionality of apps (43). These preferences may then influence the sustainability of app usage. While the reasons for the gender differences remain speculative, the findings underline a need to take gender into account when developing health apps to ensure they meet the needs and preferences of individuals (58). Evidence on the socioeconomic correlates of app usage was mixed. Our results demonstrated that being older and living in a community with a higher percentage of the population with post-secondary education (a proxy indicator of socioeconomic status) were associated with increased usage. Yang and Koenigstorfer (50) synthesized findings from 24 studies and found that usage increased with age and income levels. Pontin et al. (44) examined sociodemographic determinants of the usage of a commercial incentive-based physical activity app (Bounts) in over 30,000 users over a year and found that usage was higher in older individuals and in users who lived in areas with low socioeconomic status. Carroll et al. (45) found that younger users and those with higher education levels and income use health apps more. The differences in findings may be due to heterogeneity in app characteristics, definitions of usage and measures of the sociodemographic variables, and other confounding factors, such as intention to change physical activity behavior and health status (45).
Strengths and limitations
Strengths of this study include the large sample size from two Canadian provinces, the long follow-up period, and the inclusion of an objective measurement of app usage collected in a real-world setting. Our streak-and-break analysis is innovative and offers insightful perspectives on the everyday use of a commercial PA app. Our findings on a multiple-lives usage pattern extend the PA app literature by providing an example of modeling streak-and-break behaviors and identifying re-engagement time points and corresponding strategies to increase re-engagement.
The findings presented here should be interpreted in the context of the study's limitations. We determined usage patterns based on aggregated weekly data, which could result in very different usage patterns compared to studies using daily data (33,35). We have relied on objective usage metrics from the study's database. This study only focused on duration and frequency of use and did not measure intensity and type of use. Lin et al. (31) found that individuals could engage in different kinds of app features and at different intensities in each lifetime. Metrics on these usage aspects could facilitate our understanding of the psychological mechanisms underlying the streak and break behaviors (59). In this secondary data analysis, our analyses were limited to data collected in the original study design. Therefore, we cannot consider other psycho-social correlates that may explain the variations in usage among individuals, such as self-efficacy or habit strength.
Conclusion
We demonstrated a real-world example of how individuals used a commercial PA app over 12 months. It is common for users to take breaks and re-engage with an app. Interventionists need to adopt a 'multiple-lives' perspective when designing and evaluating app-based PA interventions. We also need more real-world studies with long-term follow-up to facilitate our understanding of individuals' usage behaviors and inform the development of engagement features of future apps. Gender and age may be significant correlates of long-term usage of app-based PA interventions.
Data availability statement
The data analyzed in this study is subject to the following licenses/restrictions: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Requests to access these datasets should be directed to erica.lau@ubc.ca.
Ethics statement
The studies involving human participants were reviewed and approved by University of British Columbia. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
MM collected and prepared the data. EL and GF performed data analysis. EL wrote the first draft of the manuscript. All authors commented on previous versions of the manuscript, contributed to the study conception, design, read, approved the submitted manuscript, and have agreed to be personally accountable for their contribution.
Conflict of interest
Author MM received consulting fees from Carrot Insights Inc. from 2015 to 2018 as well as travel re-imbursement in January and March 2019. He had stock options in the company as well, but these are now void since Carrot Insights Inc. went out of business in June 2019.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2022,
"sha1": "01f7e8d8b19487d25e54cc090ee54e15827e92fd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "01f7e8d8b19487d25e54cc090ee54e15827e92fd",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Dawn of a New Era in Atopic Dermatitis Treatment
Atopic dermatitis (AD) is one of the most common chronic inflammatory skin diseases, and the condition is typified by barrier dysfunction and immune dysregulation. Recent studies have characterized various phenotypes and endotypes of AD and elucidated the mechanism. Numerous topical and systemic narrow targeting therapies for AD have been developed according to these findings. Topical medications, including Janus kinase (JAK) inhibitors, phosphodiesterase 4 inhibitors, and the aryl hydrocarbon receptor agonist tapinarof, are effective and safe for AD compared to topical corticosteroids. Oral JAK inhibitors and monoclonal antibodies targeting interleukin (IL)-4, IL-13, IL-31, IL-33, OX40, thymic stromal lymphopoietin, and sphingosine 1-phosphate signaling have displayed outstanding efficacy against moderate-to-severe AD. We are currently in a new era of AD treatment.
Introduction
Atopic dermatitis (AD) is a chronic, relapsing, inflammatory skin disease characterized by persistent pruritus with barrier dysfunction, microbial dysbiosis, and immune dysregulation [1]. The estimated prevalence of AD is 15-20% in children and 6-10% in adults, among whom 40% are classified as having moderate-to-severe disease [2][3][4][5]. In recent decades, patients have been treated with topical corticosteroids/calcineurin inhibitors, phototherapy, and systemic immunosuppressants. However, many patients require frequent laboratory monitoring during systemic immunosuppressant therapy, and they are undertreated because of concerns regarding adverse effects [6]. Patients with moderate-to-severe AD harbor systemic inflammation/immune abnormalities such as strong Th2 activation, expansion of T cell subsets, and increased levels of pro-inflammatory cytokines, including interleukin (IL)-4, IL-13, and IL-31 [7][8][9][10][11][12][13][14]. Therefore, new agents have been developed to target these cytokines, and they have displayed outstanding efficacy for patients with moderate-to-severe AD. Interestingly, topical phosphodiesterase (PDE) 4 inhibitors and aryl hydrocarbon receptor (AhR) agonists are also effective for AD skin lesions in terms of the restoration of skin barrier function and the regulation of inflammatory cytokine production [15][16][17]. This review discusses the molecular mechanisms and therapeutic targets involved in the pathogenesis of AD.
Emerging Systemic/Topical Agents
In the past few years, numerous systemic (Table 1) and topical ( Table 2) emerging agents have been developed for the treatment of patients with AD. Narrow targeting agents for AD have been developed based on its pathogenesis ( Figure 1). Accumulating evidence indicates that AD features multiple abnormalities in terms of epidermal barrier dysfunction, immunologic dysregulation, and microbial dysbiosis (e.g., increased abundance of Staphylococcus aureus and loss of commensal bacterial species) [1]. AD is considered a disease of Th2 predominance, and blockade of Th2 signaling is highly effective in treatment [37]. Dupilumab, an emerging narrow targeting agent that blocks both IL-4 and IL-13 signaling, has exhibited significant clinical benefits in patients with AD [18,38]. Skin IL-13 expression is correlated with disease severity in patients with AD [39,40]. Furthermore, recent studies have illustrated that the IL-13-specific antagonists tralokinumab and lebrikizumab have similar effects as dupilumab [19,20]. These data indicate that IL-13 acts as a critical cytokine in moderate-to-severe AD [41]. Traditionally, CD4 + helper T cells have been implicated as the source of Th2 cytokines. However, group 2 innate lymphoid cells (ILC2s) recently emerged as important contributors to AD through their production of IL-5 and IL-13 [42]. ILC2s, which belong to the larger ILC family, also include group 1 and group 3 ILCs [43]. At the cell surface, ILC2s express receptors for the cytokines IL-25, IL-33, thymic stromal lymphopoietin (TSLP), IL-2, IL-9, and IL-7 [44][45][46][47]. IL-33, an alarmin belonging to the IL-1 family, is mainly produced by keratinocytes in skin after cell death or in response to various stimuli, such as antigen challenges and scratches [48]. Human ILC2s in steady-state skin respond to IL-33 and IL-25 but not to TSLP [49]. Etokimab, a human monoclonal IgG1 antibody that neutralizes the activity of IL-33, proved efficacious for AD in a phase 2a trial [21,22]. TSLP is highly expressed in the skin of patients with AD, similar to IL-33, and it activates human myeloid dendritic cells to induce an inflammatory Th2 response [50,51]. However, tezepelumab, the monoclonal antibody targeting TSLP, did not provide significant improvements in patients with moderate-to-severe AD compared to the effects of placebo in a phase 2a trial [23]. These results indicate that IL-33 might contribute to AD aggravation by being more closely associated with ILC-mediated IL-13 production than TSLP. Conversely, the ligand for OX40 (OX40L, also known as CD134L and CD252) is primarily induced by TSLP [50,51]. OX40L is mainly expressed on antigen-presenting cells, such as activated B cells, dendritic cells, monocytes, and Langerhans cells [52][53][54][55]. OX40 (CD134), the receptor for OX40L, transiently expresses after antigen recognition [56]. It is predominantly expressed on activated/memory CD4 + T cells and Tregs, whereas it displays lower expression on CD8 + T, NK, and NKT cells [56]. The OX40-OX40L interaction is crucial for Th2 responses generating memory T cells by promoting the survival of effector T cells after antigen priming [57][58][59][60][61]. The OX40L-OX40 axis is a novel therapeutic target in autoimmune and inflammatory diseases, as it directly targets antigen-specific T cells responsible for clinical phenotypes without causing widespread immunosuppression [52,56]. 
A recent phase 2a clinical trial demonstrated that GBR 830, a humanized monoclonal antibody against OX40 that inhibits OX40-OX40L binding, induced significant progressive tissue and clinical changes in patients with moderate-to-severe AD [24].
Targeting Th17-Associated Cytokine IL-17
Psoriasis, along with AD, is one of the most common inflammatory skin diseases. While AD has a strong Th2 component associated with IL-4 and IL-13 over-production, psoriasis is largely driven by Th17 T cells and associated IL-17 activation [79]. IL-17 expression is also enhanced in acute lesions in AD skin compared to uninvolved skin [80], and a correlation between the number of Th17 cells in peripheral blood and acute AD severity has been reported [81]. However, secukinumab, the monoclonal antibody targeting IL-17, did not provide significant improvements in patients with moderate-to-severe AD compared to the effects of placebo in a phase 2 trial [26].
Targeting Immunomodulatory Effects and Sphingosine 1-Phosphate (S1P) Receptors (S1PRs)
S1P, a bioactive lipid mediator, regulates various cell activities, including cell growth, differentiation, apoptosis, migration, inflammation, metabolism, and angiogenesis [82][83][84]. S1P is secreted by red blood cells, endothelial cells, and platelets into the extracellular environment, and it contributes to several cardiovascular, autoimmune, inflammatory, neurological, oncologic, and fibrotic diseases [85]. In patients with AD, it has been reported that serum S1P levels are elevated and associated with severity [86]. Five subtypes of S1PRs (S1PR1-5) have been identified as seven-membrane-spanning proteins, a characteristic feature of G protein-coupled receptors. S1PR1, S1PR2, and S1PR3 are widely expressed in various tissues, including the brain, lungs, spleen, heart, and kidneys [87]. Unlike S1PR1-3, S1PR4 is expressed in the lungs and lymphoid tissues, and S1PR5 is expressed in the brain and skin [87]. Igawa et al. reported that the expression of S1PR1 and S1PR2 is increased in impetigo, a common bacterial skin infection mostly caused by Staphylococcus aureus [88]. S1PRs are considered therapeutic targets for patients with AD because agents targeting S1PRs have displayed immunomodulatory effects [89]. In addition, a study using mice reported that S1PR3-TRPA1 signaling contributes to the onset of itch in sensory nerves [90]. Currently, the safety and efficacy of systemic treatment with etrasimod, which targets S1PR1, S1PR4, and S1PR5, have been demonstrated in patients with moderate-to-severe AD in a phase 2 clinical trial (NCT04162769), opening the door for this compound to enter phase 3 development.
Janus Kinase (JAK) Inhibitors
IL-4, IL-13, IL-31, and TSLP require downstream JAK-signal transducer and activator of transcription (STAT) signaling [91]. The involvement of all four JAK family members (JAK1-3 and TYK2) has been observed in AD, mediating downstream inflammation [92,93]. Phosphorylation of JAK following the binding of a cytokine to its cognate receptor induces the phosphorylation and dimerization of STAT proteins [94]. These STAT proteins regulate target genes after translocating to the nucleus [94,95]. JAK inhibitors inhibit the activity of one or more JAKs, thereby interfering with the JAK-STAT signaling pathway (Figure 2). IL-4 and IL-13 induce JAK1 and JAK3, which activate STAT6 [96]. TSLP and IL-31 induce JAK1 and JAK2 expression, which activates STAT5 [91]. The oral JAK inhibitors baricitinib (JAK1/2), abrocitinib (JAK1-selective), and upadacitinib (JAK1-selective) have been approved for the treatment of AD. All three met primary and secondary endpoints across numerous trials in moderate-to-severe AD [94]. Of patients receiving baricitinib at doses of 1, 2, and 4 mg, EASI-75 response rates were significantly higher with the 2 and 4 mg dosages (17% and 21%) than with placebo (6%) at week 16 in a phase 3 trial (BREEZE-AD2) [27]. Of patients receiving abrocitinib at doses of 100 and 200 mg, EASI-75 response rates were significantly higher with both dosages (45% and 61%) than with placebo (10%) at week 12 in a phase 3 trial (JADE-MONO2) [28]. Of patients receiving upadacitinib at doses of 15 and 30 mg, EASI-75 response rates were significantly higher with both dosages (60% and 73%) than with placebo (13%) at week 16 in a phase 3 trial [29]. These results highlight the importance of Th2 signaling in the pathogenesis of AD. In addition, topical JAK inhibitors such as ruxolitinib (JAK1/2) and delgocitinib (a JAK1/2/3 and Tyk2 inhibitor, i.e., pan-JAK) have also been approved. Ruxolitinib, a first-generation small-molecule inhibitor approved by the FDA, was well tolerated and associated with a low frequency of treatment-emergent adverse events in patients with mild-to-moderate AD [31,32]. Delgocitinib, the world's first approved topical JAK inhibitor, has been studied in Japan, where it was approved for treating AD in adults and children based on long-term efficacy and safety data [33][34][35].
PDE4 Inhibitors
PDE4 is a key regulator of inflammatory cytokine production in AD through the degradation of cyclic adenosine monophosphate [97,98]. PDE4 inhibitors increase the levels of cyclic adenosine monophosphate in patients with AD and thereby reduce the expression of pro-inflammatory cytokines [99]. The systemic PDE4 inhibitor apremilast did not meet its primary endpoint for patients with moderate-to-severe AD in a double-blind, placebo-controlled PoC trial (NCT02087943) [30]. Conversely, the topical agents crisaborole and difamilast were approved for treating AD in adults and children based on long-term efficacy and safety data in phase 3 trials [15,16,36].
AhR-Modulating Agent
Tapinarof (GSK2894512, previously WBI-1001) is a naturally derived small molecule produced by bacterial symbionts of entomopathogenic nematodes [100]. It directly binds AhR and activates signaling in multiple cell types, including CD4 + T cells and keratinocytes [101].
The ligation of tapinarof and AhR improves the expression of skin barrier genes, regulates the expression of Th2 cytokines, and protects against inflammation-associated oxidative damage [101]. A phase 2b trial revealed that topical tapinarof improved both Eczema Area and Severity Index and itch numerical rating scale scores in patients with moderate-to-severe AD, with largely mild adverse events [17].
Conclusions
Emerging topical and systemic targeted agents have been developed on the basis of expanding knowledge of the pathogenesis of AD. These specific cytokine/receptor-targeted agents have displayed safety and efficacy. Moreover, upcoming trials will provide additional therapeutic options for patients with AD. These new therapies also raise problems, such as the long-term socioeconomic burden associated with monoclonal antibody treatments. Thus, we need to choose more appropriate treatments, including combinations of existing therapies. We are currently at the dawn of a new era in the treatment of AD. | 2022-10-21T15:15:12.886Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "b1effb3839a42c6b26117c041320e5c2b9b38021",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/20/6145/pdf?version=1666102562",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f928d04c1566cec0867ffc3f57dceb3a65ef68f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257264827 | pes2o/s2orc | v3-fos-license | Robot-assisted radical cystectomy: Where we are in 2023
Open radical cystectomy (ORC) is associated with high rates of perioperative morbidity and mortality, owing to its extensive surgical nature and the high frequency of multiple co-morbidities among patients. As an alternative, robot-assisted radical cystectomy (RARC) has been increasingly adopted worldwide, being a reliable treatment option that utilizes minimally invasive surgery. Seventeen years have passed since the advent of the RARC, and comprehensive long-term follow-up data are now becoming available. The present review focuses on the current knowledge of RARC in 2023, and analyzes various aspects, including oncological outcomes, peri/post-operative complications, post-operative quality of life (QoL) change, and cost-effectiveness. Oncologically, RARC showed comparable oncological outcomes to ORC. With regard to complications, RARC was associated with lower estimated blood loss, lower intraoperative transfusion rates, shorter length of stay, lower risk of Clavien–Dindo grade III–V complications, and lower 90-day rehospitalization rates than ORC. In particular, RARC with intracorporeal urinary diversion (ICUD) performed by high-volume centers significantly reduced the risk of post-operative major complications. In terms of post-operative QoL, RARC with extracorporeal urinary diversion (ECUD) showed comparable results to ORC, while RARC with ICUD was superior in some respects. As the RARC implementation rate increases and the learning curve is overcome, more prospective studies and randomized controlled trials with large-scale patients are expected to be conducted in the future. Accordingly, sub-group analysis in various groups such as ECUD, ICUD, continent and non-continent urinary diversion, etc. is considered to be possible.
INTRODUCTION
Bladder cancer is a serious health risk, both functionally and oncologically, and is a continuously increasing socioeconomic burden [1,2]. It is now the sixth most common type of cancer in the US [3] and the tenth most common cancer in the world, with its incidence steadily rising worldwide each year [4]. Regarding surgical treatments, radical cystectomy (RC) with pelvic lymph node dissection is considered the gold standard for muscle-invasive bladder cancer and selected patients with high-risk non-muscle invasive bladder cancer [5,6]. However, owing to its unavoidably extensive surgical nature and the high frequency of multiple co-morbidities among patients, it is associated with high rates of perioperative morbidity and mortality [7].
Minimally invasive strategies have gained popularity in various fields because of their potential to reduce surgical morbidity and shorten hospital length of stay (LOS). In particular, robot-assisted laparoscopic RC (RARC), since its introduction in 2003, has gradually been adopted as a surgical option with the goal of improving perioperative outcomes and survival [8]. From 2004 to 2012, the proportion of RCs performed as RARC increased 30-fold, from 0.6% to 18.5% [9,10]. In the early days of RARC, extracorporeal urinary diversion (ECUD) was most commonly implemented; however, most RARCs are now performed using the intracorporeal urinary diversion (ICUD) method in high-volume centers [11].
In analyzing the outcomes of previous studies, one important issue to consider is the shallow learning curve for RARC, arising from its complex surgical nature. Without careful examination, confusing and counterintuitive conclusions could be drawn. For instance, in terms of complication rates, Clavien-Dindo grade 3-5 (major) complications in RARC with ICUD decreased significantly, from 25% in 2005 to 6% in 2015, as the learning curve was gradually overcome [12].
In this review, we discuss the current knowledge of RARC as of 2023. We compared not only the conventional robotic and open approaches but also the ECUD and ICUD diversion methods within the robotic approach. Furthermore, by organizing oncological outcomes, peri/post-operative complications, post-operative quality of life (QoL) changes, and conducting a cost-effectiveness analysis of RARC, we assessed its advantages and disadvantages in each aspect compared to open RC (ORC).
Robotic approach vs. laparoscopic approach vs. open approach
Sathianathen et al. [13] performed a systematic review and meta-analysis including five randomized controlled trials (RCTs) (one multicenter and four single-center), and compared RARC with ORC [14][15][16][17][18][19]; they concluded that surgical technique does not have a considerable impact on oncological outcomes. More recently, the randomized open vs. robotic cystectomy (RAZOR) trial showed comparable recurrence-free survival (RFS), progression-free survival, and overall survival (OS) rates for up to 3 years [20]. Data from the International Robotic Cystectomy Consortium (IRCC) suggested that oncologic outcomes are comparable for up to 10 years after RARC [21]. Ip et al. [22] also compared the 10-year oncological outcomes of ORC and RARC. The results showed no difference between RARC and ORC patients with respect to OS and RFS, despite the fact that RARC patients were older and had more co-morbidities.
Feng et al. [25] conducted a systematic review comparing robot-assisted and laparoscopic RCs (LRCs), including 10 studies (two RCTs, four prospective studies, and four retrospective studies). They demonstrated that the relative risk of positive surgical margins was not significantly different between the RARC group and LRC group. There was a significantly higher lymph node yield and longer OS (HR, 0.26; 95% CI, 0.17-0.37; p<0.00001) in the RARC group than in the LRC group.
Meanwhile, Elsayed et al. [24] investigated the rates and patterns of recurrence after RARC. Results showed that RARC was not associated with different patterns or higher relapse rates compared to historic ORC data. According to Zennami et al. [26], this trend was also observed in locally advanced (≥cT3) disease.
Robotic ECUD vs. ICUD
So far, no RCTs have been conducted on this topic. Katayama et al. [27] performed a systematic review and meta-analysis comparing ECUD and ICUD RARC. Twelve studies including a total of 3,067 patients were analyzed. With regard to oncological outcomes, patients receiving an ICUD had a significantly higher lymph node (LN) yield than those who received an ECUD (mean difference [MD], 3.68; 95% CI, 0.80-6.56; p=0.01), while the rates of positive surgical margins (PSM) and positive LNs were not significantly different between ICUD and ECUD. Cai et al. [28] also performed a pooled analysis of 13 retrospective studies that included a total of 4,755 patients. The average follow-up time was 21.3 months in the ICUD group and 23.3 months in the ECUD group, respectively. In the three studies that assessed recurrence rates in 2,613 patients, the ICUD group showed a lower recurrence rate than the ECUD group (OR, 0.74; 95% CI, 0.61-0.91; p=0.004). In the two studies that assessed the mortality rates in 2,251 patients, no significant difference was observed between the two groups (OR, 1.00; 95% CI, 0.79-1.26; p=0.98). Additionally, Ham et al. [29] recently reported the results of a multicenter study involving 11 centers. They showed that although the overall recurrence (36.5% vs. 25.5%, p=0.013) and pelvic recurrence (12.1% vs. 5.9%, p=0.031) rates were higher in the ECUD group, there was no significant difference in the 5-year RFS (43.2% vs. 58.4%, p=0.516), cancer-specific survival (79.3% vs. 89.7%, p=0.392) and OS (74.3% vs. 81.4%, p=0.411) between the ICUD and ECUD groups. This is supported by the two-institution prospective study by Bertolo et al. [30], which found comparable RFS (log-rank p=0.08) and metastasis-free survival (log-rank p=0.02) between the two groups at a mean follow-up of 18 months.
PERI-AND POST-OPERATIVE OUTCOMES
Metabolic, infectious, genitourinary, and gastrointestinal complications were identified as the primary causes of readmission after RARC in 39.5%, 23.5%, 22.3%, and 17% of patients, respectively [31]. Fifty percent of readmissions occurred in the first two weeks after hospital discharge. Male sex (OR, 3.5; p=0.02) and in-hospital infections (OR, 4.35; p=0.002) were independent predictors for multiple readmissions [31]. The shallow learning curve of RARC (10 to 75 cases) is one of the important issues to consider because it may affect various peri-and post-operative outcomes, including LOS, complication rates, etc [32].
Robotic ECUD vs. ICUD
Randomized data comparing the outcomes of ICUD vs. ECUD are lacking. RARC with ICUD, as a completely minimally invasive procedure, may provide benefits in terms of smaller incisions, reduced pain, accelerated bowel recovery, and reduced risk of fluid imbalance [37,38]. The use of ICUD has increased over the past decade, especially in high-volume institutions, showing improved perioperative outcomes over time [12]. A large cohort study from the IRCC compared ICUD and ECUD after RARC, and showed that ICUD was associated with a shorter operative time and less blood loss [39]. However, ICUD was associated with more overall (but not high-grade) complications. Nevertheless, the complication rates significantly decreased over time [12,39]. There is one prospective study by Bertolo et al. [30] comparing the two robotic approaches, performed by surgeons at two institutions; they found no differences in post-operative complications, either overall or major, between the two approaches. Another report found that ICUD patients had a lower risk of complications than ECUD patients after accounting for the age-adjusted Charlson Comorbidity Index [40]. This may be due to reduced surgical stress, including less blood loss, lower transfusion rates, or avoidance of excessive bowel manipulation, and less time exposed to external air. The use of ICUD neobladders increased significantly over time. Patients who underwent RARC with ICUD neobladders had shorter hospital stays and fewer 30-day reoperations, but were readmitted more frequently than those who received ECUD neobladders [41]. Teoh et al. [42] reported the results of a multicenter Asian RARC registry comprising 9 centers. RARC with ICUD was safe and technically feasible, with post-operative complication rates similar to those of ECUD and the additional benefits of reduced blood loss and shorter hospitalization. A systematic review and meta-analysis by Tanneru et al. [43] reported that the overall complication rates at 30 and 90 days were comparable between ICUD and ECUD. More experienced centers and those with higher volumes had decreased operative times for ICUD compared to ECUD. According to Katayama et al. [27], complications of RARC with ICUD in the short- and mid-term periods were equivalent to those of ECUD, with a trend toward faster bowel recovery. This study also showed that ICUD performed at high-volume centers significantly reduced the risk of post-operative major complications. Faster recovery, as evidenced by time to flatus passage, time to oral intake, and length of hospital stay, was also observed in a Korean multicenter study by Shim et al. [44]. In terms of functional outcomes, Khan et al. [45] compared RARC with ICUD versus ECUD using the Studer neobladder. There was no significant difference between the groups with regard to urodynamic parameters, although continence was attained slightly earlier in the ICUD group.
HEALTH-RELATED QOL AND FUNCTIONAL CAPACITY
Patients with bladder cancer are usually elderly, have lower functional capacity, and have multiple co-morbidities [46]. Furthermore, RC is one of the most common operations performed in urology. Thus, recovery of QoL after RC is a critical issue in the field of urology. RARC undoubtedly offers the benefits of less morbidity, shorter hospital stays, faster recovery, and fewer narcotic analgesic requirements, which all contribute to increasing the patient's QoL [47]. Health-related QoL (HRQoL) improved and returned to baseline within 6 months after RARC with ICUD, and the development of early and late complications after surgery was the primary factor impacting global HRQoL after RARC with ICUD [48]. However, there is still little evidence regarding whether RARC is superior to ORC in improving HRQoL outcomes.
Seven prospective RCTs were performed to compare HRQoL between extracorporeal ORC and RARC ( Table 1). The first study, which was reported by Messer et al. [49], used the Functional Assessment of Cancer Therapy-Vanderbilt Cystectomy Index (FACT-VCI) to compare patients. There was no significant difference in scores between ORC and RARC at 3, 6, 9, and 12 months post-operatively. However, a significantly lower physical well-being score at six months was reported in the ORC group (mean difference, -2.5; p=0.04).
The second trial, conducted by Bochner et al. [18], analyzed HRQoL between extracorporeal ORC and RARC by comparing the European Organization for the Research and Treatment of Cancer Quality of Life 30-item core questionnaire (EORTC QLQ-C30) at 3 and 6 months post-operatively. There were no significant differences at 3 or 6 months postoperatively between the two groups in any domain.
The third RCT study by Khan et al. [17] compared extracorporeal ORC, LRC, and RARC with QoL assessed using the FACT-Bladder Cancer and FACT-General questionnaires. Most patients underwent an ileal conduit. Similar to prior studies, this study did not find any significant differences among the three approaches. However, they did not report preoperative baseline QoL or subdomain scores. Furthermore, the period over which post-operative QoL was measured differed for each patient, which was a limitation.
The fourth trial, the RAZOR trial, included the largest number of patients [50]. The FACT-VCI and Short-Form 8 Health Survey (SF-8) were used to compare extracorporeal ORC and RARC cohorts at 3 and 6 months post-operatively (n=178). There were no significant differences between cohorts at any time point for any of the FACT-VCI or SF-8 composite scores. Using data from the RAZOR trial, Venkatramani et al. [51] recently reported that patients require 3 to 6 months to recover to baseline levels after RC, irrespective of the surgical approach. Hand grip strength and activities of daily living (ADL) tended to recover to baseline earlier after RARC; however, there was no difference in the percentage of patients who recovered compared with ORC. To summarize the results of trials conducted up to 2020, there was generally no difference in QoL between RARC and ORC, while RARC was shown to be superior in terms of early recovery of ADL and physical well-being. However, these studies were limited by the use of extracorporeal urinary diversion, jeopardizing the benefits expected of a minimally invasive procedure [52].
The fifth RCT by Mastroianni et al. [5] compared HRQoL between ORC and RARC with ICUD. In their interim analysis, 1-year HRQoL outcomes were compared between ORC and RARC with ICUD [5]. EORTC QLQ-C30 and QLQ-BLM30 were collected at baseline and at 1 year. Overall, both groups reported significant worsening of body image and physical and sexual function (all p=0.012). Patients receiving ORC were more likely to report significant 1-year impairments in role functioning, symptom scales, and bowel symptoms (all p=0.048). On generalized linear mixed-effect regression, patients undergoing ORC experienced a significant increase in insomnia (p=0.047) and abdominal bloating and flatulence (p=0.035) compared to the RARC cohort. Patients receiving RARC reported significant urinary symptoms and problems (p=0.018).
The sixth is a single-center, double-blinded RCT, the BORARC trial. Similar to the fifth trial, it compared HRQoL between ORC and RARC with ICUD using the EORTC QLQ-C30 and QLQ-BLM30 questionnaires, and demonstrated that 90-day post-operative QoL did not differ between ORC and RARC [53].
The seventh is a multicenter RCT from the UK [33], which additionally analyzed early HRQoL at 5 weeks. RARC with ICUD showed superior results at 5 weeks compared with ORC on both the European Quality of Life 5-Dimension, 5-Level instrument and the World Health Organization Disability Assessment Schedule 2.0. However, as in the previous studies, the differences were no longer significant after 12 weeks.
Recently, Wijburg et al. [54] reported the results of a prospective comparative effectiveness study conducted in 19 Dutch centers. There was no statistically significant difference in HRQoL between ORC and RARC. Although this study was not an RCT, it has the advantage of being a large population multicenter study, and 88% of patients underwent intracorporeal reconstruction.
Collectively, most RCTs demonstrated that there is no significant difference in QoL between extracorporeal RARC and ORC. However, the actual impact of the RARC learning curve on clinical outcomes is unknown, although both the learning curve [55,56] and hospital volume [57] are likely to influence the outcomes of RARC. In addition, there is a current surgical trend of utilizing intracorporeal RARC, encompassing 95% of all RARCs [39], which may have greater benefits [58]. Thus, more RCTs are needed to reflect real-world clinical practice, to provide concrete and practical evidence.
Still, the above RCTs are limited by heterogeneous urinary diversion types and a distribution biased towards ileal conduit in most of the studies. Although the ileal conduit has the advantages of faster and easier surgery and fewer complications, the orthotopic neobladder generally offers significantly better QoL by maintaining body image and normal voiding function in suitable patients [59,60]. Such patients have better physical function and a more active lifestyle [61], including better sexual function [60]. Further RCTs are needed to perform subgroup analyses of different urinary diversion types when comparing the HRQoL of RARC and ORC. Meanwhile, there is strong evidence that functional prehabilitation, including aerobic physical activity, psychosocial counseling, and nutrition programs, has a positive impact on health, survival, and QoL [46,62]. Optimization of functional capacity before and after RC is considered an important factor in achieving better post-operative QoL [63]. The CanMoRe RCT is currently in progress and seeks to provide new knowledge on rehabilitation after RARC [62].
COST-EFFECTIVENESS
The need to set priorities in health care is becoming increasingly apparent, and thus, cost-effectiveness analysis, which defines cost-effectiveness quantitatively through objective measurements of net costs and health effects, is widely used to assess the relative value of different treatment options [64].
Before examining cost-effectiveness, several studies conducted cost analyses for RARC (Table 2). Smith et al. [65] performed a comparative cost analysis between RARC and ORC, which included variability in operation time, transfusion requirements, and hospital stay, and concluded that RARC is associated with a higher financial cost (+$1,640) than ORC. Several other studies have also noted that RARC itself incurs approximately 16%-19% higher costs than ORC [1,66]. However, one point to consider is that several extra costs arise from readmissions, which are known to occur more frequently after ORC than after RARC. Those who underwent readmission had direct costs 1.42 times higher than those who did not require readmission [67].
Some authors have conducted detailed analyses of the cost items in RARC and the best ways to reduce costs. In a study by the European Association of Urology-Young Academic Urologists, patients who underwent RARC with ICUD were recruited from 11 European centers in four countries (Belgium, France, the Netherlands, and the UK) from 2015 to 2020 [68]. Eighty-four percent of the costs of RARC were due to hospital stay (42%), ICU stay (3%), and operative time (39%), while 16% of the costs were due to robotic (8%) and stapling (8%) instruments. The authors suggested that decreasing the LOS and reducing operative time could help decrease the cost of RARC and make it more widely accessible. Another group suggested scenarios potentially resulting in significant cost savings for RARC, specifically an operating time ≤175 minutes, LOS ≤4 days, and RARC equipment costs ≤€281 [69].
Cost-effectiveness analysis has recently been performed by several study groups (Table 2). Bansal et al. developed a cost-decision tree model by considering data on LOS, operation times, transfusion rates, volume, and complication rates [1]. They revealed that although RARC is 18.9% more expensive than ORC, only a minimal improvement in QoL (a gain of 0.0988 quality-adjusted life years [QALYs]) is required for RARC to be considered a cost-effective alternative to ORC. In another study by Kukreja et al. [70], a cost-decision tree model incorporating complications, readmissions, and/or transfusions, and QALYs over a 90-day time horizon was used. They found that RARC costs $2,969 less per QALY than ORC. RARC may be the preferred strategy if complications can be prevented 74% of the time or transfusion can be avoided 70% of the time. As long as RARC can prevent complications and transfusions, it is more cost-effective than ORC. Recently, Machleid et al. [71] reported a similar result using a cost-decision tree model and incremental cost-effectiveness ratio (ICER). The model considered readmission or transfusion, short-term complications, and QALYs converted into net monetary benefits. They concluded that the intervention costs of RARC or ORC and the probabilities of complications had the greatest impact on ICER. At the £30,000/QALY threshold, RARC was more cost-effective and could result in improved utility in patients with bladder cancer.
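The basic arithmetic behind these decision-analytic comparisons can be summarized in a few lines. The sketch below is purely illustrative: the cost, QALY, and willingness-to-pay values are hypothetical placeholders (not figures from the studies cited above), and the functions are generic textbook definitions of the ICER and net monetary benefit rather than any specific published model.

# Illustrative sketch of an ICER / net monetary benefit comparison (Python).
# All numbers are hypothetical placeholders, not trial or registry data.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained by the new strategy vs. the old."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def net_monetary_benefit(cost, qaly, wtp):
    """NMB = QALYs * willingness-to-pay threshold - cost."""
    return qaly * wtp - cost

# Hypothetical expected costs and QALYs per patient.
cost_orc, qaly_orc = 20000.0, 0.50
cost_rarc, qaly_rarc = 23780.0, 0.60   # e.g., ~18.9% more expensive

wtp = 30000.0  # willingness-to-pay threshold per QALY

print("ICER of RARC vs. ORC:", icer(cost_rarc, cost_orc, qaly_rarc, qaly_orc))
print("NMB ORC :", net_monetary_benefit(cost_orc, qaly_orc, wtp))
print("NMB RARC:", net_monetary_benefit(cost_rarc, qaly_rarc, wtp))
# The strategy with the higher NMB at the chosen threshold is preferred;
# varying the inputs shows how complication and transfusion probabilities
# (which drive expected cost and QALYs) shift the conclusion.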
However, evidence that RARC is the most suitable treatment remains inconclusive, because a recently released Dutch prospective study does not support the superiority of RARC over ORC in terms of cost-effectiveness [72]. The authors performed an incremental cost per QALY analysis at 1 year post-operatively and concluded that RARC shows no difference in terms of QALYs but is more expensive than ORC. Hence, RARC does not seem to provide better value for money compared with ORC. Although there is a rough consensus, we still need stronger evidence that RARC is more cost-effective than ORC.
CONCLUSIONS
Oncologically, RARC showed comparable oncological outcomes to ORC. With regard to complications, RARC was associated with lower estimated blood loss, lower intraoperative transfusion rates, shorter LOS, lower risk of Clavien-Dindo grade III-V complications, and lower 90-day rehospitalization rates than ORC. In particular, RARC with ICUD performed by high-volume centers significantly reduced the risk of post-operative major complications. In terms of QoL, RARC with ECUD showed comparable results to ORC, while RARC with ICUD was superior in some respects. High-volume center-based RCTs are required. Finally, although there is a rough consensus that only minimal improvements in QALYs are required for RARC to be a cost-effective alternative to ORC, more evidence is needed to draw more definite conclusions. Collectively, outcomes of RARC with ECUD were similar to those of ORC in several respects, while RARC with ICUD showed a tendency to produce better outcomes. It is necessary to solidify these conclusions through additional RCTs with patients undergoing RARC with ICUD.
CONFLICTS OF INTEREST
The authors have nothing to disclose.
FUNDING
None.
AUTHORS' CONTRIBUTIONS
Research conception and design: Ja Hyeon Ku. Data acquisition: Jang Hee Han. Statistical analysis: Jang Hee Han. Data analysis and interpretation: all authors. Drafting of the manuscript: all authors. Critical revision of the manuscript: all authors. Administrative, technical, or material support: all authors. Supervision: all authors. Approval of the final manuscript: all authors. | 2023-03-02T16:14:09.314Z | 2023-02-28T00:00:00.000 | {
"year": 2023,
"sha1": "4ca77a006f776b9a63ae19956ba11c8843174849",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4111/icu.20220384",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "026d03486af219e4350760c95bd031f2f2f53040",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252917788 | pes2o/s2orc | v3-fos-license | Interacting Jump Processes Preserve Semi-Global Markov Random Fields on Path Space
Consider a system of interacting particles indexed by the nodes of a graph whose vertices are equipped with marks representing parameters of the model such as the environment or initial data. Each particle takes values in a countable state space and evolves according to a (possibly non-Markovian) continuous-time pure jump process whose jump intensities depend only on its own state (or history) and marks as well as the states (or histories) and marks of particles and edges in its neighborhood in the graph. Under mild conditions on the jump intensities, it is shown that the trajectories of the interacting particle system exhibit a certain local or semi-global Markov random field property whenever the initial condition satisfies the same property. Our results complement recent works that establish the preservation of a local second-order Markov random field property for interacting diffusions. Our proof methodology in the context of jump processes is different, and works directly on infinite graphs, thereby bypassing any limiting arguments. Our results apply to models arising in diverse fields including statistical physics, neuroscience, epidemiology and opinion dynamics, and have direct applications to the study of marginal distributions of interacting particle systems on Cayley trees.
Introduction
A pure jump interacting particle system (IPS) describes a collection of randomly evolving particles indexed by the nodes of an underlying graph, where the dynamics of each particle in the collection is described by a pure jump process on a discrete state space, with jump rates depending not only on its own state but also on the states of neighboring particles in the graph. Commonly studied IPS include the voter model, contact process and Glauber dynamics for various statistical physics spin models like the Ising and Potts models [30], as well as many other models arising in engineering and operations research (see [18] for a list of references). Several works over the last two decades have shown that such IPS do not preserve Gibbsianness [12-15, 23, 25, 26]. In fact, as demonstrated in [12-15, 25], even if the initial states of particles form a Markov random field (MRF) with respect to the underlying interaction graph, the collection of states of all particles at some future time may fail to form an MRF of any order (with respect to the same graph). Indeed, this can occur even if the IPS is ergodic with a stationary distribution that is an MRF (see, e.g. [13]). As is well known, given a locally finite graph G = (V, E) and a Polish space Z, the Z V -valued random element Z is said to form a (local) MRF if for any disjoint partition A, B, S of V such that S is equal to N A (G), the neighborhood of the finite set A in G, we have Z_A ⊥⊥ Z_B | Z_S. (1.1) Furthermore, for any α ∈ N, Z is said to form an α-MRF if (1.1) instead holds whenever S is equal to N α A (G), the α-neighborhood of the finite set A in G (which is the set of vertices in A c that lie at a distance of at most α from A). On infinite graphs, one can also consider global MRFs: an MRF or α-MRF is said to be global if (1.1) holds even when A is infinite. Instead, we introduce the intermediate notion of a semi-global MRF (SGMRF), which will turn out to be more relevant for our purposes. An SGMRF or α-SGMRF is defined analogously to the MRF (respy. α-MRF) property except that (1.1) must hold even for infinite A whose α-neighborhood S is finite. In this article we show (under general conditions on the jump intensities and the interaction graph) that if the initial states form an α-MRF (respy. α-SGMRF) with α ≥ 2, then the trajectories also form an α-MRF (respy. α-SGMRF). In particular, this shows that there is preservation of the α-MRF and α-SGMRF properties at the level of trajectories, even if not at the level of states. In addition, we also show that this in general fails to hold if α = 1 (see Example 3.8).
The definition of an α-SGMRF is a natural extension of the definition of a "Markov chain on a tree," as stated in [35, Section 2] and [20, Chapter 12], to higher-order Markov chains and general graphs (see Appendix B for further discussion). An important motivation for establishing the second-order SGMRF property is that it can be used to obtain autonomous descriptions of marginal dynamics for IPS on trees as unique solutions to certain associated local equations [17, Chapter 6]. As shown in [18, Theorem 4.3 and Corollary 4.7], the local equations describe the limit of both the neighborhood empirical measure as well as the marginal dynamics at the root of IPS on sequences of uniformly rooted random regular graphs whose sizes grow to infinity, much in the spirit of mean-field limits for IPS on complete graphs [32]. The MRF property by itself is insufficient for such a characterization (as also observed in [28] in the context of diffusions).
Our results in fact apply to a far more general class of IPS characterized as solutions to Poissondriven stochastic differential equations (SDEs) that may be non-Markovian or heterogeneous. Non-Markovian dynamics are crucial to model a variety of applications in neuroscience, epidemiology and engineering, and heterogeneities arise naturally in many settings, including load balancing models [1,16,34]. We capture heterogeneities in the dynamics by equipping the interaction graph with (possibly random) marks on the vertices, which specify the initial states and/or initial histories of the particles, as well as heterogeneities in the dynamics, random environments and asymmetries in the local interactions with respect to the neighboring particles. The jump rates of each particle are allowed to depend on the histories of neighboring particles as well as the marks of vertices in the neighborhood (a precise model description is given in Section 3.1). When the rates satisfy some mild regularity conditions (stated in Assumptions 3.1 and 3.4), our main result (Theorem 3.7) shows that if the random marks form an MRF or SGMRF of order α ≥ 2, then the trajectories of the IPS also exhibit the same MRF property.
To the best of our knowledge, this article is the first exploration of MRF properties of trajectories of IPS described by jump processes. However, there exist results of a similar flavor in the context of diffusions (see [7,10,11,27] and references therein). Specifically, Theorem 2.7 of [27] establishes conditions under which trajectories of interacting diffusions preserve the second-order local MRF property. Our results generalize those of [27] in the jump process context by considering higher-order MRFs, as well as SGMRFs rather than just MRFs, and weakening assumptions on the initial data, allowing for more general initial data than the initial state of the process. Specifically, unlike in [27], we do not require that the initial conditions be absolutely continuous with respect to any product measure. This is of particular interest in the study of marginals of stationary Markov processes, as well as non-Markov processes, for which the initial data includes the history of the process before time zero. These "infinite histories" are typically highly singular so that even on finite graphs, the initial data will typically fail to be absolutely continuous with respect to any product measure.
Despite some similarity in the results, it is worth emphasizing that our proof technique differs from that used for diffusions in [27]. In the latter work, the trajectories of interacting diffusions are first shown to preserve the 2-MRF property on finite graphs, and then a limiting argument is used to extend to infinite graphs. This argument exploits the fact that interacting diffusions on infinite graphs arise as local weak limits of interacting diffusions on finite graphs [29, Theorem 3.7], and shows that the 2-MRF property is preserved along suitably constructed convergent sequences. This approach requires one to impose certain assumptions about the continuity of the dynamics with respect to the initial condition (to ensure the aforementioned local weak convergence). In contrast, we directly prove our main result for IPS on infinite graphs without invoking limiting arguments. A brief outline of our approach, which allows us to handle both the MRF and SGMRF properties in a unified manner, is as follows. First, given an IPS we construct an associated sequence of reference processes on infinite graphs, whose (initial data and) trajectories satisfy a certain conditional independence property that is akin to an MRF property. We then show that the law of the IPS is absolutely continuous with respect to that of each of the reference processes and use the form of the Radon-Nikodym derivative to deduce the MRF or SGMRF property of the IPS from the conditional independence properties of the reference processes. An intermediate step in this process that may be of independent interest is an infinite-dimensional Girsanov theorem for IPS on possibly non-locally finite graphs even when the defining SDE may have multiple weak solutions (see Proposition 4.9). The proof of this proposition proceeds by first establishing a duality between the IPS and a point process and then applying extensions of standard results for non-explosive point processes to the explosive marked point process setting to deduce the resulting Radon-Nikodym derivative.
In Section 2 we establish basic definitions and notations that will be in use throughout the article. In Section 3, we introduce the model, state our main results and provide certain counterexamples that suggest our results cannot, in general, be reasonably strengthened. In Section 4, we prove the main result taking for granted the absolute continuity of the IPS with respect to the reference processes and the form of the associated Radon-Nikodym derivatives. The latter are derived in Section 5. In Appendix A, we prove a technical lemma that is used to describe the conditional structure of the reference processes. In Appendix B we provide an alternate characterization of the SGMRF property and derive associated properties. In Appendix C we show that the point process dual is well defined. Lastly, in Appendix D, we apply the results of [18] to prove that the sequence of reference IPS is well defined under the conditions we impose upon it.
Preliminaries and Notation
For any real numbers a, b ∈ R, we write a ∧ b := min{a, b} and a ∨ b := max{a, b}.
Graph Notation: Given a set A, let |A| denote its cardinality. Let G := (V, E) represent a graph, with countable vertex set V and edge set E. Graphs are always assumed to be simple (i.e., they do not have self-loops or multi-edges) and undirected. For u, v ∈ V, a path between u and v in G is defined to be a sequence of vertices u = v 0 , v 1 , . . . , v n−1 , v n = v for some n ∈ N 0 such that for all i ∈ {1, . . . , n}, {v i−1 , v i } ∈ E and v i ≠ v j whenever i ≠ j, except possibly when (i, j) = (0, n), in which case the path is said to be a cycle. The length of the path is the number of edges in the path. Let d G (u, v) denote the usual graph distance, which is the length of the shortest path between u and v in G. If there are no paths between u and v, then d G (u, v) = ∞. Note that for v ∈ V, the sequence {v 0 = v} is a path of length 0 so that d G (v, v) = 0.
For any subset U ⊆ V , let N U := N U (G) := {v ∈ V \ U : {u, v} ∈ E for some u ∈ U } denote the neighborhood of U in G and let cl U := cl U (G) := U ∪ N U denote its closure. Also, for α ∈ N, let N^α_U := N^α_U(G) denote the α-neighborhood of U in G, that is, the set of vertices in V \ U that lie at a distance of at most α from U. If U = {v} is a singleton and the graph is clear from the context, then we write N_v, N^α_v and cl_v. The degree of a vertex v is equal to |N v |, the graph G is said to be locally finite if each of its vertices has finite degree and the graph G is said to be of bounded degree if sup v∈V |N v | < ∞. Unless otherwise specified, all graphs are assumed to be locally finite. On occasion, we may slightly abuse this notation.
Path Space Notation: Given any countable index set U and Polish space Z, let Z U = {(z v ) v∈U : z v ∈ Z for all v ∈ U } denote the corresponding configuration space, equipped with the product topology. For any z ∈ Z V , z U ∈ Z U denotes the restriction of z to Z U , that is, z U = (z v ) v∈U . We consider IPS with a countable state space X , which we identify with a subset of the integers Z equipped with the discrete topology. Given U ⊆ V and a (closed, half-open or open) interval I ⊆ [0, ∞), let D(I, X U ) denote the space of càdlàg functions from I to X U . Given 0 < t < ∞, and I = [0, t] or I = [0, t), for conciseness denote D(I, X U ) by D U t or D U t− , respectively. Also, set D U := D([0, ∞), X U ) and omit the superscript U from the notation when |U | = 1. If x ∈ D U and v ∈ U , then x v (t) denotes the value of the vth component of x at time t ≥ 0. For any t ≥ 0, the restrictions of x to [0, t] and [0, t) are respectively denoted by x[t] ∈ D U t and x[t) ∈ D U t− . Also, set ∆x(t) := x(t) − x(t−). For an interval I ⊆ [0, ∞), finite U ⊆ V and x ∈ D(I, X U ), let Disc(x) denote the set of jump times of x, that is, the set of times t ∈ I at which ∆x(t) ≠ 0; in particular, ∆x(t) = 0 for every t ∉ Disc(x). Next, for any fixed t ∈ R + , any strictly increasing locally Lipschitz function ψ : [0, t) → [0, ∞) with a locally Lipschitz inverse identifies D U t− with D U via the map x → x ∘ ψ^{-1}. This can be used to show that D U t− is also Polish under the J1 topology.
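To make the graph neighborhood notation above concrete, the following short Python sketch computes N^α_U and cl_U for a finite graph by breadth-first search. It is only an illustration of the definitions; the adjacency-dictionary representation and the example graph are our own choices and are not part of the paper.

from collections import deque

def alpha_neighborhood(adj, U, alpha):
    """Return N^alpha_U: vertices outside U within graph distance alpha of U.

    adj: dict mapping each vertex to an iterable of its neighbors (undirected).
    U:   set of vertices.
    alpha: positive integer.
    """
    U = set(U)
    dist = {u: 0 for u in U}
    queue = deque(U)
    while queue:
        v = queue.popleft()
        if dist[v] == alpha:
            continue  # do not expand beyond distance alpha
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return {v for v, d in dist.items() if v not in U and d <= alpha}

def closure(adj, U):
    """cl_U = U together with its (1-)neighborhood."""
    return set(U) | alpha_neighborhood(adj, U, 1)

# Example: path graph 1-2-3-4-5.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(alpha_neighborhood(adj, {1}, 2))  # {2, 3}
print(closure(adj, {3}))                # {2, 3, 4}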
Measure Notation: For any Polish space Z, let B(Z) be the Borel σ-algebra on Z, and let P(Z) be the space of probability measures on (Z, B(Z)) equipped with the topology of weak convergence. Given U ⊆ V and η ∈ P(Z V ), let η[U ] be the marginal distribution of η restricted to Z U . Given any η ∈ P(Z) and a Z-valued random element Z, we say Z ∼ η if the distribution of Z is given by η. Given random elements Y 1 , Y 2 and Y 3 , we write Y 1 ⊥⊥ Y 2 (respy. Y 1 ⊥⊥ Y 2 | Y 3 ) to indicate that Y 1 and Y 2 are independent (respy. conditionally independent given Y 3 ). If η ∈ P (D(R + , Z)) for some Polish space Z, then η t ∈ P (D([0, t], Z)) and η t− ∈ P (D([0, t), Z)) denote the restrictions of η to the respective Borel σ-algebras B(D([0, t], Z)) and B(D([0, t), Z)).
A filtration is said to satisfy the usual conditions if it is complete and right-continuous. Unless otherwise stated, all filtrations are assumed to be augmented so as to satisfy the usual conditions. Filtrations will typically be represented by the letters F, G and H, indexed by R + or [0, T ] and for each t ∈ R + , the corresponding σ-algebras will be denoted by F t , G t , H t respectively. Given a filtration G := {G t } t∈R + , recall that a simple sufficient condition for a process Z to be G-predictable is that t → Z t is almost surely left-continuous and G-adapted. Given two filtrations F and G, as usual F ∨ G := (F t ∨ G t ) t∈R + denotes the smallest filtration containing both F and G. Given a random element ζ, we use H ζ to denote the completion of the σ-algebra generated by ζ (with respect to a probability measure that will be expressed explicitly if not clear from the context), and for any càdlàg stochastic process Z, we define H Z := {H Z t } t∈R + to be the smallest filtration satisfying the usual conditions such that Z is adapted to H Z . For all processes Z considered in this paper, H Z will be equal to the completion of the natural filtration of Z, so that for all t ∈ R + , H Z t coincides with the completion of σ(Z(s) : s ≤ t); see the discussion in [9, page 357] for more details.
Poisson Point Processes: Let Z be a Polish space equipped with its Borel σ-algebra and a metric d Z that induces the Polish topology. On intervals I ⊆ R + and on countable spaces (which will be assumed to have an implicit embedding in N), this will be the standard absolute difference metric, and on càdlàg spaces it will be the J1 metric. Finally, if Z := ∏ i∈I Z i for some finite index set I, then d Z is taken to be a corresponding product metric. Let N(Z) be the space of locally finite, nonnegative integer-valued measures, that is, for any p ∈ N(Z) and A ∈ B(Z), p(A) ∈ N 0 ∪ {∞} and p(A) < ∞ for every A that is bounded with respect to d Z . We equip N(Z) with the weak-hash topology, which then makes it a Polish space [9, page 2 and Proposition 9.1.IV(iii)], [31]. Also, note that the map N(Z) ∋ p → p(A) is Borel-measurable for any A ∈ B(Z).
Let η be any nonnegative, locally finite Borel measure on Z, that is, η(B) < ∞ for every bounded set B ∈ B(Z).
Model Description and Assumptions
We consider IPS in which each particle takes values in a countable state space X (viewed without loss of generality as a subset of Z) and has state transitions that lie in some finite jump set J ⊆ {i − j : i, j ∈ X , i ≠ j}. We restrict consideration to the case |J | < ∞ because this setting leads to simpler and more transparent expressions and seems to cover most examples of interest, although it is straightforward to generalize our results to the case of a countable jump set J . The data specifying the model consists of a deterministic (simple, locally finite, undirected) graph G = (V, E) that encodes the interaction structure, the initial data κ ∈ K V , where K is a Polish space, and a family of jump rate functions r v j : R + × (K × D) V → R + , j ∈ J , v ∈ V , that specifies the dynamics, where D is the space of càdlàg functions taking values in X (using the notation from Section 2). The initial data κ can not only capture the initial state at time zero (for a Markov process) or history before time zero (for a non-Markovian process), it can also be used to encode other state parameters of the model such as random environments and heterogeneities in particle dynamics (see [18, Section 4.4] for concrete examples). We assume interactions between particles are local (with respect to the graph G), predictable and with regular paths, as encapsulated in the following assumption.
Assumption 3.1. The family of rate functions r := {r v j } v∈V,j∈J consists of Borel measurable functions from R + × (K × D) V to R + that satisfy the following three conditions:
1. (locality) for every v ∈ V and j ∈ J , there exists a function r̃ v j : R + × (K × D) clv → R + such that for every (t, ϑ, x) ∈ R + × (K × D) V , r v j (t, ϑ, x) = r̃ v j (t, ϑ clv , x clv ).
In what follows, Leb is Lebesgue measure on R + and # J is the counting measure on J .
Definition 3.2. The solution space associated with the K V -valued (random) initial data κ consists of a complete filtered probability space (Ω, F, F, P), with F satisfying the usual conditions, which supports the initial data κ with F 0 ⊇ σ(κ), together with a collection of i.i.d. F-Poisson processes N := {N v } v∈V on the space R 2 + × J with intensity measure Leb 2 ⊗ # J , referred to as driving Poisson processes.
Given the solution space (Ω, F, F, P), initial data κ and jump rate function family r := {r v j } v∈V,j∈J that satisfy Assumption 3.1, we now describe the associated IPS X as a solution to the following Poisson-driven SDE:
X v (t) = X v (0) + ∫ (0,t]×R + ×J j 1{r ≤ r v j (s, κ, X)} N v (ds, dr, dj), t ≥ 0, v ∈ V. (3.1)
Note that Assumption 3.1 implies that (s, κ, X) → r v j (s, κ, X) depends on X and κ only via X clv [s) and κ clv . Since, as mentioned above, the initial data κ may contain more than the initial state, we will find it convenient to express the latter as a Borel measurable function ξ : K → X of the initial data:
X v (0) = ξ(κ v ), v ∈ V. (3.2)
We call ξ the initial condition map, and we will refer to (κ, ξ) as the initial data pair.
Definition 3.3. A weak solution to (3.1)-(3.2) for the initial data κ is an F-adapted càdlàg stochastic process X defined on an associated solution space (Ω, F, F, P) that satisfies (3.1)-(3.2) almost surely. The SDE (3.1)-(3.2) is said to be strongly well-posed for the initial data κ if on any solution space (Ω, F, F, P) associated with κ, there exists a weak solution X to (3.1)-(3.2) for the initial data κ and the SDE (3.1)-(3.2) is pathwise unique in the sense that given any other weak solution Y to (3.1)-(3.2) on the same solution space (and hence, with the same driving Poisson processes and initial data) it follows that X = Y almost surely.
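The Poisson-driven SDE above has a simple algorithmic reading on a finite graph with bounded, Markovian rates: each vertex receives an independent stream of candidate (time, uniform mark, jump size) points, and a candidate jump of size j at vertex v at time s is accepted precisely when the mark falls below the current rate r v j. The following Python sketch is an illustrative thinning simulation under these simplifying assumptions (finite graph, rates depending only on current neighboring states, uniform bound C as in Assumption 3.4); the specific rate function, graph, and parameters are placeholders of our own and not part of the model specification.

import random

def simulate_ips(adj, rate, x0, jumps, T, C, seed=0):
    """Thinning-based simulation of a finite pure jump IPS on [0, T].

    adj:   dict vertex -> list of neighbors.
    rate:  rate(v, j, x) -> jump intensity of size j at vertex v given state dict x,
           assumed Markovian and bounded above by C.
    x0:    dict of initial states.
    jumps: finite list of admissible jump sizes J.
    C:     uniform upper bound on every rate (dominating intensity).
    Returns the list of accepted (time, vertex, jump) events.
    """
    rng = random.Random(seed)
    x = dict(x0)
    events = []
    vertices = list(adj)
    total = C * len(vertices) * len(jumps)  # total candidate-event intensity
    t = 0.0
    while True:
        t += rng.expovariate(total)
        if t > T:
            return events
        v = rng.choice(vertices)            # candidate vertex, uniform
        j = rng.choice(jumps)               # candidate jump size, uniform
        if rng.random() * C <= rate(v, j, x):   # accept with probability rate / C
            x[v] += j
            events.append((t, v, j))

# Placeholder example on the path 1-2-3: a vertex flips between 0 and 1 at a rate
# equal to 1 plus the number of disagreeing neighbors; C = 1 + max degree.
adj = {1: [2], 2: [1, 3], 3: [2]}

def rate(v, j, x):
    target = x[v] + j
    if target not in (0, 1):
        return 0.0
    return 1.0 + sum(1 for w in adj[v] if x[w] != x[v])

print(simulate_ips(adj, rate, {1: 0, 2: 1, 3: 0}, jumps=[+1, -1], T=5.0, C=3.0))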
Our main result holds under the following mild condition that the jump rate functions at a vertex satisfy a certain degree-dependent bound.
Assumption 3.4. There exists a function C : N × R + → R + that is non-decreasing in each of its arguments and such that for any v ∈ V, j ∈ J and t ∈ R + , r v j (t, ·, ·) ≤ C(|cl v |, t).
Remark 3.6. Whenever the SDE (3.1)-(3.2) is strongly well-posed for the initial data κ, it can be shown (cf. [18]) that every such weak solution is necessarily also strong in the sense that it is adapted to the filtration H κ ∨ H N . Given this equivalence, we omit the qualifier "weak" or "strong" for solutions to (3.1)-(3.2) when the SDE is strongly well-posed.
Unless otherwise specified, we use (Ω, F, F, P) and N to denote the solution space and driving Poisson processes of (3.1)-(3.2).
Main Result and Counterexamples
Our main result shows that the trajectories of the IPS propagate certain MRF properties. To state these precisely, recall the definition of MRF, α-MRF, SGMRF and α-SGMRF from Section 1.
Theorem 3.7. Suppose the jump rate function family r := {r v j } v∈V,j∈J satisfies Assumptions 3.1 and 3.4, and G is either a graph of bounded degree or an a.s. realization of a Galton-Watson tree whose offspring distribution has a finite first moment. Given an initial data pair (κ, ξ), let X = X[∞) be a solution to (3.1)-(3.2). If κ forms an α-MRF (respy. α-SGMRF) with respect to G for some integer α ≥ 2, then for each t ∈ (0, ∞], (κ, X[t)) forms an α-MRF (respy. α-SGMRF) with respect to G. Theorem 3.7 is a direct consequence of a more general result, Proposition 4.3, which establishes this "preservation of MRF" property for IPS on a broader class of graphs (that satisfy the condition stated in Assumption 4.1), and Lemma 4.2, which shows that this class includes the graphs mentioned in Theorem 3.7. The proof of Proposition 4.3 is presented in Section 4.4.
Theorem 3.7 is used in forthcoming work [19] to obtain an autonomous characterization of the marginal distribution on the root neighborhood of an IPS with homogeneous jump rate functions on the d-regular tree. As elaborated in the next section, the theorem is also relevant to the study of Gibbs-non Gibbs transitions of spin models.
We now describe two counterexamples that demonstrate that the results in Theorem 3.7 cannot in general be improved. Specifically, the first example shows that the analog of Theorem 3.7 does not in general hold when α = 1.
Example 3.8. There exists an IPS X on a finite graph G = (V, E) with jump rate functions r and initial data pair (κ, ξ) such that the components of κ are mutually independent and for which the trajectories X[t) fail to form an MRF with respect to G for any t > 0. One such example can be obtained with a suitable choice of (Markovian) jump rate functions. The model satisfies Assumptions 3.1 and 3.4, and the maximum degree of any vertex in G is 2. In this example, X 1 (0) = X 3 (0) with probability 1/2 and X 2 ≡ 0 on the event {X 1 (0) = X 3 (0)}. Note that the function f : D t− → R defined by f (y) = y(0) is bounded and measurable, and since X 2 ≡ 0 on {X 1 (0) = X 3 (0)}, the event that X 2 [t) has a jump implies X 1 (0) ≠ X 3 (0); conditioning on X 2 [t) therefore introduces dependence between f(X 1 [t)) = X 1 (0) and f(X 3 [t)) = X 3 (0). Hence, the trajectories X[t) do not form an MRF for any t > 0.
Next we observe that for some t > 0, the states X(t) = {X v (t)} v∈V (as opposed to the trajectories) may fail to form an α-MRF for any α ∈ N even if the initial data κ forms an SGMRF. Indeed, this follows from the substantial literature on the topic of dynamic transitions of IPS from Gibbs to non-Gibbs states as demonstrated in the example below.
Example 3.9. Fix d > 2. Let G be the infinite d-regular tree and let ν be the positive-boundary ferromagnetic Ising model for an inverse temperature-magnetic field pair (β, h) at which the Ising model experiences a phase transition (as described in [20, Section 12.2]). If X is the IPS corresponding to infinite-temperature Glauber dynamics (as described in [13]) with initial condition X(0) ∼ ν, then even though X(0) forms an α-SGMRF for every α ∈ N, there is an interval of times over which the states X(t) fail to form an α-MRF for any α ∈ N. The fact that X(0) is an MRF follows from the fact that it is a Gibbs measure associated with a Markovian specification (see [20, Section 12.2]). Furthermore, as an extremal countable state MRF [20, Theorem 12.31], it is also a Markov chain on the tree [35, Corollary 2]. By Lemma B.2, this implies that it is also an SGMRF and therefore an α-SGMRF for every α ∈ N. The assertion of the example then follows from [12, Theorem 3.9 and Remark 3.10]. Since Law(X(t)) is non-Gibbs (i.e., non-quasilocal) in the stated interval, X(t) fails to form an α-MRF for any α ∈ N.
Statement of the More General Result
For simplicity of formulation, in Theorem 3.7 we only addressed certain classes of graphs G. However, as shown in Proposition 4.3 below, the conclusion of Theorem 3.7 in fact holds for any graph-jump rate function pair that satisfies the following more general (but less transparent) assumption.
The assumption is expressed in terms of a certain family of reference processes, which are defined via modified versions of the original family of jump rate functions r := {r v j } v∈V,j∈J as follows: for any W ⊂ V , let r W := { r W,v j } v∈V,j∈J be the family of jump rate functions defined by
r W,v j (t, ϑ, x) := 1 if v ∈ W, and r W,v j (t, ϑ, x) := r v j (t, ϑ, x) if v ∉ W, (4.1)
for (t, ϑ, x) ∈ R + × (K × D) V , v ∈ V and j ∈ J. The property that each reference IPS is a strong solution to the associated SDE plays a crucial role in the proof of Proposition 4.3 (see Remark 3.6 and Proposition 4.4). This motivates the following assumption.
Assumption 4.1. The graph G, the initial data pair (κ, ξ) and the family of jump rate functions r := {r v j } v∈V,j∈J are such that for every finite (and possibly empty) W ⊆ V , the SDE (3.1)-(3.2) is strongly well-posed for the initial data κ when r is replaced with the modified jump rate function family r W := { r W,v j } v∈V,j∈J defined in (4.1). The next lemma shows that this assumption holds under the conditions of Theorem 3.7.
Lemma 4.2. Suppose r := {r v j } v∈V,j∈J satisfies Assumptions 3.1 and 3.4 and G is either a graph with finite maximal degree or an a.s. realization of a Galton-Watson tree whose offspring distribution has a finite first moment. Then G, (κ, ξ) and r satisfy Assumption 4.1.
The proof of Lemma 4.2 is a technical extension of results in [18], and thus deferred to Appendix D. As shown therein, Assumption 4.1 in fact holds for the large class of finitely dissociable graphs introduced in [18, Definition 5.11] whenever the jump rate function family r satisfies Assumptions 3.1 and 3.4.
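The modification in (4.1) — unit rates at the vertices in W, the original rates elsewhere — decouples the W-vertices from the rest of the system, which is what drives the conditional independence structure of the reference processes below. The following minimal Python sketch illustrates this construction; the graph and the original rate function are hypothetical placeholders (in the spirit of the earlier simulation sketch), and the snippet is an illustration rather than code accompanying the paper.

def modified_rate(rate, W):
    """Reference-rate family r^W: unit rate at vertices in W, original rate elsewhere."""
    def r_W(v, j, x):
        return 1.0 if v in W else rate(v, j, x)
    return r_W

# Tiny demo with a placeholder original rate on the path 1-2-3:
adj = {1: [2], 2: [1, 3], 3: [2]}

def rate(v, j, x):
    # original rate: 1 plus the number of neighbors disagreeing with v
    return 1.0 + sum(1 for w in adj[v] if x[w] != x[v])

r_ref = modified_rate(rate, W={2})
x = {1: 0, 2: 1, 3: 0}
print(rate(2, +1, x), r_ref(2, +1, x), r_ref(1, +1, x))  # 3.0 1.0 2.0
# Vertex 2 now jumps by every admissible size at rate 1, independently of its
# neighbors, while the other vertices keep their original interacting rates.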
We now state the generalization of Theorem 3.7.
Proposition 4.3. Suppose the jump rate function family r := {r v j } v∈V,j∈J satisfies Assumptions 3.1 and 3.4, and suppose G, (κ, ξ) and r satisfy Assumption 4.1. Let X = X[∞) be a solution to (3.1)-(3.2) for the initial data pair (κ, ξ). If κ forms an α-MRF (respy. α-SGMRF) with respect to G for some integer α ≥ 2, then for each t ∈ (0, ∞], (κ, X[t)) forms an α-MRF (respy. α-SGMRF) with respect to G.
The proof of Proposition 4.3 is given in Section 4.4. Its outline is as follows. Fix G, (κ, ξ) and r as in the proposition, and let X = {X v } v∈V be a solution to the associated SDE (3.1)-(3.2). Also, fix α ≥ 2 and assume that κ forms an α-MRF on K V with respect to G. For any t ∈ (0, ∞), our proof that (κ, X[t)) also forms an α-MRF can be broken into four main steps. First, in Section 4.2, we construct a sequence of X V -valued "reference processes" { X n } n∈ N , with initial data pair (κ, ξ) having the property that for any partition A, B, S of V for which S = N α A and A is finite, we have
(κ A , X n A [t)) ⊥⊥ (κ B , X n B [t)) | (κ S , X n S [t)) (4.2)
for all n sufficiently large (depending on S). Next, in Section 4.3 we compute the Radon-Nikodym derivative of the law on path space of the IPS X with respect to that of the reference process X n for any n ∈ N by first establishing a duality relation between IPS and point processes (see Proposition 5.2) and then leveraging results from point process theory. Next, combining an explicit factorization of this Radon-Nikodym derivative with a slight modification of a result from [33] (see Lemma 4.12), we prove in Section 4.4 that (4.2) must hold with X n replaced by X, which implies that (κ, X[t)) forms an α-MRF with respect to G for all t ∈ (0, ∞). Finally, we extend the result to the case t = ∞ via a standard martingale argument. The proof of preservation of the SGMRF property proceeds in a similar fashion, first assuming κ forms an α-SGMRF and following the above argument, where now A may be infinite, but S must still be finite.
Reference Processes and their Conditional Independence Properties
For the remainder of the article, we assume that X = Z. Note that this is without loss of generality because the IPS X may be regarded as a process with state space Z V such that X v (t) ∈ X almost surely for all v ∈ V and t ∈ R + . Given the initial data pair (κ, ξ) from (3.2), for any finite vertex set W ⊆ V , let X W be the solution to the SDE (4.3) obtained from (3.1)-(3.2) by replacing, for every v ∈ W and j ∈ J , the jump rate r v j by the constant 1. The equation (4.3) can be rewritten in terms of the modified rate function family r W := { r W,v j } v∈V,j∈J of (4.1), namely as the SDE (3.1)-(3.2) with r replaced by r W . Thus, by Assumption 4.1, the SDE (4.3) is strongly well-posed and X W is a.s. uniquely defined.
Fix an arbitrary vertex ø ∈ V and, for each n ∈ N, define V n := {v ∈ V : d G (ø, v) ≤ n} and X n := X V n , the reference process X W with W = V n . The main result of this section, Proposition 4.4 below, shows that (κ, X n ) has a conditional structure that partially resembles an α-SGMRF.
Proposition 4.4. Fix t ∈ (0, ∞), α ∈ N and n ∈ N, and let A, B, S be a partition of V such that S = N α A (G) is finite, κ A ⊥⊥ κ B | κ S and cl S ⊆ V n . Then (κ A , X n A [t)) ⊥⊥ (κ B , X n B [t)) | (κ S , X n S [t)).
The proof of Proposition 4.4 is given after the following abstract technical lemma, which provides sufficient conditions under which conditional independence properties can be transferred from one collection of random elements to another.
Lemma 4.5. For each i, j = 1, 2, 3, let Z j i be a random element taking values in some Polish space Z j i that satisfies the following properties: Then Relegating the proof of the lemma to Appendix A, we first apply it to prove Proposition 4.4.
First, note that κ D 1 ⊥ ⊥ κ D 2 |κ D 3 by assumption and so (Z 1 j ) j=1,2,3 satisfies (4.6). Further, since D i , i = 1, 2, 3, are disjoint, {N D i } i=1,2,3 are mutually independent and, by assumption, also independent of {κ D i } i=1,2,3 . Thus, Z 1 and Z 2 satisfy property 1 of Lemma 4.5. It only remains to verify the measurability condition stated in property 2. The first inclusion H Z 1 i ⊆ H Z 3 i holds trivially for i = 1, 2, 3. To prove the second inclusion, note that X n is the a.s. unique solution to the SDE (4.3) with W = V n and so substituting the local jump rate functions r := { r v j } v∈V,j∈J from condition 1 of Assumption 3.1, we see that its marginal X n A on the set A solves the following SDE: (4.7) We now claim that cl v ⊆ A for every v ∈ A \ V n , and thus the SDE (4.7) for the marginal X A is autonomously defined. The claim holds because for any v ∈ A \ V n , the fact that v ∈ A, Since A, B and S form a partition of V , this shows that cl v ⊆ A.
We now show that the SDE (4.7) is strongly well-posed. Fix any solution space (Ω, F, F, P) supporting the driving Poisson processes N A (in the sense of Definition 3.2) and κ A , and consider an extension (Ω,F ,F,P) of the solution space that also supports i.i.d. Poisson processes N V \A and κ V \A such that N := ( N v ) v∈V is a collection of driving Poisson processes of (4.3) with W = V n . Let Y A and Z A be two weak solutions to (4.7) on the solution space (Ω, F, F, P) with the same initial data κ A , and, recalling that (4.3) is strongly well-posed, let X = X n be the a.s. unique solution to (4.3) with W = V n and initial data κ. Then, using the property that the marginal on A of any solution to (4.7) is autonomously defined, the processes are both solutions to (4.3) on (Ω, F, F, P) with W = V n . Therefore X = Y = Z a.s., and so in particular Y A = Z A a.s.. Thus, (4.7) is strongly well-posed. In turn, the strong well-posedness of (4.7) implies X n A is the a.s. unique solution to (4.7) driven by N A , and hence by Remark 3.6, (κ A , X n A [t)) must be H N A t− ∨ H κ A -measurable. This proves the second inclusion in property 2 for the case i = 1.
The case i = 2 can be argued similarly. We first claim that N α B (G) ⊆ S. Indeed, if the claim were not true, then since A, B, S is a (disjoint) partition of V , it must be true that N α B (G) ∩ A = ∅ or equivalently, there must exist u ∈ A and v ∈ B such that d G (u, v) ≤ α. However, since S = N α A (G) by assumption, this implies v ∈ N α A (G) ∩ B = S ∩ B = ∅, which is a contradiction. This proves the claim. Given the claim, an identical argument as that used for i = 1 shows that X n B is autonomously defined and (κ B , X n B [t)) is H N B t− ∨ H κ B -measurable, thus proving the second inclusion in property 2 of Lemma 4.5 when i = 2. The proof for the case i = 3 is much simpler. By (4.3) with W = V n , for each v ∈ S ⊂ V n , X n v is given by the one-dimensional, H κ S ∨ H N S -adapted process t → ξ(κ v ) + (0,t]×(0,1]×J jN v (ds, dr, dj). This shows that X n v is a measurable function of (κ v , N v ). Since this is true for each v ∈ S, this proves that property 2 of Lemma 4.5 also holds for i = 3. This completes the verification of the conditions of Lemma 4.5 for the {Z j i } i,j∈{1,2,3} defined at the start of the proof, and thus proves the proposition.
Change of Measure Results on Infinite Graphs
In this section, we identify the form of the Radon-Nikodym derivative of the law of the solution to the SDE (3.1)-(3.2) with respect to that of the reference process X W for finite W . To state our result, we first introduce the notions of proper trajectories and their so-called jump characteristics. Recall the definition of Disc t (x) and ∆x from Section 2.
is an increasing sequence, and ∆x v k (x) (t k (x)) = j k (x) for each k < |Disc ∞ (x) | + 1. When the trajectory x is clear from the context, we simply write We now establish conditions under which the jump characteristics of a process exist.
Lemma 4.8. Suppose the jump rate function family r satisfies conditions 2 and 3 of Assumption 3.1. Given the initial data pair (κ, ξ), for any finite W ⊆ V , let X W be any weak solution to the SDE (4.3) and let X be any weak solution to (3.1)-(3.2) for the same initial data pair (κ, ξ). Then for any finite U ⊆ V , the jump characteristics of X W U and X U are almost surely well defined. Proof. Note that X ∅ is a weak solution to (3.1)-(3.2) so we may write X = X ∅ without loss of generality. Thus, for any fixed, finite U, W ⊆ V , it suffices to prove that the jump characteristics of X W U exist. This occurs precisely when X W U is proper and its discontinuity times can be enumerated in increasing order. Suppose X W solves (4.3) on the solution space (Ω, F, F, P) supporting the driving Poisson processes N. Fix any u, v ∈ V and t ∈ R + . Note that by (4 by the independence of N u and N v . Therefore X W is almost surely proper, which implies that X W U is proper. Furthermore, because U is finite, X U is discrete and equipped with a complete metric. Therefore, |Disc t X W U | < ∞ for every t ∈ R + , which shows that the jump characteristics of X W U exist. Our next result is a general change of measure result that characterizes the Radon-Nikodym derivative of the law of the solution to the SDE (3.1)-(3.2) with respect to the reference process X W in (4.3) in terms of the jump characteristics of X W W , which are a.s. well defined by Lemma 4.8. Proposition 4.9. Let G be a deterministic (not necessarily locally finite) graph. Suppose the jump rate function family r := {r v j } v∈V,j∈J satisfies conditions 2 and 3 of Assumption 3.1. Let W ⊆ V be any finite set and denote µ W := Law(κ, X W ) where X W is any weak solution to the SDE (4.3) for the initial data pair (κ, ξ). Also assume that for any (t, j, v) ∈ R + × J × V , t 0 r v j (s, κ, X W ) ds < ∞ a.s.. Define the filtration G := H κ ∨ H X W and define the process L W = (L W t ) t≥0 as follows: where {(t k , j k , v k )} are the jump characteristics of X W W . Then L W is a G-local martingale. Moreoever, if L W is also a G-martingale, then there exist a measure µ ∈ P((K×D) V ) and a weak solution X to the SDE (3.1)-(3.2) for the initial data pair (κ, ξ) such that where µ t− and µ W t− are the restrictions of the respective measures µ and µ W to the space B((K × D t− ) V ), and Law(κ, X) = µ.
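The explicit display (4.9) has not survived extraction above. As a guide to the reader, the classical likelihood ratio between a marked point process with stochastic intensity r and a unit-rate reference process takes the following Girsanov-type form; it is stated here only as the generic template of which (4.9) is, up to the precise indexing by the jump characteristics of the reference process on W, a specialization (this reconstruction is not a quotation of the original display):

\[
L^W_t \;=\; \Bigg(\prod_{k\,:\, t_k \le t} r^{v_k}_{j_k}\!\big(t_k, \kappa, \widetilde X^{W}\big)\Bigg)
\exp\Bigg(\sum_{(j,v)\in \mathcal J \times W} \int_0^t \Big(1 - r^{v}_{j}\big(s, \kappa, \widetilde X^{W}\big)\Big)\,ds\Bigg),
\]

where \(\widetilde X^{W}\) denotes the reference process and \(\{(t_k, j_k, v_k)\}\) its jump characteristics on W.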
When G is finite, this result is well known if the IPS is Markov (e.g. [5,Example 15.2.10]). When G is finite and the IPS is non-Markov, the result can be deduced by combining duality characterizations of the IPS in terms of a point process (as discussed in Section 5.1) with standard change of measure theorems for point processes (e.g., [5,Theorem 15.2.7], [9,Proposition 14.4.III] or [4, VIII T10]) and simple estimates to prove that the candidate Radon-Nikodym derivative is indeed a martingale (rather than just a local martingale). However, the case when G is infinite is more subtle even for Markov IPS due to the possibility of explosions, and Proposition 4.11 in fact addresses the more general case when the graph may not even be locally finite, necessitating further care. This more general setting is of interest for the study of analogous properties of IPS on random graphs. In fact, by letting G be the infinite, complete graph, Proposition 4.9 becomes applicable to solutions of a more general class of infinite-dimensional Poisson-driven SDEs not necessarily arising as locally interacting processes with respect to some graph. The proof of Proposition 4.9 relies on duality characterizations and is hence deferred to Section 5.2. However, in the presence of well-posedness, as a corollary of Proposition 4.9, we obtain the following change of measure result, which is used to prove Proposition 4.3.
Corollary 4.11. Suppose G, (κ, ξ) and r satisfy Assumptions 3.1, 3.4 and 4.1. For any n ∈ N, let V n be as in (4.4), let X n := X Vn and X be solutions to (3.1)-(3.2) and (4.3), respectively, and let µ n = Law(κ, X n ) and µ = Law(κ, X). Then, for any t ∈ (0, ∞), r v j (s, κ, X n ) − 1 ds a.s., Proof. Fix n ∈ N and let G := H κ ∨ H X n . Applying Proposition 4.9 with W = V n , the process L n := L Vn in (4.9) is a G-local martingale. We first prove that it is in fact a G-martingale. Let {θ ℓ } ℓ∈N be a localizing sequence for L n and for each ℓ ∈ N, let τ ℓ := inf{t : |Disc t X n Vn | ≥ ℓ} where the infimum of an empty set is taken to be infinite. Note that τ ℓ is a G-stopping time. Because X n Vn is a.s. càdlàg and X Vn is a discrete space, X n Vn can only have finitely many discontinuities in any finite time interval. Thus, lim ℓ→∞ τ ℓ = ∞ a.s., so {τ ℓ ∧ θ ℓ } ℓ∈N is also a localizing sequence for L n . Recall the definition of C : N×R + → R + from Assumption 3.4 and let d : However, note that |Disc t X n Vn | ∼ Poiss(|V n ||J |t) because by (4.1) r Vn,v j (·, κ, X n ) = 1 whenever (j, v) ∈ J × V n . Since J is assumed to be finite, this implies E sup ℓ∈N |L n t∧τ ℓ ∧θ ℓ | ≤ exp (t|V n ||J |) E |C(d, t)| |Disct( X n Vn )| < ∞, which shows that for each t ≥ 0, the sequence {L n t∧τ ℓ ∧θ ℓ } ℓ∈N is dominated by an integrable random variable, and is therefore uniformly integrable. By [8,Proposition 1.8], it follows that L n is a G-martingale and for any t ∈ R + , E[L n t ] = 1. Proposition 4.9 then implies that the measure µ ∈ P((K × D) V ) defined by d µ t− /d µ n t− (κ, X n ) = L n t− a.s. for all t > 0 is the law of a weak solution X to (3.1)-(3.2) for the initial data κ on the graph G. By Assumption 4.1, the SDE (3.1)-(3.2) is wellposed, which implies that (κ, X) (d) = (κ, X). This shows dµ t− /d µ n t− (κ, X n ) = d µ t− /d µ n t− (κ, X n ) = L n t− a.s. for every t > 0, as desired.
Proof of Proposition 4.3
As mentioned earlier, the proof of the preservation of the MRF property over any finite time interval proceeds by transferring analogous conditional independence properties for the reference processes, established in Proposition 4.4, to the original IPS. This makes use of a modified version of a result from [33], stated in Lemma 4.12 below.
Lemma 4.12. Let ( Ω, F ) be a measurable space, let F i ⊂ F , i = 0, 1, 2, be sub-σ-algebras, and let P 0 and P 1 be two probability measures on ( Ω, F ) such that P 1 ≪ P 0 . Assume that under P 0 , F 1 and F 2 are conditionally independent given F 0 . If in addition, the Radon-Nikodym derivative ρ := d P 1 /d P 0 with respect to ∨ 3 i=1 F i satisfies ρ = ρ 1 ρ 2 almost surely for some F i ∨ F 0 -measurable random variables ρ i , i = 1, 2, then under P 1 , F 1 and F 2 are also conditionally independent given F 0 .
Proof. If the filtrations were required to be complete, then this result would follow from [33, Theorem 3.6] under the stronger assumption that P 1 is equivalent to P 0 . However, as elaborated below, the same argument used in [33] also shows that the result holds under the weaker assumption of a not necessarily complete filtration and only absolute continuity of P 1 with respect to P 0 (rather than equivalence). Let Z be any bounded, F 1 -measurable random variable. Then, applying [3, Proposition B.41] and using the P 0 -conditional independence of F 1 and F 2 given F 0 and the F i -measurability of ρ i , i = 1, 2, we have P 1 -a.s., Thus, E P 1 Z| F 0 ∨ F 2 is F 0 -measurable and therefore P 1 -a.s. equal to E P 1 Z| F 0 . Since Z is arbitrary, F 1 and F 2 are conditionally independent given F 0 under P 1 as well.
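The conditional-expectation chain referred to (but not displayed) in the preceding proof is, in all likelihood, the standard conditional Bayes-formula computation, recorded here for completeness:

\[
E_{P_1}\!\big[Z \,\big|\, \mathcal F_0 \vee \mathcal F_2\big]
\;=\; \frac{E_{P_0}\!\big[Z\rho \,\big|\, \mathcal F_0 \vee \mathcal F_2\big]}{E_{P_0}\!\big[\rho \,\big|\, \mathcal F_0 \vee \mathcal F_2\big]}
\;=\; \frac{\rho_2\, E_{P_0}\!\big[Z\rho_1 \,\big|\, \mathcal F_0 \vee \mathcal F_2\big]}{\rho_2\, E_{P_0}\!\big[\rho_1 \,\big|\, \mathcal F_0 \vee \mathcal F_2\big]}
\;=\; \frac{E_{P_0}\!\big[Z\rho_1 \,\big|\, \mathcal F_0\big]}{E_{P_0}\!\big[\rho_1 \,\big|\, \mathcal F_0\big]},
\]

where the middle equality uses the F_2 ∨ F_0-measurability of ρ_2, and the last equality uses that Zρ_1 and ρ_1 are F_1 ∨ F_0-measurable together with the P_0-conditional independence of F_1 and F_2 given F_0. The right-hand side is manifestly F_0-measurable, which is exactly what the proof needs.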
Proof of Proposition 4.3: Assume κ forms an α-MRF (respy, α-SGMRF) for some α ≥ 2. Fix n ∈ N. By Assumption 4.1, the SDE (3.1)-(3.2) and the SDE (4.3) with W = V n , are both strongly well-posed. Let X and X n be the respective solutions. For t > 0, let µ t− := Law(κ, X[t)), µ n t− := Law(κ, X n [t)), µ := µ ∞− and µ n := µ n ∞− . Suppose that A, B, S ⊆ V partition V with |A| < ∞ (respy, |S| < ∞ if κ is an α-SGMRF) and let n be sufficiently large so that S = N α A ⊆ V n−1 . By assumption, we have κ A ⊥ ⊥ κ B |κ S and so if we fix t > 0, it follows from Proposition 4.4 that under µ n t− , To transfer this to µ t− , we would like to apply Lemma 4.12 with the following substitutions: ( Ω, F ) = (K × D t− ) V , B((K × D t− ) V )) , F i , i = 0, 1, 2 as just defined, P 0 = µ n t− , P 1 = µ t− , ρ = dµ t− /d µ n t− . Also, note that by Corollary 4.11, P 1 ≪ P 0 on B((K×D t− ) V ) with Radon-Nikodym derivative ρ := dµ t− /d µ n t− . To verify the remaining condition of Lemma 4.12, it only remains to show that ρ factorizes in the right way. To this end, recall the definition of the local jump rate function family r := { r v j } v∈V,j∈J from Assumption 3.1, and for v ∈ V , define g v : (K × D t− ) cl v (G) → R + as follows: x is proper, and set g v (ϑ, x) = 0 for any other x ∈ D t− . Letting {(t n k , j n k , v n k )} k∈N denote the jump characteristics of X n Vn (which are a.s. well defined by Lemma 4.8), Corollary 4.11 and the condition 1 of Assumption 3.1 imply Since α ≥ 2 and N α A (G) = S, the term in the first bracket on the right-hand side of (4.12) depends only on (κ A∪S , X n A∪S [t)) and is thus B((K × D t− ) A∪S = F 1 ∨ F 0 -measurable while the term in the second bracket depends only on (κ B∪S , X n B∪S [t)) and is thus B((K×D t− ) B∪S = F 2 ∨ F 0 -measurable. This completes the verification of all the conditions of Lemma 4.12, which allows us to conclude that Thus, we have shown that (κ, X[t)) forms an α-MRF (respy. α-SGMRF) with respect to G. We now extend this to the infinite time interval to show that the same is true for (κ, X). The argument we use is similar to the one applied in the proof of [27,Theorem 2.4]. Fix an element a / ∈ X . For any x ∈ D and t ∈ R + , we may embed the truncated function x[t) in D a := D(R + , X ∪ {a}) by setting x[t)(s) = a whenever s ≥ t and x[t)(s) = x(s) when s < t. Thus, x[t) → x as t → ∞ in D a . Let A, B, S ⊆ V be a partition such that S = N α A is finite, and if µ κ is not an α-SGMRF, then assume A is also finite. For each U ⊂ V , define f U : (K × D a ) U → R to be bounded and continuous. Then, as justified below the display, we have where the first equality uses the bounded convergence theorem and continuity of the functions f A , f B , f S , the second equality uses the relation (4.13) with t = s 1 , the third equality uses σ(κ S , X S ) = ∨ s∈R + σ(κ S , X S [s)), the Doob martingale convergence theorem, the continuity of f S and the bounded convergence theorem, and the final equality holds due to the bounded convergence theorem and the continuity of f A and f B . This proves (κ, X) forms an α-MRF (respy. α-SGMRF) with respect to G, as desired.
Proof of the Radon-Nikodym Derivative Characterization
The goal of this section is to prove the change of measure result in Proposition 4.9. The proof, which is given in Section 5.2, relies on a key duality characterization of the IPS that is first established in Proposition 5.2 of Section 5.1.
Dual Processes
We start in Section 5.1.1 by introducing some standard notation related to point processes.
Point Processes
Fix a filtered probability space (Ω, F, F, P) supporting a filtration G ⊆ F and a Polish space Z.
s. for all T ∈ R + , then P is said to be non-explosive or locally finite. All point processes P that we consider will be simple in the sense that sup t∈R + P ({t} × Z) ∈ {0, 1} almost surely. A marked point process P on Z is said to be G-adapted if for every t ∈ R + and A ∈ B([0, t] × Z), P (A) is G t -measurable. For any point process P on Z, H P is defined to be the minimal filtration satisfying the usual conditions such that P is H P -adapted.
Suppose that Z is equipped with a nonnegative, locally finite reference measure ℓ. Then a random function Γ : Ω × R + × Z → R + is said to be G-mark predictable if it is measurable with respect to P(G) ⊗ B(Z), where P(G) is the predictable σ-algebra generated by G [9, page 379]. As an immediate consequence, it follows that t → z∈A Γ(t, z) ℓ(dz) is G-predictable for all A ∈ B(Z). The G-intensity of P with respect to the measure ℓ is a G-mark predictable process Γ such that for each bounded A ⊆ Z, the process t → P ([0, t] × A) − z∈A t 0 Γ(s, z) ds ℓ(dz) is a G-local martingale [9, Definitions 14.1.I,14.3.I].
Dual Characterizations
Recalling the Definition 4.7 of the jump characteristics {(t k (x), j k (x), v k (x))} of any proper trajectory x, we now define the notion of a dual to x.
It is easy to see that the duals of weak solutions to (3.1)-(3.2) generally exist, see Lemma C.1 for a complete proof. In what follows, # S denotes the counting measure on any countable set S. Then for any deterministic (not necessarily finite) subset U ⊆ V , the following properties are satisfied: (a) The dual P U of X U has a G-intensity on J × U with respect to the reference measure # J ×U that is given explicitly by Let X be an a.s. proper X V -valued càdlàg process and let κ be a K V -valued random element, defined on a common probability space (Ω, F, P) and such that X v (0) = ξ( κ v ) for each v ∈ V . Set G := H κ ∨ H X . If the dual of X U is a simple marked point process P U with mark space J × U and G-predictable intensity Λ U with respect to the reference measure # J ×U given by then it is possible to extend the probability space (Ω, F, P) to support a filtration G ⊇ G satisfying the usual conditions and a collection of i.i.d. G-Poisson processes N : When G is finite and the IPS is Markov, duality results analogous to Proposition 5.2 are well known (e.g. [9, Example 10.3(a)]). When G is finite and the IPS is non-Markov, both parts of Proposition 5.2 follow easily from scaling arguments (applied to the driving Poisson processes of the SDE (3.1)-(3.2)) that allow the applicatoin of standard Poisson embedding theorems such as [5,Theorems 15.3.3 and 15.3.4]. However, when U and G are infinite, the dual point processes P U and P U of X U and X U respectively may be explosive, in which case standard embedding theorems cannot be applied directly (although see [22,Section XIV.4]). In the proof of Proposition 5.2(a), this is easily resolved by noting that for all finite W ⊂ U , Λ W is the restriction of Λ U to (0, ∞) × J × W . The argument above can then be applied to Λ W . However, in the proof of Proposition 5.2(b), a subtlety arises. For each finite W ⊆ U , it is possible to construct Poisson processes N W := { N W v } v∈W such that (5.3) holds when U is replaced by W by appealing to standard Poisson embedding theorems. However, it is not clear how to piece together the processes N W for different finite W ⊆ U to construct a collection G of i.i.d. processes that are Poisson with respect to a common filtration G. Instead, to circumvent this problem, we provide an explicit construction of the different collections of i.i.d. Poisson processes, N W for finite W , with respect to a common filtration. Such explicit constructions appear to be available only for unmarked point processes [5,Theorem 15.3.4], [6,Lemma 4]. For marked point processes, partial results can be found in [9, Exercise 14.7.I and Proposition 14.7.I(b)] but without proof or with the stringent condition that the point process intensity is adapted to the natural filtration. However, in our setting, the intensity of P W is not adapted to its natural filtration. In order to provide a fully rigorous argument and be self-contained, we include a complete proof below.
Proof of Proposition 5.2: Proof of (a): Let N denote the driving Poisson processes associated with the solution X, and define G : Since G ⊆ F and {N v } v∈W are i.i.d. F-Poisson processes that are also G-adapted (by definition), it follows that they are also i.i.d. G-Poisson processes. Thus N W is also a G-Poisson process on R 2 + × J × W . Recalling our assumption that J is finite, let Q W = # J ×W /(|J ||W |) be the uniform distribution on J × W . For any A ∈ B(R 2 + ), let A ′ := t, r |J ||W | : (t, r) ∈ A and for B ⊆ J × W and v ∈ W , define which implies that N W has intensity measure Leb 2 ⊗ Q W .
Let P W be the dual of X W , and note from Definition 5.1 that P W = P U | R + ×J ×W . Then the SDE (3.1)-(3.2) and the definitions of P W and N W imply that for any A ∈ B(R + ) and B ⊆ J × W , Since this form coincides with Equation (15.45) of [5], it follows from [5,Theorem 15.3.3] that P W has G-intensity Given that P U | R + ×J ×W = P W for every finite W ⊆ U , it follows that P U has G-intensity Λ U where for every (t, j, v) ∈ R + × J × U , By condition 3 of Assumption 3.1, for each (j, v) ∈ J × U , Λ U (·, j, v) has a.s. càglàd trajectories and by condition 2 of Assumption 3.1 it is G-adapted. Thus, Λ U is G-mark predictable (in the sense defined in Section 5.1.1) and hence, Λ U is also the G-intensity of P U .
Proof of (b): Let X and P U be as stated in the proposition. The proof proceeds via three steps.
In step 1, we construct a candidate point process N on R 2 + × J × U for the Poisson embedding which we will use to construct { N v } v∈U . In step 2, we prove that, for an explicitly constructed filtration G, N is a G-Poisson point process on R 2 + × J × U . Finally, in step 3, we construct the i.i.d. G-Poisson processes { N v } v∈U and prove that (5.3) holds.
Step 1: Construct a candidate Poisson process N.
Let { N v } v∈U be a collection of i.i.d. Poisson processes independent of G ∞ with intensity measure Leb 2 ⊗ # J . Also, define the collection of i.i.d. uniform [0, 1]-random variables {R v k } k∈N,v∈U to be independent of G ∞ and { N v } v∈U . Additionally, for each v ∈ U , let {(t v k , j v k )} k∈N be the collection of events in P U (dt, dj, {v}) ordered so that {t v k } is strictly increasing. Since P U is the dual of an a.s. càdlàg proper process, P U (· × J × {v}) a.s. has finitely many events in any finite interval, and the {t v k } can be ordered to be strictly increasing with the caveat that if P U (R + × J × {v}) = K < ∞, then t v k = ∞ for all k > K. We now define N to be the following point process on R 2 + × J × U : for any collections of Borel measurable subsets {A v } v∈U of R 2 + and {B v } v∈U of J , and C : In the event that t v k = ∞ for some v ∈ U and k ∈ N, , so (5.4) is still well defined. This concludes step 1.
Next, let P R,J be the point process on Note that G satisfies the usual conditions.
Step 2: Show that N is a G-Poisson point process.
Let H : R 2 + × J × W → R + be a nonnegative, G-mark predictable random function that is leftcontinuous (with respect to its first input). Then note that (t, r, j, v) → I {r>r v j (t, X, κ)} H(t, r, j, v) is also G-mark predictable. Lastly, note that because Let P R,J W := P R,J | R + ×[0,1]×J ×W and note that P R,J W is a G-adapted non-explosive point process. Furthermore, because P R,J W (·, [0, 1], ·, ·) = P W (·, ·, ·) and {R v j } v∈U,j∈J are i.i.d. and independent of P W with density 1, P R,J W has G-intensity Λ R,J (t, ρ, j, v) := r v j (t, X, κ) with respect to the reference measure Leb ⊗ Leb| By [5,Theorem 15.1.22], since H is an arbitrary left-continuous, nonnegative G-mark predictable function, N W is a G-Poisson process on R 2 + ×J ×W with intensity measure Leb 2 ⊗# J ×W . Because we fixed W ⊆ U to be finite and arbitrary, it follows that N is a G-Poisson process on R 2 + × J × U with intensity measure Leb 2 ⊗ # J ×U .
Step 3: Show that X satisfies the SDE (5.3) driven by N.
For each v ∈ V , let N v (ds, dr, dj) := N(ds, dr, dj, {v}). To complete the proof of the proposition, note that by (5.4), for every v ∈ U and t ∈ R + , which proves (5.3). This completes the proof of the proposition.
Proof of Proposition 4.9
As alluded to earlier, the main difficulty in the proof of Proposition 4.9 is that X and X W will typically have explosive duals, and thus we cannot apply standard point process change of measure theorems directly. Instead, the proof will exploit the duality results of the previous section along with a classical change of measure result from [5], which is reproduced in Section 5.2.1 below.
A Classical Change-of-Measure Result
For convenience we rephrase here the result of [5,Theorem 15.2.7] in the specific case that the mark space is finite, also taking L(0) therein to be 1. This result relies on the notion of local characteristics of a marked point process which we now define specialized to the case (relevant to us) when the mark space of the point process is finite.
Definition 5.3. Given a finite space Z equipped with the counting measure #_Z, fix a complete, filtered probability space (Ω, G, G, η) and let N_Z be a G-adapted marked point process on R_+ × Z with G-intensity (t, z) → λ(t, z). If there exist functions t → λ_g(t) := Σ_{z∈Z} λ(t, z) and Ψ : R_+ × Z → R_+ such that for every t ∈ R_+, λ(t, z) = λ_g(t) Ψ(t, z) for all z ∈ Z and Σ_{z∈Z} Ψ(t, z) = 1, then we say that N_Z admits the (η, G)-local characteristics (λ_g, Ψ).
Proof of Proposition 4.9:
Proof of Proposition 4.9: For simplicity of notation, we fix the finite set W ⊂ V , let X := X W and µ := µ W . Let (Ω, F, F, P) be the solution space associated with X and κ (in the sense of From (4.9) it follows that L is G-adapted and a.s. càdlàg. Fix n ∈ N and finite U ⊆ V such that W ⊆ U . Let P be the dual of X and P U := P | R + ×J ×U be the dual of X U in the sense of Definition 5.1. Note that the dual P U is locally finite because the jump characteristics of X U exist and that the jump characteristics {(t k , z k ) := (t k , (j k , v k ))} k∈N are precisely the events of P U . By Proposition 5.2(a), P U has G-intensity Λ U (t, z) := r(t, z) for all (t, z) ∈ R + × J × U . Then the point process P admits the (P, G)-local characteristics ( λ U , Ψ U ) where for t ∈ R + and z ∈ J × U.
Then by (5.8), the quantity L(t) from (4.9) can be rewritten in a form suitable for the application of Theorem 5.4 as Note that since θ U (t)h U (t, z) = 1 for z ∈ J × (U \ W ), L does not depend upon the choice of U so long as W ⊆ U . We now verify that the conditions of Theorem 5.4 are satisfied for N z := P U , Ψ := Ψ U , λ g := λ U , θ := θ U and h := h U . By the definition of θ U and λ U above, (5.8) and (4.8), we have r v j (s, X, κ) ds < ∞ a.s., which verifies (5.5). Also, for any t ∈ R + , which verifies (5.6). Moreover, (5.7) follows from (5.9). Furthermore, we have already shown that on the complete probability space (Ω, G, P), P U is a locally finite point process on R + with marks in J × U that admits the (P, G)-local characteristics ( λ U , Ψ U ). Moreover, θ U is nonnegative and G-predictable while h U is nonnegative and G-mark predictable. Since the conditions of Theorem 5.4 are satisfied, it follows that L is a G-local martingale. This proves the first assertion of the proposition. Next, suppose that L is also a G-martingale. Then E[L(t)] = E[L(0)] = 1 for all t ∈ R + . Fix a probability measure P on the space (Ω, G), where G is the P-completion of G. For t ≥ 0, fix P t := P| σ(κ, X[t]) and P t := P| σ(κ, X[t]) and suppose that for each t ≥ 0, d P t /dP t = L(t) a.s.. Define µ to be the law of Law(κ, X) under P. Because L(t) does not depend on the choice of U ⊇ W , neither does P t . Fix t ∈ R + and in the spirit of (4.9), define L t = L W t : (K × D t ) V → R + by if the jump characteristics {(t k , v k , j k )} of x W exist, and 0 otherwise. Then L t (κ, X[t]) = L(t) P t -a.s.. So for any A ∈ B((K × D t ) V ), which shows that d µt d µt (κ, X[t]) = L(t) P-a.s.. In fact, because X(t) = X(t−) P t (and hence P t )-a.s., we also have To complete the proof of Proposition 4.9, it only remains to show that µ = Law(κ, X) for some weak solution X to (3.1)-(3.2). It follows from Theorem 5.4 that P U admits the ( P, G)-local Thus, under P, for (t, z) ∈ R + × J × U , P U has G-intensity for z = (j, v) ∈ J × U where the last equality above uses (5.8). However, since P U = P | R + ×J ×U , and the above display holds for all finite U ⊇ W , this implies that P has G-intensity for z = (j, v) ∈ J × V with respect to P. Note that G is a deterministic graph with a countable vertex set, and the jump rate function family r is assumed to satisfy conditions 2 and 3 of Assumption 3.1. Moreover, because X v (0) = ξ(κ v ) P-a.s. for all v ∈ V , and because P 0 ≪ P 0 , the same must hold P-a.s.. Then by (5.10), under P, the intensity of the dual P of X satisfies (5.2). Thus, Proposition 5.2(b) states that, by extending the probability space if necessary, we may assume without loss of generality that (Ω, F, P) supports a filtration G ⊇ G satisfying the usual conditions and a collection of i.i.d. G-driving Poisson processes N := { N v } v∈V such that X satisfies (3.1)-(3.2) driven by N P-a.s. for the initial data pair (κ, ξ). Thus, with respect to P, X is a weak solution to (3.1)-(3.2) for the initial data κ and µ = Law(κ, X). This concludes the proof of the proposition.
A Proof of the Conditional Independence Lemma
The goal of this section is to prove Lemma 4.5 from Section 4.2. The proof relies on several technical properties of conditional independence established in Lemmas A.2 and A.3. We start with a basic measure-theoretic result, which establishes the equivalence of conditional expectations with respect to any σ-algebra G and its completion G.
We now show that Lemma A.1 immediately implies that conditional independence relations of σ-algebras are insensitive to completions.
Proof. Suppose that G 1 ⊥ ⊥ G 2 |G 3 and let A ∈ G 1 . By [24,Lemma 1.27], there must exist a G 1measurable function f : Ω → R such that f = I {A} a.s.. Then the set A := f −1 ({1}) ∈ G 1 and the symmetric difference of A and A is null. By application of Lemma A.1 in the second and fourth equalities below, we see that The proof of the converse is similar, but in fact simpler. Suppose G 1 ⊥ ⊥ G 2 |G 3 . Then for any A ∈ G 1 ⊆ G 1 , Lemma A.1 implies Thus, G 1 ⊥ ⊥ G 2 |G 3 , which completes the proof.
The next ingredient is a list of basic properties of conditional independence, whose proofs can be found, for example, in [33]. Note that [33] includes the additional assumption that all σ-algebras considered are complete, but we can remove that assumption by repeated application of Lemma A.2.
We are now ready to prove Lemma 4.5.
Proof of Lemma 4.5. For convenience, we will freely apply the symmetry of conditional independence outlined in Lemma A.3(a) without reference. Due to Lemma A.2, property 1 of the lemma and (4.6) imply the following: 1'. H Z 2 i , i = 1, 2, 3, are mutually independent and independent of By property 1', it follows that H Z 2 G. Thus, Z A ⊥ ⊥ Z B |Z S ′ , and because A, B are arbitrary finite subsets of A ′ and B ′ respectively, it follows that Z A ′ ⊥ ⊥ Z B ′ |Z S ′ . Thus Z forms an α-SGMRF.
To show the reverse implication, now assume that Z forms an α-SGMRF and let A, B, S ⊆ V be finite, disjoint vertex sets such that S α-separates A and B. Then we claim there exists a partition A ′ , B ′ , S of V that satisfies the following two conditions: If the claim holds, then an application of Lemma A.3(d) with , along with the observation that B and N α A ′ (G) are disjoint by assumption, and two applications of Lemma A.2, shows that Z A ′ ⊥ ⊥ Z B ′ |Z S . By C1, this directly implies that Z A ⊥ ⊥ Z B |Z S as desired.
Thus, it suffices to prove the claim. To this end, let A ′ be the set of elements in V \ S that are not α-separated from A in G. Let B ′ = V \ (S ∪ A ′ ). Then B ⊆ B ′ because S α-separates A and B. Moreover, by construction, S α-separates A ′ and B ′ . Thus, N α A ′ (G) ⊆ S, and C1 follows. Moreover, A ′ , (S ∪ B) \ N α A ′ (G) and N α A ′ (G) partition V , and N α A ′ (G) ⊆ S is finite by assumption. Thus, C2 also holds because Z forms an α-SGMRF. That concludes the proof of the claim and therefore the lemma.
Given a locally finite tree G = (V, E), a Z V -random vector Z is said to form a tree-indexed Markov chain if it is an MRF such that for any finite, connected U ⊆ V , Z U forms an MRF with respect to G[U ] (see, e.g., [20,Chapter 12] or [35,Section 2]).
Lemma B.2. When G is a tree, the Z V -random vector Z forms an α-SGMRF if and only if for any connected, finite U ⊆ V , Z U forms an α-MRF with respect to the graph G[U ].
Proof. By uniqueness of paths in trees, note that for any finite, disjoint sets A, B, S ⊆ V, S α-separates A and B in G if and only if it α-separates A and B in G[U] for every finite connected U ⊆ V such that A ∪ B ∪ S ⊆ U. Then by Lemma B.1, this implies that Z forms an α-SGMRF with respect to G if and only if Z_U forms an α-MRF with respect to G[U] for all finite, connected U ⊆ V.
Clearly, all global MRFs are SGMRFs, which in turn (since G is locally finite) are also MRFs, and all three concepts coincide on finite graphs. Furthermore, the SGMRF property is strictly stronger than the MRF property (see [20,Corollary 11.33] and Lemma B.2) and strictly weaker than the global MRF property (see [21,Section 2]). Also see [20,Example 8.24] as well as part B of the bibliographical notes of [20,Section 8.2] for further discussion of MRFs and global MRFs.
C The Existence of Point Process Duals
We now justify the existence of duals of weak solutions to the SDE (3.1)-(3.2).
Lemma C.1. Suppose the family of jump rate functions r satisfies conditions 2 and 3 of Assumption 3.1. Then for any (possibly infinite) U ⊆ V and any weak solution X to (3.1)-(3.2) for some initial data pair (κ, ξ), the dual P U of X U exists a.s..
Proof. By Lemma 4.8, for each v ∈ U , the (countable sequence of) jump characteristics {(t v k , j v k , v)} of X v are a.s. well-defined. It follows that the random measure P U defined by P U ({(t, j, v)}) = 1 if and only if there exists a k such that t v k = t and j v k = j (equivalently, ∆X v (t) = j) is a.s. expressible as a sum of a countable number of delta masses and is therefore a well-defined random integervalued measure. To complete the proof, it remains to prove that P is a point process, or equivalently that P ∈ N(R + × J × U ) a.s.. Recall from Section 2 that any bounded set A ⊆ R + × J × U is a subset of [a, b] × J × W for some 0 ≤ a < b < ∞ and finite W ⊆ U . Note that P U ([a, b] × J × W ) = |Disc [a,b] (X W ) | = v∈W |Disc [a,b] (X v ) |, which is finite since X v is a D-valued random element. Therefore, a.s. for any bounded A ⊂ R + × J × U , P U (A) < ∞ which proves that P U is a.s. an element of N(R + × J × U ) and is therefore a point process.
D Verification of the General Well-Posedness Assumption
In this section we prove Lemma 4.2. Specifically, given a graph G = (V, E) that belongs to one of the classes specified in Theorem 3.7, and a jump rate function family r := {r^v_j}_{v∈V,j∈J} that satisfies Assumptions 3.1 and 3.4, we verify that Assumption 4.1 holds, that is, that the SDE (3.1)-(3.2) remains strongly well-posed when r is replaced by the modified rate function family r^W := { r^{W,v}_j }_{v∈V,j∈J} in (4.3) for any finite W ⊆ V. The well-posedness results of [18] are stated under a symmetry condition on the rates; here, we use a simple trick to show that they in fact also apply to SDEs with heterogeneous rates such as r^W. The idea is to assume without loss of generality that the vertices of G are labeled by (distinct) integers in N, that is, we treat G as a marked graph in which each vertex of G is equipped with an integer mark that is equal to its label. This ensures that each vertex has a unique mark so that the automorphism group of this new marked graph is trivial, and thus the symmetry condition from [18] automatically holds, so the result therein can be directly applied. The details are given below.
Proof of Lemma 4.2. Fix G, (κ, ξ) and the original jump rate function family r := {r v j } v∈V,j∈J satisfying the stated assumptions. We begin by considering the modified rate functions with W = ∅ so that r W,v j = r v j for all v ∈ V and j ∈ J . Assume without loss of generality that V ⊆ N and recall from [18, Section 6.1] that for any Polish space Z, G * [{1}, Z] is the space of graphs whose vertex sets are subsets of N and whose vertices are equipped with marks lying in the space Z. By [18,Lemma B.5], this space is Polish. Then there exists a measurable, injective map ψ X : K V → G * [{1}, N × K] given by ψ X (ϑ) := (G, (k, ϑ)) where for each v ∈ V , k v = v. Likewise there exists a measurable, injective map ψ D : (K × D) V → G * [{1}, N × K × D] given by ψ D (ϑ, x) = (G, (k, ϑ, x)) where k is defined similarly. We now proceed in four steps.
Step 1: Construct a regular family of local rate functions in the sense of [18, Definition 3.1].
Let H = (V H , E H , ø H ) ∈ G 1, * , where G 1, * is the space of unmarked, rooted graphs of radius one (i.e. all non-root vertices are adjacent to the root). For each v ∈ V , let H v = (G[cl v ], v) ∈ G 1, * and let I(H v , H) be the set of (rooted) isomorphisms from H v to H. Then for each j ∈ J , define the measurable function r H j : r v j (t, (ϑ ϕ(u) ) u∈clv , (x ϕ(u) ) u∈clv )I {kw=ϕ −1 (w),w∈V H } (D. 1) for (t, x, k, ϑ) ∈ R + × D V H × N V H × K V H , where r v j is the local jump rate function from condition 1 We have shown that the SDE (3.1)-(3.2) is strongly well-posed when the jump rate function family r satisfies Assumptions 3.1 and 3.4. However, note that when r satisfies these assumptions, then r W := { r W,v j } v∈V,j∈J also satisfies Assumptions 3.1 and 3.4. Therefore, (3.1)-(3.2) is strongly wellposed even when r is replaced by r W . | 2022-10-18T01:16:24.018Z | 2022-10-17T00:00:00.000 | {
"year": 2022,
"sha1": "65c5ff38a0dcdf9d4e44620aadb3b7c553ae24f0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "65c5ff38a0dcdf9d4e44620aadb3b7c553ae24f0",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
232270077 | pes2o/s2orc | v3-fos-license | Behavioural Approach to Distributed Control of Interconnected Systems
This paper formulates a framework for the analysis and distributed control of interconnected systems from the behavioural perspective. The discussions are carried out from the viewpoint of set theory and the results are completely representation-free. The core of a dynamical system can be represented as the set of all trajectories admissible through the system and interconnections are interpreted as constraints on the choice of trajectories. We develop a structure in which the interconnected behaviour can be directly built from the behaviours of the subsystems in an explicit way without any presumed forms of representations. We show that the interconnected behaviour can also be fully obtained from local observations of the subsystem. Furthermore, we develop the necessary and sufficient conditions for the existence of distributed controller behaviours and their explicit construction. Due to the entirely representation-free nature of this framework, it unites various representations and descriptions of features of dynamical systems (e.g. models, dissipativity, data, etc.) as behaviours, allowing for the formation of a unified platform for the analysis and distributed control for interconnected systems.
Introduction
The advancement of technology has made rapid collection and huge storage of data of large-scale, complex interconnected systems possible. The data sets contain rich information of the dynamics of these systems and as a result, a new paradigm of data-enhanced operations and data-driven control is emerging [1]. However, the complex dynamics caused by the incredibly convoluted interconnections among subsystems pose grand challenges to the understanding of the system dynamics. To begin with, the dynamics of each subsystem can be vastly different from when it is a stand-alone system because it is always under the constraints posed by interconnections. While the changes in dynamics can be relatively easily captured in model-based approaches with models describing the dynamics, it is less so with data-driven methods. Furthermore, the characterisation of input and output is rather difficult for the interconnecting variables because the "directions" of their flows are unclear. To illustrate this point, consider a simple double-tank system depicted in Figure 1, in which the liquid in the two tanks is maintained at relatively steady levels against possible disturbances in F_1 and F_2 through the manipulation of exit flowrates Q_1 and Q_2. The interconnecting flow Q_12, although theoretically measurable, cannot be manipulated in any way.
(Figure 1: A Double-tank System)
While it is possible to construct an approximate model for this system from first principles, it is not clear whether Q_12 should be treated as the input or the output for each tank, and the direction of the flow Q_12 depends on the liquid levels in both tanks. If the two tanks are two "black boxes" instead, the data set is the best way to describe the dynamics of the system, because any empirical model can at best describe the dynamics of a system as well as the data set does. However, data obtained for each "box" are always collected in the presence of the interconnection, and the complexity of the dynamics escalates rapidly as the number of subsystems increases. Additionally, the concept of input/output becomes more vague because the complex interactions among subsystems may change the direction of information flow at any time.
The complex dynamics due to interconnection also add considerable difficulty to the effective control of an interconnected system. With a good balance among complexity, performance and flexibility, a distributed control strategy is often a preferred choice to control such a system [2]. A typical distributed control system is depicted in Figure 2, in which a network of dynamical systems is controlled by a network of controllers to achieve global control requirements in a flexible and robust manner. Currently, most efforts are focused on data analytics and modelling from a process database, and control design is based on these empirical models. However, such models can at best describe the systems as well as the original data sets do. Furthermore, models obtained this way are inherently erroneous, and the convoluted interconnections may well magnify such errors, leading to significant deterioration of control performance. The reason for this unavoidable sacrifice is that the model is placed in the central role of defining a dynamical system, when in fact it is not. This calls for a new way of thinking: a model only summarises some of the characteristics of a dynamical system rather than defining it. We need to analyse dynamical systems from a different angle and place the trajectories admissible through the system in the central role.
Initially proposed by Willems [3], behavioural systems theory views a dynamical system as a set of functions mapped from a time axis to a signal space, or more commonly known as trajectories. This set, called the behaviour, is the centre of a dynamical system. As such, the theory views a dynamical system from a set-theoretic point of view. Analogous to the very nature of set theory that a set is defined by its elements, the trajectories that are admissible through a dynamical system define the system. The behavioural approach provides not only a fresh viewpoint fundamentally different from model-based analysis, but also an excellent way in dealing with interconnections [4]. It does not distinguish between input and output but treats them as a single set of variables for the system, and interconnection of two dynamical systems is the sharing of trajectories between the two systems. As a result, the interconnected behaviour is simply the common trajectories between the two systems. This gives a flexible and scalable representation of an interconnected system, in that an additional subsystem integrated is essentially an additional set of constraints on the existing behaviour. With this view, control can be viewed as interconnection and controllers are essentially restricting the set of behaviour that can happen in the to-be-controlled systems.
A behavioural set admits different types of represen-tations and each representation describes the set from a different perspective. Most notably, behaviours that are linear and time-invariant (LTI) are well studied and the relevant theory fits into the classical linear systems theory perfectly [5]. Furthermore, the concept of dissipative dynamical systems is also well-developed both as a property of an existing dynamical system [6][7][8][9] and as a dynamical system itself with a "dissipative behaviour" [10]. In terms of control design, controller synthesis for LTI behaviour with various representations have been discussed in [11][12][13] for stand-alone systems, and in [2,8,9] for interconnected systems. In [14], a condition to represent a finite-length LTI behaviour using persistently exciting trajectories has been given, and was modified to a more relaxed condition in [15]. Recent years have witnessed a proliferation of developments along this line in both analysis and control of LTI systems based on data [16][17][18][19][20][21][22][23][24] as well as some extensions to a certain class of nonlinear systems [25]. However, to the best of the authors' knowledge, a systematic, completely representation-free framework for the analysis and control is yet to be developed even for stand-alone systems, let alone interconnected ones. As discussed above, the observable behaviour of each subsystem is always under the influence by the complex interconnection among subsystems. The following questions then naturally arise: Are these restricted subsystem behaviours (i.e., behaviours of subsystems restricted by interconnections) sufficient for the analysis and control of interconnected systems? How to design such distributed controllers?
In this paper, we develop a systematic approach to the analysis and distributed control of interconnected systems in an entirely representation-free fashion. By the novel definition of system network as a dynamical system with its own behaviour and through the use of the projection operation, we show that it is enough to determine the complete behaviour (i.e., all possible trajectories admissible through the system) of the interconnected system from the restricted behaviours of its subsystems or from the complete behaviours of some of the subsystems and the restricted behaviours of the rest. We then give necessary and sufficient conditions for the existence of the desired controlled behaviour that is both admissible through the system network and implementable through the controller network and we subsequently construct the behaviour of the distributed controllers. Following the rationale of behavioural systems theory, the control design procedure treats all variables equally without any assumption of causality, hence it is no longer an inverse problem. Furthermore, while the design requires rather delicate description of the controlled and controller behaviours, it is philosophically simple and intuitive. It should be pointed out that if all subsystems were described by models, then the proposed framework quickly reduces to the classical approaches such as H ∞ control [12,26] and dissipativitybased control [2,9]. At another extreme, for example, if all possible trajectories were given, then a naïve realisation of the proposed framework is through brute force pat-tern matching. The proposed approach therefore builds a unified framework for distributed control design for interconnected systems whose subsystems admit a variety of, or even a mixture of, representations of their behaviours.
The rest of this paper is organised as follows: preliminary information about set operations and behavioural systems theory is introduced in Section 2, the various ways of constructing the behaviour of an interconnected system through the projection operation is illustrated in Section 3, conditions to verify the existence of a desired controlled behaviour implementable through the controller network and the synthesis of the behaviours of distributed controllers are presented in Section 4, and we conclude the paper in Section 5.
Notations. We denote the generic variable of a space W by w and its dimension by w. For an interconnected system, we denote the jth element of the variables in the ith subsystem as w^i_j and its respective space as W^i_j. The omission of the subscript means that the focus is on all variables in the ith subsystem, and the omission of the superscript means that the interest is in the internal dynamics of a subsystem. Z^+_N denotes the set of all positive integers less than or equal to N. We use the conventional ∩, ∪ and \ to denote set intersection, union and difference, respectively. The Cartesian product of two sets A and B, with their elements denoted by a and b, respectively, is defined as A × B := {(a, b) | a ∈ A, b ∈ B}.
Behavioural Systems Theory
In behavioural systems theory, a dynamical system is viewed as a triple Σ = (T, W, B) where T is the time axis, W is the signal space and B ⊂ W^T is the behaviour, which contains the set of trajectories w : T → W admissible through the system [3]. The generic variable w of this system is called the manifest variable, which contains all variables of interest such as exogenous inputs and outputs. However, input and output are not a priori distinguished from each other in w. Among the elements of a manifest variable, there is a set of variables called free variables. For a dynamical system Σ = (T, W_1 × W_2, B), w_1 is said to be free if for every w_1 ∈ W_1^T there exists a w_2 ∈ W_2^T such that (w_1, w_2) ∈ B, i.e. the set of possible trajectories for it is W_1^T [5]. Free variables include all exogenous inputs such as references and disturbances. If all variables in w_1 are free variables while none of the variables in w_2 are, then (w_1, w_2) defines an input/output partition of B. Other than the manifest variable, the system may also contain auxiliary variables called latent variables that aid the description of a dynamical system (e.g., the state variable in the classical state-space representation). In such a case, the full system is the quadruple Σ_full = (T, W, L, B_full) where B_full ⊂ (W × L)^T, and the manifest behaviour B can be obtained as B = {w | ∃ ℓ such that (w, ℓ) ∈ B_full}. As an example, in the double-tank system depicted in Figure 1, the variables F_1, F_2, Q_1, Q_2 and Q_12 are manifest variables, among which F_1 and F_2 are free variables, while the liquid levels in the two tanks can be seen as latent variables.
While a behaviour is in essence a set of trajectories, it can be represented in various ways and each description reveals the insights of a dynamical system from a different perspective. Here we give two examples of the well-known representations.
is defined similar toŵ(k). 2. Data Banks, in which B is entirely described by a set of data (e.g., trajectories of w). In this case, due to the finite number of trajectories with finite length, it only represents partial behaviour up to a certain length.
If the stored trajectories have length T , then they can partially represent the finite-length behaviour It is seen here that by letting trajectories rather than the representations be the definitions of dynamical systems, we have the freedom of choosing our perspectives in viewing the dynamical systems and thus can capture much richer characteristics of them. A note to make here is that a dynamical system does not necessarily admit a "dynamic" representation [5]. For example, a proportionalonly state feedback controller u = −Kx obtained in a standard linear-quadratic control design has a static representation, but it only reveals the static relationship among variables. The behaviour of the system is still a set of trajectories that evolves with time, hence dynamic. Another merit of this framework is its insights in understanding interconnected systems, particularly for datadriven control due to the representation-free nature of the framework. Instead of viewing interconnections as signals flowing from one system to another, they are viewed as variables sharing the same trajectories. The behaviour of the interconnecting variable is hence the set of trajectories admissible through both systems. This is one of the key concepts in this framework: all trajectories are already contained within the dynamical system and interconnection is restricting the possible choices of trajectories rather than forging new ones [4,5]. An interconnection is called a full interconnection if all variables between two variables are shared and a partial interconnection if a part of its variables are shared. All partial interconnections can be augmented into full interconnections by viewing variables from the other system that are not interconnected as free variables [4]. In this way, interconnection is simply excluding trajectories that are not admissible through any of the interconnecting systems. This view, adding to the representation-free nature of the behaviours themselves, allows for a systematic and generic approach to the analysis of complex interconnected dynamics.
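Returning to the data-bank representation in item 2 above: for controllable LTI behaviours, the results of [14, 15] show that, under a persistency-of-excitation condition, the finite-length behaviour coincides with the column span of a block Hankel matrix built from a single recorded trajectory. The short Python sketch below illustrates this construction; the function names and the least-squares membership test are illustrative choices added here, and the test is only meaningful under the excitation conditions of those references.

import numpy as np

def hankel(w_d, L):
    """Depth-L block Hankel matrix of a recorded trajectory w_d of shape (T, q).
    Returns an array of shape (q*L, T-L+1) whose columns are length-L windows of w_d."""
    T, q = w_d.shape
    cols = [w_d[k:k + L].reshape(-1) for k in range(T - L + 1)]
    return np.column_stack(cols)

def in_finite_behaviour(w, H, tol=1e-8):
    """Crude test of whether a length-L trajectory w (shape (L, q)) lies in the column
    span of H, i.e. in the finite-length behaviour represented by the data bank."""
    g, *_ = np.linalg.lstsq(H, w.reshape(-1), rcond=None)
    return float(np.linalg.norm(H @ g - w.reshape(-1))) <= tol

# Example with data from a scalar system x(k+1) = 0.9 x(k) + u(k), with w = (u, x):
# rng = np.random.default_rng(0); u = rng.normal(size=60); x = np.zeros(61)
# for k in range(60):
#     x[k + 1] = 0.9 * x[k] + u[k]
# H = hankel(np.column_stack([u, x[:60]]), L=10)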
Representation of Interconnected Systems
In this section, we propose a new structure to obtain interconnected behaviour directly from the behaviours of the subsystems. We begin by introducing the concept of abstracting the network of an interconnected system as a dynamical system, thereby making it a stand-alone object with its own trajectories instead of being a feature of the interconnected system. This abstraction is quite useful in practice. For example, in the design of a large-scale chemical process, a flexible interconnection scheme is often implemented so that the plant is able to manufacture a variety of products to suit different needs. The dynamics of the process differ dramatically from each other with different interconnections but what is actually happening is that the network itself is dynamical. With this viewpoint, we give an explicit representation of the interconnected behaviour in a general form built up from its components without assuming any form of representation of the local behaviours.
Network as a Dynamical System
As proposed by [3], interconnection of dynamical systems can be thought of as variable sharing, and henceforth one of the two variables interconnected together can be eliminated. This procedure yields a compact representation of an interconnected system and shows clearly what variables are left unconnected (i.e., the free variables). However, after this process, the membership of the interconnected variables to the subsystems becomes ambiguous: one variable is shared between two subsystems while it is in fact two distinct variables that happen to share the same value. It is only when all variables of the interconnected subsystems are shared variables that the ambiguity disappears. We therefore propose a structure in which all subsystems are isolated but sharing all of their variables to a central system which can be seen as a generalised "topology". To explain the rationale, we lead in with an example. Figure 3a, in which four dynamical systems are interconnected in a network with a switch. Based on the outcome of the rest of the plant, v 5 can connect with either v 5 or v 5 . Assuming v 5 , v 5 and v 5 are of the same dimension, the interconnected system can be represented as Σ = (T, Figure 3: Four Systems with a Switching Network
Example 1. Consider a network depicted in
While an insightful description of the interconnected behaviour, representing the behaviour in this slightly overcompacted fashion creates two obstacles towards the analysis of the system. Firstly, due to variable elimination, only one variable is used to represent variables that originally come from several different systems, making it difficult to construct the interconnected behaviour from that of its subsystems directly and explicitly. A more natural representation is that each subsystem has its own manifest variables and several of them "happen to" coincide during interconnection. Secondly, for different networks, variables shared among subsystems are different. If the above method of representation were adopted, a new representation would be needed every time the interconnection changes. What is actually changing is the network itself, and pushing the "dynamics" of the network into the subsystems makes the analysis of the interconnected behaviour more complicated. It is therefore reasonable to treat the network as a dynamical system itself with its own behaviour. With this thinking, the interconnected system in Figure 3a can be equivalently depicted as in Figure 3b, in which four (isolated) dynamical systems Σ i = (T, W i , B i ) are "plugged" into a dynamical system Σ Π = (T, W 1 ×W 2 ×W 3 ×W 4 , B Π ) that is the network. By setting w = col(w 1 , w 2 , w 3 , w 4 ), the interconnected system can be described as Σ = (T, In this way, the interconnected system can be constructed directly from its components.
Note that this description of an interconnected behaviour can be generalised to an arbitrary number of subsystems. This leads to the definition of a network as a dynamical system. Definition 1. The network of an interconnected system consisting of N subsystems, with the ith subsystem denoted as Σ i = (T, W i , B i ), is the dynamical system Σ Π = (T, W 1 × ··· × W N , B Π ), where B Π is the network behaviour.
The idea of giving the network a representation has already appeared in the literature. In [6], the network was represented by a static interconnection function, and in [2,27,28], it was represented by a static LTI system using an input/output representation. Definition 1 encompasses these cases and generalises the network to be a dynamical system on its own, which allows a more systematic description of an interconnected system with an arbitrary and possibly time-varying topology. It is, however, important to note the difference between the network behaviour defined in Definition 1 and the topology matrix/interconnecting function in the literature: the network behaviour has its own set of behaviour, whereas those in the literature are entirely defined by the interconnection inputs and outputs. To name but one distinct difference, if all subsystems were isolated, the topology matrix defined in [2], or H P in Figure 2, for example, would be 0, whereas the behaviour defined in Definition 1 would be B Π = (W 1 × ··· × W N ) T , which means that all mappings from the time axis to the signal space are admissible in Σ Π . A closer observation reveals that in this particular case, the former is in fact a representation of B Π . In this way, variables with no physical interconnection can also be interpreted as "interconnected" with the network. In the next few sections, we will show that not only does the proposed structure provide a clean and flexible representation of the interconnected system that can be constructed from its subsystems explicitly, but the network behaviour is also the key to the construction of interconnected behaviour in various ways.
The Interconnected Behaviour
As illustrated above, the proposed structure allows for the representation of all interconnections as full interconnections. A result of this is that the behaviour of a dynamical system formed by the interconnection of two subsystems, denoted by Σ = Σ 1 ∧ Σ 2 , can be directly constructed as B = B 1 ∩ B 2 . If Σ 1 and Σ 2 have two distinct signal spaces W 1 and W 2 , the behaviour of Σ can be more clearly represented by B = B 1 × B 2 . We denote the interconnected system constructed through the Cartesian product as Σ = Σ 1 Σ 2 . Note that the relationship between and ∧ is similar to that between × and ∩. It is also straightforward to construct the interconnected behaviour from the relationship among different components if all behaviours were fully known, by replacing Σ with B, ∧ with ∩ and with ×. For example, as shown in Figure 4, assuming that the plant consists of N subsystems with the ith subsystem denoted as Σ i = (T, W i , B i ), all subsystems can be written together compactly as a large system Σ sys = (T, W 1 × ··· × W N , B 1 × ··· × B N ) with a set of isolated subsystems. The final interconnection is the interconnection of Σ sys with Σ Π . Since they have exactly the same signal space and they have full interconnection, the final interconnected system can be obtained as Σ = Σ sys ∧ Σ Π , whose behaviour is B = (B 1 × ··· × B N ) ∩ B Π . (3) Note that in the last part we still adopt the variable elimination procedure by naming the manifest variable of the network Σ Π as w (the collection of all manifest variables in the subsystems) because Σ sys and Σ Π are indeed sharing all variables. In this way, the manifest variables of each subsystem are defined unambiguously because the "sharing of variables" happens inside the network Σ Π .
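To make the construction above concrete, the following minimal Python sketch (not from the paper; the trajectories, the two-step time axis and the sharing network are invented toy assumptions) builds the behaviour of an interconnected system as the Cartesian product of the subsystem behaviours intersected with a network behaviour:

from itertools import product

# Toy subsystem behaviours over a 2-step time axis with scalar signals;
# a behaviour is a set of trajectories, a trajectory is a tuple of values.
B1 = {(0, 0), (0, 1), (1, 1)}          # admissible trajectories of w1
B2 = {(0, 0), (1, 1)}                  # admissible trajectories of w2

# Network behaviour: the generalised "topology" shares the two variables,
# i.e. only pairs (w1, w2) with w1 == w2 are admissible through the network.
def in_network(w1, w2):
    return w1 == w2

# Sigma_sys: Cartesian product of the isolated subsystems; then full
# interconnection with the network = intersection with the network behaviour.
B_sys = set(product(B1, B2))
B = {(w1, w2) for (w1, w2) in B_sys if in_network(w1, w2)}
print(B)   # {((0, 0), (0, 0)), ((1, 1), (1, 1))}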
While (3) gives a representation of the interconnected behaviour B, it is not how B is defined. In other words, (3) should be interpreted as an equivalence relationship rather than a definition of B. This means that B admits other representations as well. In fact, in many cases, the complete behaviours of the subsystems B i are hardly available. For example, if only data sets were available, the data set of local measurements of a subsystem, say the ith one, is no longer B i , but rather the projection of the interconnected behaviour B onto the space of the behaviour of w i in a network configuration. In the next section, a detailed treatment of the projection operation will be carried out.
Construction of Interconnected Behaviour through Projection
In this section, we first provide relevant properties of the projection operation and then we show that it can be used to construct the exact interconnected behaviour from local observations. Since the interconnected behaviour may not be obtained directly through the complete behaviour of the subsystems, using (3) to express the behaviour of (2) may no longer be feasible. In such a case, the interconnected systems are only given in terms of their subsystems Σ i and the network Σ Π to show how they physically interconnect.
Projection is one of the key operations in relational algebra, the algebra of data sets [29]. We use this operation in this paper in the context of behaviour. Given a dynamical system Σ = (T, W, B), the projection of the behaviour B onto the space of trajectories of a component w i of the manifest variable is the map π wi defined in (4). This map allows for the extraction of the set of trajectories of any specific manifest variable from B. Obviously, if the dynamical system is one with latent variables Σ full = (T, W, L, B full ), then the manifest behaviour is given by B = π w (B full ). The projected behavioural set can be understood in two ways: from the point of view of the dynamical system Σ, π wi (B) can be interpreted as the observation of all possible trajectories of w i ; from the point of view of the manifest variable w i itself, π wi (B) can also be interpreted as a virtual "dynamical system" with manifest variable w i having full interconnection with another virtual "dynamical system" with the same manifest variable w i . In this view, all other manifest variables in Σ are treated as latent variables (see Lemma 7(i) for a mathematical representation). Since the main focus of this paper is on the latter interpretation, the definition of π wi (B) in (4) deliberately uses j instead of w j to emphasise that the choices of j may not be unique and that w j may be only one of the choices. If, however, the choices of w j are actually unique, then the system is said to be observable. In fact, we have the following definition.
Definition 2 (Observability [3]). Given a dynamical system Σ = (T, W, B) with manifest variable w partitioned as w = (w 1 , w 2 ), w 1 is said to be observable from w 2 if, whenever (w 1 , w 2 ) ∈ B and (w 1 ', w 2 ) ∈ B, it holds that w 1 = w 1 '. Using the projection operation, this definition shows that if w 1 is observable from w 2 in Σ, then for a given trajectory of w 2 ∈ π w2 (B), there exists only one trajectory of w 1 ∈ π w1 (B) such that (w 1 , w 2 ) is an admissible trajectory in Σ. In other words, the complete behaviour of w 1 can be fully determined from that of w 2 , although w 1 being observable from w 2 does not necessarily mean that each trajectory of w 2 corresponds to a distinct trajectory of w 1 . It is perfectly possible for different trajectories of w 2 to have the same corresponding trajectory of w 1 .
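As a concrete illustration of the projection map and of Definition 2, here is a minimal Python sketch on a toy finite behaviour (the trajectories and the helper names project/observable_from are illustrative assumptions, not from the paper):

# A behaviour is a set of joint trajectories; each joint trajectory is a
# tuple (w1, w2) of component trajectories.
def project(B, i):
    """pi_{w_i}(B): all trajectories that component i can exhibit within B."""
    return {w[i] for w in B}

def observable_from(B, target, obs):
    """'target' is observable from 'obs' if every trajectory of 'obs' is
    compatible with exactly one trajectory of 'target' (Definition 2)."""
    compatible = {}
    for w in B:
        compatible.setdefault(w[obs], set()).add(w[target])
    return all(len(s) == 1 for s in compatible.values())

B = {((0, 0), (0, 0)), ((1, 1), (1, 1)), ((0, 1), (1, 1))}
print(project(B, 0))                        # {(0, 0), (1, 1), (0, 1)}
print(observable_from(B, target=0, obs=1))  # False: (1, 1) and (0, 1) both fit w2 = (1, 1)
print(observable_from(B, target=1, obs=0))  # True: each w1 determines w2 uniquely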
We now present the main result of this section. It claims that the behaviour of an interconnected system can be reconstructed from the projections of the behaviour onto the behaviour space of each subsystem as well as the network behaviour. Theorem 1. Given an interconnected system (2), then (i) assuming that behaviours B i are not known but the projections on their manifest variables are known, the interconnected behaviour can be fully obtained as (ii) assuming, without loss of generality, that the first n behaviours are fully known while the rest only have information of the projections, the interconnected behaviour can be fully obtained as Proof. See Appendix A.1.
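Since the displayed formulas of Theorem 1 and Corollary 2 are referenced throughout the remainder of the paper, we record here a plausible reading of them, assuming they take the same product-and-intersect form as (3); this is our reconstruction and should be checked against the original statements:

\mathcal{B} = \Bigl(\prod_{i=1}^{N} \pi_{w_i}(\mathcal{B})\Bigr) \cap \mathcal{B}_{\Pi} \quad (5)

\mathcal{B} = \Bigl(\prod_{i=1}^{n} \mathcal{B}_i \times \prod_{i=n+1}^{N} \pi_{w_i}(\mathcal{B})\Bigr) \cap \mathcal{B}_{\Pi} \quad (6)

\mathcal{B} = \bigl(\mathcal{B}_1 \times \pi_{w_2}(\mathcal{B})\bigr) \cap \mathcal{B}_{\Pi} \quad (\text{Corollary 2: } N = 2,\ n = 1)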
The first claim of Theorem 1 is that all of the projections of the interconnected behaviour, together with the network behaviour, determine the interconnected behaviour completely. This provides an insight into an interconnected system: each subsystem within an interconnected system contains trajectories that can never occur once the interconnection is in place. Therefore, it is in fact not necessary to obtain the complete information of each subsystem. The behaviour of each subsystem as an integrated part of the interconnected system is enough to determine the complete interconnected behaviour. The most interesting part of this statement is that we still need the network behaviour to construct the interconnected behaviour even though the projections already contain the network information. This can be understood from the property of the projection operation: by projecting B onto (W i ) T , all manifest variables of other subsystems are viewed as latent variables with respect to w i . As such, there may be trajectories that are not admissible through the interconnected system but indistinguishable from w i . The network behaviour precisely eliminates this problem because any trajectory that is admissible in the interconnected system must be admissible through the network behaviour. In many cases, π w i (B) can be obtained to a high level of completeness (e.g., with a large data bank of measured trajectories) and B Π is essentially known completely, making data-driven control design possible under this framework.
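The role of the network behaviour in claim (i) can be checked on a toy example (again with invented finite behaviours; the helpers are assumptions): the product of the projections alone contains spurious trajectories, and intersecting with the network behaviour recovers the interconnected behaviour exactly.

from itertools import product

def project(B, i):
    return {w[i] for w in B}

def in_network(w1, w2):          # network behaviour: the variables are shared
    return w1 == w2

B = {((0, 0), (0, 0)), ((1, 1), (1, 1))}   # interconnected behaviour

prod_of_projections = set(product(project(B, 0), project(B, 1)))
reconstructed = {w for w in prod_of_projections if in_network(*w)}

print(prod_of_projections)   # also contains spurious pairs such as ((0, 0), (1, 1))
print(reconstructed == B)    # True: projections + network behaviour recover B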
The second claim of Theorem 1, on the other hand, is a more powerful result. It states that if an interconnected system contains several subsystems with fully known behaviours (e.g., behaviour described by models), then the complete interconnected behaviour can be fully obtained using these complete behaviours, the observations of the behaviours of the rest of the subsystems and the network behaviour. In other words, the proposed construction allows for a unified platform for a hybrid interconnected system whose subsystems can be described by deterministic representations, data sets, or both. If the interconnected system contains only two subsystems, then Theorem 1 reduces to the corollary below.
Corollary 2. Given a dynamical system of the form (2) with N = 2 subsystems, where the behaviour of the first subsystem is fully known and only the projection onto the second is available, the interconnected behaviour can be fully obtained as in Theorem 1(ii). Proof. This is a direct result from Theorem 1 by setting N = 2 and n = 1.
This result is a key stepping stone in the synthesis of controlled behaviour in the next section as the interconnected system and the distributed controllers can be seen as two large subsystems.
Distributed Control Design
This section presents the procedure of obtaining the behavioural sets for distributed controllers that, when integrated into the system, yield the desired behaviour for the variables of interest. We provide necessary and sufficient conditions for the existence of the controlled behaviour, from which the controller behaviours can also be obtained.
Control Structure and Problem Formulation
Consider an interconnected system Σ p consisting of N subsystems, where the ith subsystem is denoted as Σ i p = (T, W i p , B i p ). The subsystems are interconnected with a network Σ Π p = (T, W p , B Π p ). As a result, the interconnected, uncontrolled system is Σ p = (T, W p , B p ), where B p can be constructed according to (3), (5) or (6). Suppose that a set of N c controllers Σ j c = (T, W j c , B j c ) are employed to control Σ p and that the controllers have their own network Σ Π c = (T, W c , B Π c ). Then the interconnected controllers can be represented as an interconnected system Σ c = (T, W c , B c ). Note that the number of controllers is not necessarily the same as that of the subsystems, nor does w c have any relationship with w p at this stage. When interconnecting the system with the distributed controllers, another network is needed. This network is defined as Σ Π pc = (T, W p × W c , B Π pc ). With these building blocks, a controlled system can be constructed as in Figure 5. As depicted, the controlled system can be viewed as the interconnection of two interconnected systems, which defines a latent variable dynamical system.
Figure 5: The Interconnected System Layout
The controlled system can then be expressed as a triple with manifest behaviour B pc , and we say that B pc is implemented by the controllers through w c [11]. Note that, by defining the augmented system, controller network and system-controller network appropriately, the distributed control design is equivalent to a decentralised control design for an augmented system with the controller network and the system-controller network integrated. By treating the networks as dynamical systems, they have their own behavioural sets and can thus be treated and rearranged like physical subsystems.
Remark 1. The proposed structure is general and encompasses a range of system configurations. As an example, for the system depicted in Figure 2 with fixed topology, the behaviours of Σ Π p and Σ Π c can be described by the kernel representations B Π p = ker(Π p ) and B Π c = ker(Π c ), respectively, where Π p and Π c are two matrices of proper dimensions (see [5] for details about kernel representations). Σ Π pc , on the other hand, has manifest variable w pc = col(w p , w c ) and behaviour B Π pc = ker(Π pc ), where Π pc is defined in a similar way to Π p . Note that these matrices do not describe the selection of process variables (as H P in Figure 2 does), but rather a dynamical system with its own internal trajectories (i.e., behaviour), which can be described by the aforementioned representations. It will be shown in the coming section that these trajectories are what enables the control design. The objective of control is to find the subset of the behaviour of the uncontrolled system such that all trajectories in the subset meet certain specifications. These specifications can be formulated as a behavioural set B ps imposed on all manifest variables w p . Therefore, the control design aims to implement a subset of B p ∩ B ps through w c . On the other hand, the controllers themselves may have restrictions and objectives such as control saturation and minimum gain requirements, which can also be formulated as a set of behaviour B cr on the control variables w c . Although for illustration purposes we assumed that the system and specifications share the same signal space, it is easy to formulate such a B ps even if the requirements are specified otherwise. As depicted in Figure 6, suppose the desired requirements are described by a set B s ⊂ W T s with manifest variable w s ; then for B s to be able to restrict B p , there must exist a network behaviour B Π ps ⊂ (W p × W s ) T such that for all w p ∈ B p , there exists w s ∈ B s such that (w p , w s ) ∈ B Π ps . In this case, the set describing the requirements can be constructed as the set of all trajectories w p for which such a w s ∈ B s exists. Similarly, if the restrictions are imposed on a variable w r ∈ B r ⊂ W T r , then there must exist a network behaviour linking w c to w r , and B cr can be constructed analogously. For the clarity of presentation, we will use the notations B ps and B cr and we assume that B ps and B cr always have the same signal spaces as B p and B Π c , respectively. With these components, the control problem to be solved can be formulated as follows: Problem 1. Given an interconnected dynamical system Σ p = (T, W p , B p ) constructed according to (8), the control objective described by B ps , the controller network Σ Π c = (T, W c , B Π c ) and the system-controller network Σ Π pc = (T, W p × W c , B Π pc ), design, if possible, a distributed control system with N c controllers Σ j c = (T, W j c , B j c ) such that 1. the controlled behaviour (11) after interconnection of Σ p and Σ c (as shown in Figure 5) satisfies B pc ⊂ B p ∩ B ps (13a) and π w f (B pc ) = W T f (13b), where w f denotes the free variable after interconnection; 2. the resulting distributed controller Σ c = (T, W c , B c ) with B c described in (9) satisfies B c ⊂ B cr .
The first objective concerns the controlled behaviour. As specified in (13a), the control design should result in a manifest controlled behaviour that is a subset of the uncontrolled system behaviour whose trajectories satisfy the requirements. Furthermore, as required by (13b), the free variable w f (which normally contains exogenous inputs such as disturbances) should still be able to take any trajectory it prefers after the integration of the controller network. The second objective relates to the physical constraints of the controllers: the resulting distributed controller must meet the restrictions specified by B cr , which can include constraints such as the maximum range of the controller variables and economic costs involving them.
Controller Behaviour Synthesis
This section gives the main result of this paper: the construction of behaviours of the distributed controllers. We first explain, intuitively, the rationale of the control design. As stated in Problem 1, the given components are the subsystems Σ i p , the system network Σ Π p , the controller network Σ Π c and the system-controller network Σ Π pc . The specifications on the system and the restriction on the controllers can be constructed as two virtual "systems" Σ ps = (T, W p , B ps ) and Σ cr = (T, W c , B cr ), respectively. By doing so, the two virtual systems can be integrated into the given components, resulting in a desired objective dynamical system as shown in Figure 6. The full behaviour of this system is denoted by B d in (14). The projections of the behaviour of this dynamical system onto the spaces W T p and W T c give the largest possible set of behaviours for w p and w c , respectively, that are admissible through the network. It is easy to see that π wp (B d ) ⊂ B p ∩ B ps according to Lemma 7(i). Then, as shown in (12), since the controllers can be viewed as decentralised, the projection on the control variables of each controller w j c gives the behavioural set for the corresponding controller Σ j c . Therefore, if the controllers with the aforementioned behavioural sets are integrated into the system depicted in Figure 5, the resulting behaviour should, in some sense, resemble that shown in Figure 6.
We now rigorously formulate the above illustration with the following theorem. It shows that under certain conditions, the control design can indeed be carried out through this rationale, but both the largest possible set for the controlled behaviour and the resulting controller behaviours require much more delicate descriptions.
Theorem 3. The desired controlled behaviour (13) exists and is implementable through a distributed control system described in Problem 1 if and only if conditions (15a) and (15b) hold. In such a case, the largest possible set of controlled behaviour that can be implemented is given in (16), and all controller trajectories that implement B pc are given by (17). Proof. See Appendix A.2.
We provide a brief explanation of the construction of the various sets in Theorem 3. The detailed construction is given in the proof in Appendix A.2. Since B d in (14) gives the largest possible set of trajectories that are both within the desired set and admissible through the network, the set B pc , should it exist, must then be a subset of π wp (B d ). All corresponding trajectories of w c are thus given by π wc (B d ). However, for a given trajectory of w c ∈ π wc (B d ), there may be a set of trajectories of w p such that (w p , w c ) ∈ B full pc . We call these trajectories the multiplicities of a trajectory w p and we denote the set of all these trajectories as B m p . In general, we have that B m p \ π wp (B d ) ≠ ∅, i.e., some trajectories of w p ∈ π wp (B d ) would have multiplicities outside of the desired behavioural set. The manifest behaviours of B in and B ex , i.e., π wp (B in ) and π wp (B ex ), are all trajectories of w p ∈ π wp (B d ) with all multiplicities inside and those with multiplicities outside of π wp (B d ), respectively; π wp (B out ) contains all w p ∈ π wp (B d ) that are admissible through the networks, and π wp (B xi ) contains all w p ∈ π wp (B ex ) with multiplicities in π wp (B in ).
It is immediately obvious that (15b) is necessary: it is simply impossible to find a controlled behaviour with w f being the free variable if it is not free to start with. Furthermore, some manipulations (see the proof in Appendix A.2) will show that the right hand side of (15a) is the projection of B pc given in (16) onto the space W T f . The necessity of (15a) then follows because if (16) is the largest possible controlled behaviour set, then so it must be for all of its components. The sufficiency of it, however, requires much more delicate proof.
Remark 2. The process of checking (15a) can be elusive for model-based representations, but is much easier if the system is represented by data sets. In the ideal case (i.e., where all trajectories of the system are available), (15a) reduces to a trajectory selection/elimination problem and the control design is effectively open-loop in the classical sense. However, real data sets are always incomplete and with noise. In such a case, (15a) can be checked recursively in a receding-horizon fashion using updated measurements, similar to MPC. The design procedure can also be modified to include a probability measure. Methods to introduce probability descriptions into the framework are currently under investigation.
Remark 3. While the satisfaction of (15b) may seem trivial, its triviality actually comes from the (not necessarily true) modelling assumption that the exogenous inputs admit arbitrary trajectories. In reality, exogenous inputs may be subject to constraints, hence it is necessary to check whether the chosen free variable w f is indeed free in B p . For LTI systems represented by the column span of a Hankel matrix constructed from one of their measured trajectories [14-16, 23], this can be verified by checking whether the submatrix of the Hankel matrix concerning the chosen free variables has full row rank. For data sets, a probability measure can also be introduced similar to Remark 2.
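For the LTI check mentioned in the remark, a minimal numpy sketch (synthetic data; the trajectory, the depth L and the chosen free-variable indices are arbitrary assumptions) looks as follows:

import numpy as np

def block_hankel(w, L):
    """w: (T, q) trajectory; returns the depth-L block-Hankel matrix of size (q*L, T-L+1)."""
    T, q = w.shape
    return np.column_stack([w[j:j + L].reshape(-1) for j in range(T - L + 1)])

rng = np.random.default_rng(0)
q, L = 3, 5
w = rng.standard_normal((200, q))     # one measured trajectory with q variables
H = block_hankel(w, L)

free = [0]                            # indices of the chosen free variables
rows = [k * q + i for k in range(L) for i in free]
H_free = H[rows, :]
print(np.linalg.matrix_rank(H_free) == H_free.shape[0])   # full row rank?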
In this theorem we see again the importance of defining the network behaviour in Definition 1 (its importance is more apparent in the proof of Theorem 3, see Appendix A.2 for details): it is precisely the network behaviour that builds the correspondence between trajectories of w p and w c and allows for the construction of various interconnected behaviours from projections. A notable feature of this distributed control design process is that, unlike classical control design where control is based on the inverse of the system dynamics, all variables are treated equally and there is no prescribed causality from the to-be-controlled variables to the manipulated variables. They are simply two sets of variables whose trajectories need to be admissible through the system, and the controllers merely restrict the trajectories of the system to a subset of the desired behaviour, rather than inverting the system dynamics in any way.
Another intriguing observation is that in general the largest possible desired set π wp (B d ) cannot be fully implemented. While it is understandable that some trajectories in B p ∩ B ps are not implementable because they are not admissible through the network, the interesting result is that even π wp (B d ), whose trajectories all have corresponding trajectories of w c , cannot be fully implemented. This is because, while all trajectories in π wc (B d ) come from the desired controlled behaviour, trajectories in B p that are admissible with those in π wc (B d ) may not come from π wp (B d ). To make sure that these trajectories are not going to appear, we must eliminate the corresponding trajectories of w c from B c , which is why B j c in (17) is also a smaller set than the one depicted in Figure 6, i.e., π wc (B d ). However, by doing so, we also eliminate some trajectories within π wp (B d ) because they are indistinguishable from w c . While π wp (B xi ) may "revive" some eliminated trajectories in π wp (B d ) (see Appendix A.2), it is generally not possible to have them all back. Therefore, the largest set of B pc always satisfies (18), because all trajectories in π wp (B in ) are definitely implementable and (15) guarantees the non-emptiness of this set (see the proof of Theorem 3 in Appendix A.2). Interestingly, the largest and smallest sets in (18) are obtained when w p is observable from w c and when w c is observable from w p , respectively. This is summarised in the following corollary.
Corollary 4. Under the conditions of Theorem 3: (i) if, additionally, w p is observable from w c , then the largest implementable behaviour and the corresponding controller trajectories take a simplified form; (ii) if, additionally, w c is observable from w p , then the largest implementable behaviour is (21), which can be implemented by (17).
While the case where w c is observable from w p is less common, the case where w p is observable from w c appears much more frequently (for example, it is generally the case for decentralised control systems and stand-alone systems). Therefore, if one were to search for the controller trajectories, one should begin by checking if either special case in Corollary 4 applies, in which case the design procedure can be simplified. However, in a distributed control system, w p and w c are typically not observable from each other, hence the general conditions in Theorem 3 should be used.
Conclusion
In this paper, a framework for the analysis and distributed control design of interconnected systems from a set-theoretic point of view using behavioural systems theory has been proposed. The network of an interconnected system has been viewed as a dynamical system with its own internal dynamics, which enables the representation of the interconnected behaviour to be constructed explicitly from its components, regardless of their respective representations. Furthermore, we have shown that the interconnected behaviour can be completely constructed using the projections of the behaviours of the subsystems from the interconnected system and the behaviour of the network. We have also shown that the same effect can be achieved with any number of complete behavioural sets for some subsystems, the projections of the others and the behaviour of the network, allowing for a hybrid platform for model-based/data-driven interconnected systems. Necessary and sufficient conditions for the existence of distributed controllers have been provided and controller behaviours have been constructed explicitly. We believe that this is a more natural view of a dynamical system and is a promising direction for the development of data-driven and hybrid control methods.
Appendix A. Proofs
Before presenting the proofs, we firstly summarise the useful set operations other than the standard operations of ∩ and ∪ (commutativity, associativity, distributivity and De Morgan's laws) in the following lemma.
Lemma 5. (i) Let A be a set and let
Lemma 5(ii) is a generalised version of the distributivity of × over ∩, and setting N = 2 yields a useful identity. Furthermore, we give two auxiliary results that are useful in all subsequent proofs.
T with manifest variable w = (w 1 , w 2 ), we have the first inclusion. A similar argument can be made to show that π w1 (B 1 ∩ B 2 ) ⊂ π w1 (B 2 ). The two relationships give the result in (i).
(ii) This is a standard result (see [31], for example).
(iii) The set on the left hand side is while the one on the right hand side is We see that the latter includes the former with 21 = 22 .
Lemma 7. Given a dynamical system of the form (7), the following relationships hold: Proof. (i) This is straightforward by seeing that the projection of the entire system onto certain spaces is the same as the intersection of the behaviour(s) containing the variables and the rest of the system with the said variables regarded as "manifest" variables.
(ii) Comparing the sets of the right hand side of the expression of B with the standard construction B = B 1 × B 2 ∩ B Π , we have that The two sets are equal if and only if = w 1 , and this should hold true for all w 1 , which is true if and only if w 1 is observable from w 2 .
A.1. Proof of Theorem 1
To prove this theorem, we need another auxiliary result, which is stated in the following lemma.
Proof. (i) This is a direct generalisation from Lemma 7(i).
(ii) Using the definitions of the two sets, we see that the former is the latter with an extra condition.
Now we are ready to prove the theorem.
(i) From Lemma 7(i), we have It follows from Lemma 5(iii) that According to Lemma 5(i), it suffices to prove that B ⊂ Obviously, for all i, we have It then follows from Lemma 8(ii) that (ii) Note that according to Lemma 8(ii), Combining with the result in (i), it then follows that This completes the proof of Theorem 1.
A.2. Proof of Theorem 3
(only if): To show (15a), we begin by explaining the construction of the various sets in the theorem. In the situation where the multiplicity set B m p \ π wp (B d ) ≠ ∅, the corresponding trajectories in w c need to be excluded from the control design because all multiplicities are indistinguishable from w c . The remaining valid controller behaviour can be constructed in the following way: 1. Find all trajectories of w c projected from integrating into the system a dynamical system containing all trajectories of w p belonging to B p but not to π wp (B d ). This gives π wc (B out ); 2. All excluded trajectories of w p can be found by projecting all of the w c found in the previous step to w p through the system and intersecting with π wp (B d ). This gives the set π wp (B ex ); 3. The largest possible set of the controller behaviour B c is hence the subset of B Π c ∩ B cr containing all trajectories of w c projected from integrating π wp (B d ), excluding π wp (B ex ), into the network, which is precisely π wc (B in ).
While the above procedure gives π wc (B in ), integrating this into the network may result in a corresponding behaviour of w p that is larger than π wp (B in ), and a portion of it may end up in π wp (B ex ). These trajectories, should they exist, are also implementable, and they are given by π wp (B xi ). Therefore, if there exists B c such that (13) holds, we must have B c ⊂ π wc (B in ). Then, according to Lemma 6(ii), (15a) follows due to (13b). Furthermore, according to Corollary 2, we obtain the largest possible implementable set (16). This completes the only if part of the proof.
(if): Suppose that the conditions in (15) hold. We begin by showing that B in ≠ ∅; in other words, the desired controller behaviour is guaranteed to exist.
Now, note that π wc (B out ) ⊂ B Π c ∩ B cr ; then, according to Corollary 2 and Lemma 7(i), it follows that B pc given in (16) can be implemented by choosing B c = π wc (B in ). (A.5)
It is easy to see that B pc satisfies (13a) because Furthermore, similar to the derivation in (A.1), (15a) means that Since π w f (B p ) = W T f due to (15b) and π w f (B in ) ∪ π w f (B xi ) is equivalent to B pc in (16) according to (A.2), we achieve (13b).
Suppose that there exists a subset of π wc (B d ), call it the residual set π wc (B res ), such that π wc (B res ) = π wc (B d ) \ π wc (B in ) \ π wc (B ex ), and suppose that π wc (B res ) ≠ ∅. Since π wp (B ex ) ⊂ π wp (B d ), it follows that π wp (B in ) ∪ π wp (B ex ) = π wp (B d ) and that π wp (B in ) ∩ π wp (B ex ) = ∅. Therefore, for all w c ∈ π wc (B d ) there must exist at least one w p that belongs to either π wp (B in ) or π wp (B ex ) such that (w p , w c ) ∈ B Π pc . We first show that for w c ∈ π wc (B res ), the corresponding trajectories of w p cannot come from π wp (B in ), i.e., (π wp (B in ) × π wc (B res )) ∩ B Π pc = ∅. As a result, all corresponding trajectories of w p ∈ π wp (B d ) for w c ∈ π wc (B res ) must belong to π wp (B ex ). On the other hand, all trajectories in π wp (B ex ) have multiplicities in B p \ B ps , while π wc (B res ) ⊂ π wc (B d ) by definition. Therefore, π wc (B res ) ⊂ π wc (B d ) ∩ π wc (B out ) = π wc ((π wp (B d ) × π wc (B out )) ∩ B Π pc ) = π wc (B ex ).
But π wc (B res ) is disjoint from π wc (B ex ) by construction. This contradiction means that π wc (B res ) = ∅. Therefore, (A.7b), hence (A.6), is satisfied. This establishes the equivalence between the first two representations in (17) because π w j c (B in ) = π w j c (π wc (B in )). Furthermore, π wc (B d ) \ π wc (B ex ) gives the remaining representation in (17). Notice that (17) is the smallest controller behavioural set to achieve (A.5). B j c may contain other trajectories, but they must not be admissible through B Π c . This completes the if part, and thus the proof of Theorem 3.
(ii) Similar to the previous case, we show that B xi = ∅ for this case. (21) then follows from (A.2). According to Lemma 7(ii), since w c is observable from w p , we have The emptiness then follows from (A.3). | 2021-03-19T01:15:29.023Z | 2021-03-18T00:00:00.000 | {
"year": 2021,
"sha1": "43cc594ed7a95311c8e8f4aab8c7a57081b22988",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "89ed2ced5fe882053957e481d50eb8d52dec7a9f",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
221293276 | pes2o/s2orc | v3-fos-license | Bias-Awareness for Zero-Shot Learning the Seen and Unseen
Generalized zero-shot learning recognizes inputs from both seen and unseen classes. Yet, existing methods tend to be biased towards the classes seen during training. In this paper, we strive to mitigate this bias. We propose a bias-aware learner to map inputs to a semantic embedding space for generalized zero-shot learning. During training, the model learns to regress to real-valued class prototypes in the embedding space with temperature scaling, while a margin-based bidirectional entropy term regularizes seen and unseen probabilities. Relying on a real-valued semantic embedding space provides a versatile approach, as the model can operate on different types of semantic information for both seen and unseen classes. Experiments are carried out on four benchmarks for generalized zero-shot learning and demonstrate the benefits of the proposed bias-aware classifier, both as a stand-alone method or in combination with generated features.
Introduction
Zero-shot recognition [15,23] considers if models trained on a given set of seen classes S can extrapolate to a distinct set of unseen classes U. In generalized zero-shot learning [8,38], we also want to remember the seen classes and evaluate over the union of the two sets of classes T = S ∪ U. Nevertheless, when evaluating existing models in the generalized scenario, the seminal work of Chao et al. [8] highlights that predictions tend to be biased towards the seen classes observed during training. In this paper, we consider the challenge of mitigating this inherent bias present in classifiers by proposing a bias-aware model.
An effective remedy to remove the bias towards seen classes is to calibrate their predictions during inference. Chao et al. [8] propose to reduce the scores for the seen classes, which in return improves the generalized zero-shot learning performance. Yet, the bias towards seen classes should also be tackled while training classifiers, and not only during the evaluation phase, to address the bias from the start. Towards this goal, seen and unseen classes can be addressed separately during training. Liu et al. [17] define two separate training objectives to calibrate the confidence of seen classes and the uncertainty of unseen classes. Atzmon and Chechik [4] break the classification into two separate experts, with one model for seen classes and another one for unseen classes. Their COSMO approach provides compelling results at the expense of a third additional expert to combine results. As generalized zero-shot learning considers both seen and unseen classes simultaneously, learners should benefit from mitigating the bias in both directions by considering both sets jointly rather than separately.
The main objective of this paper is to mitigate the bias towards seen classes by considering predictions of seen and unseen classes simultaneously during training. To achieve this, we propose a simple bias-aware learner that maps inputs to a semantic embedding space where class prototypes are formed by real-valued representations. We address the bias by introducing (i) a calibration for the learner with temperature scaling, and (ii) a margin-based bidirectional entropy term to regularize seen and unseen probabilities jointly. We show that the bias towards seen classes is also dataset-dependent, and not every dataset suffers to the same extent. Finally, we illustrate the versatility of our approach. By relying on a real-valued embedding space, the model can handle different types of prototype representation for both seen and unseen classes, and operate either on real features, akin to compatibility functions, or leverage generated unseen features. Comparisons on four datasets for generalized zero-shot learning show the effectiveness of bias-awareness. All source code and setups are released 1 .
Related Work
Generalized zero-shot learning has been introduced to provide a more realistic and practical setting than zero-shot learning, as models are evaluated on both seen and unseen classes [8]. This change in evaluation has a large impact on existing compatibility functions designed for zero-shot learning, as they do not perform well in the generalized setting [7,8,38]. Indeed, whether they are based on a ranking loss [1,2,11,27,37] or synthesis [5,6,7], compatibility functions empirically exhibit a very low accuracy for unseen classes. As identified by Chao et al. [8], this indicates a strong inherent bias in all classifiers towards the seen classes. To overcome the low accuracy for unseen classes, both Kumar Verma et al. [14] and Xian et al. [39] learn a conditional generative model to generate image features. Once trained, image features of unseen classes are sampled by changing the conditioning of the generative models. Classification then consists of training a one-hot softmax classifier on both real and sampled image features. Having access during training to generated unseen features leads to an increase in unseen class accuracy. Among the different generative models used in generalized zero-shot learning are generative adversarial networks [10,16,39], variational autoencoders [14,29] or a combination of both [40]. Still, a classifier trained on generated features suffers from a bias towards seen classes because generative models do not fully match the true distribution of unseen classes. In this paper, we strive for a bias-aware classifier, which can behave as a stand-alone model like compatibility functions and also leverage unseen features sampled from a generative model.
Addressing the bias in classifiers remains an open challenge for generalized zero-shot learning. Although Chao et al. [8] identify the critical bias towards seen classes, only a few works try to address it during training. Related works separate the seen and unseen classifications. Liu et al. [17] map both features and semantic representations to a common embedding space. Probabilities are then calibrated separately in this common space to make seen class probabilities confident and reduce the uncertainty of unseen class probabilities. Atzmon and Chechik [4] train expert models separately for seen and unseen class predictions. Their predictions are further combined in a soft manner with a third expert to produce the final decision. In this paper, we strive to address the bias by considering seen and unseen class probabilities jointly rather than separately. Having access during training to the joint class probabilities lets the bias-aware model learn how to balance them from the start.
Method
During training, a generalized zero-shot learner G : X → T is given a training set D S = {(x n , y n ), y n ∈ S} N n=1 , where x n ∈ R D is an image feature of dimension D and y n comes from the set S of seen classes, with S ⊂ T . For each c ∈ S there exists a corresponding semantic class representation φ (c) ∈ R A of dimension A. At testing time, G predicts for each sample in the testing set D T = {x n } M n=1 a label that belongs to T by exploiting the joint set of seen and unseen semantic class representations. This problem formulation can be extended with an auxiliary dataset D U = {( x n , y n ), y n ∈ U } N n=1 , where y n comes from the set of unseen classes U. D U mimics image features from unseen classes, and is typically sampled from a generative model. The joint set {D S , D U } now covers both seen and unseen classes.
In this paper, we propose a bias-aware generalized zero-shot learner f (·), which can operate during training with only D S similar to compatibility functions (Section 3.1) or the joint set {D S , D U } similar to classifiers in the generative approach (Section 3.2). In both scenarios, the learner includes mechanisms to mitigate the bias towards seen classes. Learning consists of mapping inputs x to their corresponding semantic class representations φ (c). In other words, the model regresses to a real-valued vector, which describes a class prototype. We denote the set of seen class prototypes as Φ S = {φ (c), c ∈ S}, unseen class prototypes as Φ U = {φ (c), c ∈ U}, and their union as Φ T = Φ S ∪ Φ U . Usually, the semantic knowledge used for class prototypes corresponds to semantic attributes [9,15], word vectors of the class name [11,23], hierarchical representations [1,2,37], or sentence descriptions [26,39]. To exploit this diversity in semantic knowledge, we propose to swap the representation types for seen and unseen prototypes (Section 3.3).
Stand-alone classification with seen classes only
We design the bias-aware generalized zero-shot learner as a probabilistic model with two key principles. First, it is calibrated towards seen classes such that inputs from unseen classes yield a low confidence prediction at testing time. In return, this reduces the bias towards seen classes for unseen class inputs. Second, it maps inputs to class prototypes in the semantic embedding space. Following these two principles, we propose: where s(·, ·) is the cosine similarity and T ∈ R >0 is the temperature scale. When T = 1, it acts as the normal softmax function. When T > 1, probabilities spread out. When T < 1, probabilities tend to concentrate, similar to a Dirac delta function. Contrary to knowledge distillation [13], we seek to concentrate the probabilities with a low temperature scale for discriminative purposes. Learning the probabilistic model is done by minimizing the cross-entropy loss function over the training set of seen examples D S : This probabilistic model behaves like a compatibility function, because it only sees samples from seen classes during training. At testing, the evaluation simply measures the similarity in the embedding space with respect to the union of seen and unseen prototypes Φ T . Variants of this prototype-based learner have been proposed in image retrieval [18,20,35,41] or image classification [17,31,36]. We differ by (i) fixing the prototypes to be semantic class representations rather than learning them; (ii) learning a mapping from the inputs to the class representations rather than learning a common embedding space; (iii) applying a softmax function to provide a probabilistic interpretation of cosine similarities; and (iv) calibrating the model with the same temperature scaling for both training and testing.
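For reference, a plausible reading of the displayed equations (1) and (2), assuming a temperature-scaled softmax over cosine similarities to the seen prototypes and a standard cross-entropy over D^S (our reconstruction, not a verbatim quote of the paper):

p(c \mid x) = \frac{\exp\bigl(s(f(x), \phi(c))/T\bigr)}{\sum_{c' \in \mathcal{S}} \exp\bigl(s(f(x), \phi(c'))/T\bigr)}, \quad c \in \mathcal{S} \quad (1)

\mathcal{L}_{\mathcal{S}} = - \frac{1}{N} \sum_{(x_n, y_n) \in \mathcal{D}^{\mathcal{S}}} \log p(y_n \mid x_n) \quad (2)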
Classification with both seen and unseen classes
In the generative approach for generalized zero-shot learning, samples from unseen classes are generated. We can then use the generated data D U as an auxiliary dataset for calibration and for entropy regularization. In this context, given an input x the probabilistic model learns to predict a class from the union of both seen and unseen classes: The only and major difference with eq. 1 resides in the class prototypes that are considered to produce the prediction, while f (·) remains the same model. p(c|x, S) only evaluates over the set of seen class prototypes Φ S , while p(c|x, T ) evaluates over the union of seen and unseen class prototypes Φ T . In this case, the temperature scaling ensures the model is confident for both seen and unseen classes. This difference also makes the learning distinctive from related works (i.e., DCN [17] or COSMO [4]), as they consider seen and unseen classifications separately rather than jointly. Akin to eq. 2, we minimize the cross-entropy loss function on the joint set {D S , D U } of seen and unseen classes: This probabilistic model behaves like a classifier used in generative approaches, because it sees samples from both seen and unseen classes at both training and testing times, and the partition function normalizes over the union of seen and unseen sets of classes. Having a classification over the union enables regularization in both seen and unseen directions.
Bidirectional entropy regularization. Intuitively, when an image from an unseen class is fed to the classifier, probabilities for seen classes should yield a high entropy, while probabilities for unseen classes should result in a low entropy. In other words, the evaluation over seen classes of an unseen class input should be uncertain, because the image comes from a class the classifier has never encountered during training. Conversely, when an image from a seen class is fed to the classifier, the entropy of the probabilities for unseen classes should be high, while the entropy for seen classes should be low. To encourage this effect, given an image x, we compute the normalized Shannon entropy [30] of the probabilistic model p(c|x, T ) for both seen and unseen class directions: where H s and H u are the average entropy for seen and unseen classes, and | · | is the cardinality of the set. For training, we derive a margin-based regularization for both seen and unseen class directions: where [·] + = max(0, ·). R s ensures a margin of at least m between the average seen class entropy of seen inputs x n and generated unseen inputs x n . In other words, this formulation seeks to minimize H s (x n ) and maximize H s ( x n ). R u has a corresponding effect on the unseen class entropy. The final loss function for training then becomes: where λ Ent ∈ R ≥0 is a hyper-parameter to control the contribution of the bidirectional entropy.
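A minimal numpy sketch of how the directional entropies and the margin terms could be computed (our reading of eqs. 5-8; the renormalisation of the probability sub-vectors, the toy probabilities and the margin value are assumptions):

import numpy as np

def normalized_entropy(p):
    """Shannon entropy of a (sub-)probability vector, renormalised and scaled to [0, 1]."""
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum() / np.log(len(p)))

def margin_term(H_low, H_high, m=0.2):
    """[m - (H_high - H_low)]_+ : penalise unless H_high exceeds H_low by at least m."""
    return max(0.0, m - (H_high - H_low))

seen, unseen = np.arange(3), np.arange(3, 5)           # 3 seen + 2 unseen classes
p_seen_input = np.array([0.7, 0.1, 0.05, 0.1, 0.05])   # p(c|x, T) for a real seen image
p_gen_input  = np.array([0.1, 0.1, 0.1, 0.6, 0.1])     # p(c|x, T) for a generated unseen image

# Seen-direction term: seen-class entropy low for real inputs, high for generated ones.
R_s = margin_term(normalized_entropy(p_seen_input[seen]), normalized_entropy(p_gen_input[seen]))
# Unseen-direction term: unseen-class entropy low for generated inputs, high for real ones.
R_u = margin_term(normalized_entropy(p_gen_input[unseen]), normalized_entropy(p_seen_input[unseen]))
print(R_s, R_u)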
Swapping seen and unseen class representations
As presented above, relying on a real-valued embedding space allows mechanisms to mitigate the bias in two scenarios. It also enables swapping class representations for less biased ones. Consider now the case where there exist multiple types of semantic information, which differ by their type of representation and by how expensive it is to collect them. For example, attribute descriptions require expert knowledge, while sentence descriptions can be crowd-sourced to non-expert workers. Practically, sentences tend to be less biased than attributes and perform better [39], but do not offer a comprehensive expert-based explanation [26]. One could then train a model for seen classes on attributes as they rely on expert-based explanations and rely for unseen classes on sentences as they are easier to collect. This results in different representation types for seen and unseen classes. Formally, we assume that we have access to seen prototypes {Φ S A , Φ S B } with representations from domains A and B. For evaluation, we have access to unseen prototypes Φ U A of domain A, but Φ U B of domain B is absent. The objective is then to learn a mapping β from Φ S A to Φ S B , in order to regress Φ U B from Φ U A at testing time. We define the mapping as a linear least squares regression problem with Tikhonov regularization, which corresponds to: where λ β controls the amount of regularization. Relying on a linear transformation prevents overfitting, as the mapping involves a limited set of class prototypes. During evaluation, we apply β to unseen prototypes of domain A to regress their values in domain B: Φ U B = β Φ U A . Swapping representations then corresponds to regressing from one domain to another.
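A minimal numpy sketch of the swap (synthetic prototypes; the dimensions, the regularisation weight and the row-stacked convention, i.e. applying the map on the right, are assumptions):

import numpy as np

def fit_beta(Phi_A, Phi_B, lam=1.0):
    """Closed-form Tikhonov-regularised least squares: min ||Phi_A beta - Phi_B||^2 + lam ||beta||^2."""
    return np.linalg.solve(Phi_A.T @ Phi_A + lam * np.eye(Phi_A.shape[1]), Phi_A.T @ Phi_B)

rng = np.random.default_rng(0)
Phi_S_A = rng.standard_normal((150, 312))    # seen attribute prototypes (domain A)
Phi_S_B = rng.standard_normal((150, 1024))   # seen sentence prototypes (domain B)
Phi_U_A = rng.standard_normal((50, 312))     # unseen attribute prototypes

beta = fit_beta(Phi_S_A, Phi_S_B, lam=1.0)
Phi_U_B_hat = Phi_U_A @ beta                 # regressed unseen sentence prototypes
print(Phi_U_B_hat.shape)                     # (50, 1024)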
Experimental Details
Datasets. We report experiments on four datasets commonly used in generalized zero-shot learning, e.g., [7,8,26,38]. For all datasets, we rely on the train and test splits proposed by Xian et al. [38]. Caltech-UCSD-Birds 200-2011 (CUB) [34] contains 11,788 images from 200 bird species. Every species is described by a unique combination of 312 semantic attributes to characterize the color, pattern and shape of their specific parts. Moreover, every bird image comes along with 10 sentences describing the most prominent characteristics [26]. 150 species are used as seen classes during training, and 50 distinct species are left out as unseen classes during testing. SUN Attribute (SUN) [25] contains 14,340 images from 717 scene types. Every scene is also described by a unique combination of 102 semantic attributes to characterize material and surface properties. 645 scene types are used as seen classes during training, and 72 distinct scene types are left out as unseen classes during testing. Animals with Attributes (AWA) [15] contains 30,475 images from 50 animals. Every animal comes with a unique combination of 85 semantic attributes to describe their color, shape, state or function. 40 animals are used as seen classes during training, and 10 distinct animals are left out as unseen classes during testing. Oxford Flowers (FLO) [22] contains 8,189 images from 102 flower plants. Every flower plant image is described by 10 different sentences describing the shape and appearance [26]. 82 flowers are used as seen classes during training, and 20 distinct flowers are left out as unseen classes during testing.
Features extraction. For all datasets, we rely on the features extracted by Xian et al. [38]. Image features x come from ResNet101 [12] trained on ImageNet [28] and sentence representations are extracted from a 1024-dimensional CNN-RNN [26]. As established by Xian et al. [38], parameters of ResNet101 and the CNN-RNN are frozen and are not fine-tuned during the training phase. No data augmentation is performed either.
Evaluation. We evaluate experiments with calibrated stacking as proposed by Chao et al. [8], which penalizes the seen class probabilities to reduce the bias during evaluation. Following Xian et al. [38], we compute the average per-class top-1 accuracy of seen classes (denoted as s) and unseen classes (denoted as u), as well as their harmonic mean H = (2 × s × u)/(s + u). We report the 3-run average. Implementation details. In our model, f (·) corresponds to a multilayer perceptron with 2 hidden layers of size 2048 and 1024 to map the features x to the joint visual-semantic embedding space of size A. The output layer has a linear activation, while hidden layers have a ReLU activation [21] followed by a Dropout regularization (p = 0.5) [32]. We train f (·) using stochastic gradient descent with Nesterov momentum [33]. We set the following hyper-parameters for all datasets: learning rate of 0.01 with cosine annealing [19], initial momentum of 0.9, batch size of 64, temperature of 0.05, and an entropy regularization term of 0.1 with a margin of 0.2. For AWA, we reduce the learning rate to 0.0001 and increase the entropy regularization to 0.5 while keeping the same margin. When relying on sentence representations, we double the capacity of f (·) with twice the number of hidden units in each layer. We set hyper-parameters on a hold-out validation set and re-train on the joint training and validation sets. The source code uses the Pytorch framework [24].
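The evaluation protocol can be summarised by the following small numpy sketch (toy scores and labels; the calibration constant gamma is an arbitrary assumption):

import numpy as np

def per_class_accuracy(y_true, y_pred, classes):
    accs = [np.mean(y_pred[y_true == c] == c) for c in classes if np.any(y_true == c)]
    return float(np.mean(accs))

def harmonic_mean(s, u):
    return 2 * s * u / (s + u) if (s + u) > 0 else 0.0

def predict_calibrated(scores, seen_classes, gamma=0.3):
    """Calibrated stacking: subtract gamma from seen-class scores before the argmax."""
    scores = scores.copy()
    scores[:, seen_classes] -= gamma
    return scores.argmax(axis=1)

scores = np.random.default_rng(0).random((6, 4))   # 4 classes: 0-1 seen, 2-3 unseen
y_true = np.array([0, 1, 2, 2, 3, 3])
y_pred = predict_calibrated(scores, seen_classes=[0, 1])

s = per_class_accuracy(y_true, y_pred, classes=[0, 1])
u = per_class_accuracy(y_true, y_pred, classes=[2, 3])
print(s, u, harmonic_mean(s, u))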
Results
Bias variation. To verify whether the bias towards seen classes is dataset-dependent, we measure the average linkage between seen and unseen representations. Concretely, we compute the average of the pairwise cosine similarity between Φ S and Φ U . A high average linkage then refers to a high similarity between seen and unseen representations. Intuitively, a high average linkage is not desirable as unseen representations can easily be confused with seen ones, which makes the generalized zero-shot learning problem harder. Figure 1 depicts the average linkage per dataset. FLO exhibits the highest average linkage and SUN the lowest, with a 1.6 times difference. In other words, classifiers trained on FLO are highly affected by the bias towards seen classes. Figure 2 illustrates seen and unseen class samples with a very high pairwise similarity on CUB, AWA and FLO. Visually, these classes can be differentiated by their color or shape. However, their semantic representations are very similar, which creates a high bias. Now that we have established that the bias towards seen classes differs across datasets, we can address the bias within generalized zero-shot learners.
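The average linkage used here boils down to a one-liner; a minimal numpy sketch with placeholder prototypes (the random matrices stand in for the real attribute vectors):

import numpy as np

def average_linkage(Phi_S, Phi_U):
    """Mean pairwise cosine similarity between seen and unseen class prototypes."""
    S = Phi_S / np.linalg.norm(Phi_S, axis=1, keepdims=True)
    U = Phi_U / np.linalg.norm(Phi_U, axis=1, keepdims=True)
    return float((S @ U.T).mean())

rng = np.random.default_rng(0)
Phi_S = rng.random((150, 312))   # seen class prototypes
Phi_U = rng.random((50, 312))    # unseen class prototypes
print(average_linkage(Phi_S, Phi_U))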
Temperature scaling. Figure 3 varies the scale of the temperature in eq. 1. Following related metric learning works (e.g., [36,41]), we consider the temperature as a hyper-parameter. When treated as a latent parameter, the optimization diverges as its value goes down to zero to satisfy the loss function. The highest H score occurs when T = 0.05 on the validation set of all datasets. Performance starts to degrade substantially for T > 0.1. A temperature lower than 0.05 can yield even higher scores, but is usually prone to numerical errors. As such, we set T = 0.05 in all our experiments when training the model with only seen samples (eq. 2) or in combination with generated unseen samples (eq. 4). We also evaluate modifying T between training and testing phases. Setting it to 1 during training and testing, as in a normal softmax, drops H by 43.3% on AWA. Changing it to 0.05 only at test time drops the score by 25.6%. Keeping a fixed temperature value ensures f (·) maps inputs to prototypes similarly in training and testing. The temperature value should also be low to promote a more confident and discriminative model that yields narrow probabilities. Hence, the model reduces the bias by having a lower likelihood of classifying an unseen class input as part of a seen class.
Entropy regularization. Figure 4 ablates the direction of the margin-based entropy term in eq. 8. For this experiment, we rely on unseen class features generated from Cycle-CLSWGAN [10]. When using a unidirectional entropy regularization, the improvement is either very low, or even negative, over a model without any regularization. Interestingly, this negative effect does not depend on the direction, as both H s and H u are affected when considered individually. Regularizing in only one direction forces the model to compensate for the other direction. Only the bidirectional regularization provides a consistent benefit for all datasets. This positive effect indicates the importance of balancing out both seen and unseen probabilities when mitigating the bias. Regularizing in both directions jointly helps the model learn a correct bias trade-off.
Table 1: Swapping attribute (Att) and sentence (Sen) representations. While Att-Att and Sen-Sen are the usual non-swapped evaluation settings, our method can also swap them. When using sentences for unseen classes, it always improves upon attributes in swapped and non-swapped evaluations as they are less biased and more discriminative.
Swapping representations. Table 1 presents the different combinations of attribute (Att) and sentence (Sen) representations for training and evaluation. Att-Att and Sen-Sen are the common non-swapped settings. Sen-Sen forms an upper-bound as sentences provide better class representations than attributes. Indeed, sentence descriptions exhibit a lower average linkage than attribute descriptions. In a swapped setting, the unseen representations are regressed from representations in another domain based on eq. 9. A model trained on Att can be improved by 1.2 points at testing time when using Sen to regress the unseen representations. However, a model trained on Sen degrades when using Att to regress unseen representations. Indeed, Sen-Att requires mapping low-dimensional attribute representations of unseen classes to a high-dimensional space of sentence representations on which the classifier has been trained. Sen-Att then involves dimensionality expansion, which is a harder problem than dimensionality compression in Att-Sen. In the scenario where a model is trained on attributes for seen classes derived from experts, it is possible to leverage sentences for unseen classes derived from crowd-sourcing to further improve the results.
Comparison with the state of the art.
Table 2: Comparison with the state of the art, where classifiers are delimited by a horizontal rule and their combination with a generative model is in teletype font. "n/a" denotes a non-applicable setting to the method while "-" refers to non-reported results in the original paper. Compared with one-hot softmax and COSMO, our proposal is a stand-alone method that can also operate with seen class samples only. Compared with the other compatibility functions that also operate in this similar stand-alone setting, it achieves the best results (underlined). When extended with generated unseen class samples, we also improve over other classifiers (bold), leading to state-of-the-art results on the three most biased datasets out of four (see Figure 1).
First, we consider the setting where models only observe the seen class inputs during training, i.e., without using any generated features. In this setting, our bias-aware formulation outperforms existing compatibility functions [1,2,11,17,27,37] on all datasets. It is also interesting to note that recent formulations with one-hot softmax [39] or COSMO [4] cannot operate in this setting. Indeed, they rely on a discrete label space for classification while we rely on a real-valued embedding space. This enables our formulation to incorporate new unseen classes easily and at near zero cost, similar to compatibility functions. Second, our approach is easily extended with existing generative models to include an auxiliary dataset D U for unseen classes. We select f-CLSWGAN [39] and Cycle-CLSWGAN [10] as the authors provide source code to evaluate on all four datasets. Reproducing the models from their original source code yields results within a reasonable range, i.e., less than a 2-point difference in the H score. We obtain better results with Cycle-CLSWGAN [10] than f-CLSWGAN [39], which highlights the importance of the quality of the generated unseen class features. Moreover, our method profits more when generated samples better reflect the true distribution. When switching from f-CLSWGAN [39] to Cycle-CLSWGAN [10] on CUB, a one-hot softmax classifier leads to a 2.6% increase while our bias-aware classifier with a joint entropy regularization yields a 7.5% increase. We achieve state-of-the-art results on CUB, AWA and FLO. Only on the SUN dataset do the one-hot softmax [39] and COSMO [4] provide higher scores. This originates from a lower bias towards seen classes in the SUN dataset (see Figure 1), which makes a bias-aware model less beneficial. When a dataset exhibits a low bias, separating the model for seen and unseen classes is preferred for equal treatment. Conversely, when a dataset exhibits a high bias, the training of the model should consider seen and unseen classes jointly to balance out their probabilities from the start. Overall, we produce competitive results in both scenarios, especially compared with classifiers without any bias-awareness.
Conclusion
The classification of seen and unseen classes in generalized zero-shot learning requires models to be aware of the bias towards seen classes. In this paper, we present such a model which calibrates the probabilities of seen and unseen classes jointly during training, and ensures a margin between the average entropy of both seen and unseen class probabilities. Learning consists of regressing inputs to real-valued representations. Relying on a mapping to a real-valued embedding space makes it possible to swap seen and unseen representation types, and to evaluate the model in a stand-alone scenario or in combination with generated unseen features. Overall, our proposed bias-aware learner provides an effective alternative to separate classification approaches or classifiers without bias-awareness. | 2020-08-26T01:00:35.012Z | 2020-08-25T00:00:00.000 | {
"year": 2020,
"sha1": "3e53f94215998a91b1f8124914cb6cd15f2a03c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3e53f94215998a91b1f8124914cb6cd15f2a03c9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238215385 | pes2o/s2orc | v3-fos-license | Distributed Feedback Optimisation for Robotic Coordination
Feedback optimisation is an emerging technique aiming at steering a system to an optimal steady state for a given objective function. We show that it is possible to employ this control strategy in a distributed manner. Moreover, we prove asymptotic convergence to the set of optimal configurations. To this scope, we show that exponential stability is needed only for the portion of the state that affects the objective function. This is showcased by driving a swarm of agents towards a target location while maintaining a target formation. Finally, we provide a sufficient condition on the topological structure of the specified formation to guarantee convergence of the swarm in formation around the target location.
I. INTRODUCTION
Feedback optimisation is an emerging technique aimed at steering a system to a trajectory computed online that is optimal with respect to a selected objective function, relying on little information on the controlled plant. In this paper, we apply feedback optimisation on distributed systems and extend convergence results to systems where a part of the state is only asymptotically but not exponentially stable. The results are then used to drive a swarm of robots in a given target formation towards a target location in a distributed manner.
A thorough review on the feedback-based optimisation methodology can be found in [1]. Remarkably, the plant dynamics need not be known. Instead, this approach relies only on the knowledge of the steady-state input-output sensitivity, allowing model-free optimisation and constraint handling [2], [3]. Furthermore, being a feedback-based approach results in the well-known advantages of feedback control, namely robustness to model mismatch and disturbances [4], [5]. A classical example is congestion control for cyber-networks, where source controllers and link dynamics form the modelled feedback loop [6]. In recent years, gradient projection algorithms and feedback optimisation have gained traction within the field of power systems. The fact that operational constraints are satisfied at all times makes these techniques feasible for online implementation [7], [8]. A detailed overview on offline and online control techniques for electric power systems can be found in [9]. Stability of such systems is studied in [10], and a recent experimental validation obtained on power grids empirically shows the promise of this approach [11]. The applicability of the methodology to linear time-invariant systems with saddle-flow dynamics and constrained convex optimisation problems is shown in [12], and non-smooth dynamical systems arising in time-varying optimisation are addressed in [13], [14]. [15] reviews a broad class of algorithms for time-varying optimisation and shows how this can be applied to drive a single robot towards a target while avoiding collisions. Convergence and stability analysis for regulation of linear time-invariant systems towards the optimal solution of a time-varying convex optimisation problem is studied in [16]. Constraints are included in [17], [18], and [19] extends to non-linear systems and non-convex problems.
The authors are with the Automatic Control Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland. Emails: {aterpin, frickers, miperez, mbadyn, dorfler}@ethz.ch. This research is supported by the SNSF through NCCR Automation.
Much of the work on robotic coordination for flocking relies on classical approaches inspired by the so-called Reynolds principles [20] and typically employs the concept of potential forces [21]. Along the lines of closed-loop optimisation for robotics coordination, feedback optimisation is used in [22] and [23] to learn generalised Nash Equilibrium in a non-cooperative game-theoretical setting. However, both of these require centralised or semi-decentralised algorithms. To the best of our knowledge, this paper is the first work that uses feedback optimisation for robotic coordination in a distributed manner. Our theoretical analysis on the asymptotic convergence of the closed-loop system builds instead on [24], where the authors quantify the required timescale separation to ensure stability and convergence of the interconnection of an exponentially stable plant and different schemes of feedback optimisation.
Our contributions are threefold. First, we show that it is possible to implement a feedback optimisation scheme in a distributed manner using robotic coordination as running example. Second, we prove the convergence of the swarm to the configuration that minimises the selected cost function. In particular, we build on [24] to show that exponential stability is required only for the portion of the state that affects the considered cost function whereas for the remainder of the state variables, only asymptotic stability is necessary. Furthermore, we derive conditions on the objective function coefficients and topology of the formation graph. These guarantee that the optimal closed-loop steady-state is such that the agents asymptotically gather in formation around a target location.
The remainder of this paper is organised as follows. section II introduces the problem and the control scheme we want to pursue. In section III, we analyse the closed-loop system and we present the main results of our work. Finally, section IV presents empirical results and concrete instances of the considered problem.
A. Notation
Given an n-tuple (x_1, x_2, ..., x_n), x = [x_i]^T_{i∈{1,...,n}} is its associated vector and diag(x^T) is the square matrix with the components of x on its diagonal. The 2-norm is denoted ‖x‖ and, for a matrix M, ‖M‖ is the norm induced by the 2-norm ‖·‖. The spectrum of M is denoted by spec[M] and its null space is null[M].
The identity matrix of size n is I_n, whereas 1_n and 0_n denote the column vectors of ones and zeros, respectively. The Kronecker product is denoted by ⊗, and we define M_n = M ⊗ I_n and M_n = 1_n ⊗ M.
Given a function f : R^n → R, ∇f is the gradient of f and, for an index set I with y = [x_i]^T_{i∈I}, ∇_y f = [∂f/∂x_i]^T_{i∈I}. The cardinality of a set is denoted by |I|. Finally, the Jacobian of g : R^n → R^m is denoted by Jg. We adopt the definition Jg(x) = [∇g_i(x)]^T_{i∈{1,...,n}}.
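A small numerical illustration of the Kronecker shorthand and the Jacobian convention may help; the matrix, map, and sizes below are made up and not taken from the paper.

```python
import numpy as np

n = 3
M = np.array([[0.0, 1.0],
              [2.0, 3.0]])

# Kronecker shorthands used in the notation section.
M_blk = np.kron(M, np.eye(n))          # M ⊗ I_n: each entry of M becomes a scaled identity block
M_stack = np.kron(np.ones((n, 1)), M)  # 1_n ⊗ M: M stacked n times vertically

# Jacobian of a toy map g : R^2 -> R^2 via central finite differences; row i is ∇g_i(x)^T.
def g(x):
    return np.array([x[0] * x[1], x[0] ** 2 + x[1]])

def jacobian(g, x, h=1e-6):
    x = np.asarray(x, dtype=float)
    cols = [(g(x + h * e) - g(x - h * e)) / (2 * h) for e in np.eye(x.size)]
    return np.column_stack(cols)

print(M_blk.shape, M_stack.shape)   # (6, 6) (6, 2)
print(jacobian(g, [1.0, 2.0]))      # [[2., 1.], [2., 1.]]
```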
II. PROBLEM STATEMENT
The goal of this work is to show how to apply feedback optimisation in a distributed manner by driving a swarm of N agents into a given target formation around a given target location τ . Recall that for feedback optimization in general, we only need to know the steady-state input-output sensitivities. However, for the sake of a more detailed and design-oriented analysis, and to show how to relax some of the assumptions made in [24], we consider the plant model to be known.
We consider unicycle dynamics for the generic i-th agent with state x_i = [r_i^T, θ_i]^T, where r_i = [a_i, b_i]^T ∈ R^2 is the position of the i-th agent in the a−b plane and θ_i ∈ (−π, π] its orientation with respect to the a-axis. The dynamics are
ȧ_i = v_i cos θ_i,  ḃ_i = v_i sin θ_i,  θ̇_i = ω_i,  (1)
where the low-level control inputs v_i and ω_i are to be defined.
In particular, we design the actuation mechanism available on each agent to track a given fixed reference position u_i. Consider the relative displacement error ξ_i ∈ R^2 between the agent's current position r_i and the fixed reference position u_i, and the relative heading error φ_i ∈ (−π, π] denoting the angle between the agent's orientation θ_i and the straight line connecting the agent's position to the reference position u_i. The error variables then collect into the error system (2), which the low-level control law (3) is designed to stabilise. Lemma II.1. The low-level control law (3) almost globally asymptotically stabilises (2) around the origin. Moreover, ξ = 0 is a globally exponentially stable equilibrium.
Proof. The proof is provided in Appendix A.
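As a concrete illustration of the agent model and the tracking errors just introduced, the sketch below integrates the unicycle dynamics (1) with a simple proportional heading/velocity feedback. The gains and the form of this feedback are placeholders for illustration only; they are not the control law (3) analysed in the paper.

```python
import numpy as np

def unicycle_step(state, v, omega, dt=0.01):
    """One Euler step of the unicycle dynamics (1); state = (a, b, theta)."""
    a, b, theta = state
    return np.array([a + dt * v * np.cos(theta),
                     b + dt * v * np.sin(theta),
                     theta + dt * omega])

def tracking_errors(state, u_ref):
    """Displacement error xi and heading error phi w.r.t. a fixed reference u_ref."""
    a, b, theta = state
    xi = u_ref - np.array([a, b])               # vector from the agent to the reference
    phi = np.arctan2(xi[1], xi[0]) - theta      # angle to the reference minus current heading
    phi = np.arctan2(np.sin(phi), np.cos(phi))  # wrap to (-pi, pi]
    return xi, phi

# Placeholder proportional law (NOT the paper's law (3)), purely for illustration.
state, u_ref = np.array([0.0, 0.0, 0.5]), np.array([2.0, 1.0])
for _ in range(2000):
    xi, phi = tracking_errors(state, u_ref)
    v, omega = 1.0 * np.linalg.norm(xi), 4.0 * phi
    state = unicycle_step(state, v, omega)
print(state[:2])  # ends close to the reference [2, 1]
```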
Every agent has access to its own global position and the relative positions of its neighbours specified through the unweighted formation graph G = (V, E), where the set of vertices V consists of all N agents and the set of edges E captures the structure of the target formation. We assume the undirected version of G to be connected, and the target formation to be uniquely defined by the desired inter-agent relative displacements attained when the agents are in the target formation.
Recall that the orientation can be specified through the incidence matrix B (see example in section IV).
For simplicity, we assume that a robot always has access to the relative displacement to the agents it is adjacent to in the target formation specified by G, regardless of their current distance. Let d be the stacked desired inter-agent relative displacements according to the ordering of the edges given by the incidence matrix B, and let r = [r_i^T]_{i∈V}^T be the stacked positions of all the agents. The target location of the swarm, τ, is assumed to be known to all agents. Finally, we define r* = [r_i*^T]_{i∈V}^T = τ_N + δ to be the desired final configuration, where δ represents the displacements of the agents from the target location τ when being in the target formation defined by d, that is, B_2^T r* = d. Moreover, when the agents are in the desired final configuration, it holds that (1/N) Σ_{i∈V} r_i* = τ. To tackle this problem by means of feedback optimisation, we propose the cost function
Φ(r) = (γ_1/2) Σ_{(i,j)∈E} ‖(r_i − r_j) − d_{ij}‖²  (4a)  + (γ_2/2) Σ_{i∈V} ‖r_i − τ‖²,  (4b)
where γ_1, γ_2 > 0 denote the weights on the formation error and the distance from the target, respectively. We manipulate the terms of the cost function to write it in matrix form. Hence, the cost function can equivalently be expressed as
Φ(r) = (γ_1/2) ‖B_2^T r − d‖² + (γ_2/2) ‖r − τ_N‖²,  (5)
and its gradient is ∇Φ(r) = γ_1 B_2 (B_2^T r − d) + γ_2 (r − τ_N). In general, the closed-loop system for the feedback optimisation control scheme is
ẋ = f(x, u),  (6a)
y = g(x, u),  (6b)
u̇ = −ε Jh(u)^T ∇Φ(g(x)),  (6c)
where the steady-state input-output map, h, is defined as h(u) = lim_{t→∞} g(x(t), u) and its sensitivity is Jh(u). (6a) is the plant dynamics comprising the low-level controller, (6b) is the output map and (6c) is the feedback-optimisation control-law dynamics. For the problem setup outlined above, we have y = g(x) = r, h(u) = u and Jh(u) = I_2N, and (6c) reads as
u̇ = −ε (γ_1 B_2 (B_2^T r − d) + γ_2 (r − τ_N)),  (7)
while (6a) is given by the plant dynamics (1) with the low-level controller (3). The scheme of the closed-loop system is shown in Figure 1.
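A small numerical sketch of the cost (5) and its gradient, as reconstructed above, may help fix ideas. The incidence matrix, desired displacements, and gains below are made-up values for a three-agent path graph, and B_2 = B ⊗ I_2 follows the notation section; none of these numbers come from the paper.

```python
import numpy as np

# Toy example: 3 agents on a path graph 1-2-3 (illustrative data only).
B = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])              # incidence matrix, |V| = 3, |E| = 2
B2 = np.kron(B, np.eye(2))                # B_2 = B ⊗ I_2
d = np.array([1.0, 0.0, 1.0, 0.0])        # stacked desired edge displacements
tau = np.array([5.0, 5.0])
tau_N = np.kron(np.ones(3), tau)          # τ_N = 1_N ⊗ τ
g1, g2 = 1.0, 0.1                         # γ1, γ2 (assumed gains)

def cost(r):
    return 0.5 * g1 * np.sum((B2.T @ r - d) ** 2) + 0.5 * g2 * np.sum((r - tau_N) ** 2)

def grad(r):
    return g1 * B2 @ (B2.T @ r - d) + g2 * (r - tau_N)

r = np.zeros(6)
for _ in range(5000):                     # Euler discretisation of the gradient flow (7)
    r -= 0.01 * grad(r)
print(cost(r), r.reshape(3, 2))           # agents end up roughly in a line centred near τ
```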
Remark. As the focus of this paper is on distributed feedback optimisation, we consider a fixed global frame that is known to all agents for simplicity of exposition. As we shall discuss in subsection III-A, each agent only needs access to relative displacements to its neighbours and to the target. Thus, the information needed is local and any result in this paper can be established also considering different fixed local frames for all agents, which is shortly outlined subsequently. The error variables are independent of the reference frame by definition. Therefore, as long as every agent can locate itself in its own fixed local frame, its respective contribution to the cost function is the same as when using a global frame. Moreover, the input dynamics are also independent of the reference frame and the resulting trajectory of every agent can be projected from the global frame to its local frame by homogeneous transformation.
III. CLOSED LOOP ANALYSIS
We now present the main results of this paper. Namely, we first show in subsection III-A that the closed-loop control law (6) is distributed. Then, in subsection III-B, we prove that the robotic swarm asymptotically converges and optimises the cost function (4). Finally, in subsection III-C we provide bounds on the gains related to the topological structure of the target formation d to guarantee that the optimal configuration with respect to the specified cost function corresponds to the agents being in target formation around the target location τ .
A. Feedback optimisation as a distributed control law
To show that the control law in (6c) uses only local information, we derive the input dynamics for the i-th agent:
u̇_i = −ε (γ_1 Σ_{j∈N(i)} ((r_i − r_j) − d_{ij}) + γ_2 (r_i − τ)),
where d_{ij} denotes the desired relative displacement between agents i and j. It can be observed that the control law depends only on the local information y_i = [r_i^T, [r_j^T]_{j∈N(i)}]^T. Furthermore, the information actually used is relative, i.e., each agent only needs access to the relative displacement from the target (r_i − τ) and from its neighbours (r_i − r_j).
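To emphasise the distributed structure, the sketch below updates each agent using only its own position, the positions of its formation neighbours, and the target. The neighbour lists, gains, desired displacements, and the simplification that each agent tracks its reference perfectly are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup: 3 agents on a path graph, desired spacing of 1 along the x-axis.
neighbours = {0: [1], 1: [0, 2], 2: [1]}                    # N(i)
d = {(0, 1): np.array([1.0, 0.0]), (1, 2): np.array([1.0, 0.0])}
d.update({(j, i): -v for (i, j), v in d.items()})            # d_ji = -d_ij
tau = np.array([5.0, 5.0])
g1, g2, eps, dt = 1.0, 0.1, 1.0, 0.01

def local_update(i, r):
    """Gradient-flow input for agent i, using only local/relative information."""
    formation_term = sum((r[i] - r[j]) - d[(i, j)] for j in neighbours[i])
    return -eps * (g1 * formation_term + g2 * (r[i] - tau))

r = [np.zeros(2) for _ in range(3)]
for _ in range(5000):
    u_dot = [local_update(i, r) for i in range(3)]           # each agent computes this on its own
    r = [r[i] + dt * u_dot[i] for i in range(3)]             # perfect low-level tracking assumed
print(np.round(np.array(r), 2))
```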
B. Asymptotic behaviour
The key motivation of this subsection is that even if both the plant (1,3) and the control input dynamics (6c) are asymptotically stable, there is no guarantee a priori that their interconnection is. To this scope, [24, Theorem III.2] requires exponential stability of the entire state of the plant. However, in the following we show that we only require exponential stability for the portion of the state that is actually measured in the output and affects the cost, as long as asymptotic stability of the system is given.
Assumption III.3. The steady-state input-output map, h, is q-Lipschitz continuous and null[Jh(u)^T] = {0} for every u.
We now provide a simple extension to [24], stated as Lemma III.1. Proof. The proof is provided in Appendix C.
With this result, we are now ready to state our variation of [24, Theorem III.2] that allows us to prove asymptotic convergence for the system (6).
Theorem III.1 (Asymptotic convergence). Suppose that Assumption III.1, Assumption III.2 and Assumption III.3 hold, and that the objective function Φ(r) = Φ(g(x)) is differentiable with compact sub-level sets. Then, the closed-loop system (6) converges asymptotically to the set of critical points of Φ(r) whenever ε < γ/(µl). Proof. We consider a LaSalle function Ψ(x, u) composed of three terms. The second term is bounded via (Assumption III.1-ii), whereas for the first and third terms we can proceed as in the proof of [24, Lemma III.1]. Let κ(x, u) = −Jh(u)^T ∇Φ(g(x)) = u̇/ε. The first term is then bounded using the Cauchy–Schwarz inequality [26, Theorem 7.1] followed by Assumption III.2, and, recalling the definition of κ, the third term is bounded via (Assumption III.1-iii). Hence, since g(x) = r, we obtain a bound (8) on Ψ̇ which is negative definite if ([27, pp. 296]) δ = l/(µ + l) and ε < γ/(µl).
Moreover, we notice that for the left-hand side of (8) to be zero, we need the right-hand side to cancel out as well (a negative definite quadratic form). This is equivalent to having r = h(u) and κ(x, u) = 0.
Since Ψ̇ ≤ 0, we know that the sublevel sets of Ψ are invariant and, using Lemma III.1, we conclude that they are also compact. Therefore, taking P = {(x, u) ∈ R^{n+p} | Ψ(x, u) ≤ Ψ(x(t_0), u(t_0))}, we have that for any initial condition (x(t_0), u(t_0)) the trajectories (x(t), u(t)) converge to the largest invariant subset S ⊆ P for which Ψ̇ = 0.
In particular, denoting by w ∈ R^t, t < n, the portion of the state that does not affect Φ, and assuming without loss of generality x = [h(u)^T, w^T]^T, we have that every point in S corresponds to a critical point of Φ, where in the second-to-last step we use ∇Φ(u) = κ(h(u), u) and in the last step we use (Assumption III.3). Corollary III.1.1 (Asymptotic convergence). The closed-loop system (6) converges asymptotically to the set of critical points of (4) whenever ε < γ/(µl).
Proof. To prove this result, we show that the assumptions in Theorem III.1 hold. First, we notice that by Lemma II.1 the positional error dynamics are exponentially stable for a fixed u and thus, we can use the standard converse Lyapunov theorem [28,Theorem 3.12] to claim the existence of a Lyapunov function W that satisfies Assumption III.1.
For Assumption III.2 we consider the bound on the gradient of the cost (directly using Jh(u) = I_2N), so that we can set l = ‖γ_1 (L_G ⊗ I_2) + γ_2 I_2N‖.
Recalling that h(u) = u, Assumption III.3 is trivially satisfied. Finally, our cost function (4) is continuously differentiable with bounded sublevel sets, because (4b) is a positive definite quadratic form centred at τ_N and (4a) is non-negative. Since the pre-image of a closed set under a continuous map is closed (and, for any c-sublevel set, [0, c] is closed), we can conclude that Φ(r) has compact sublevel sets.
Proposition III.1. Consider M ∈ R^{n×n}. Then [29], spec[M ⊗ I_m] coincides with spec[M] (up to multiplicities) and spec[αM + βI_n] = {αµ + β : µ ∈ spec[M]} for α, β ∈ R.
Proof of Corollary III.1.2. To assess the strict convexity of (4) we investigate the second-order condition. We derive from (5) H_Φ = γ_1 (L_G ⊗ I_2) + γ_2 I_2N. Using Proposition III.1 we have that λ ∈ spec[γ_1 L_G + γ_2 I_N] if and only if ∃µ ∈ spec[L_G] s.t. λ = γ_1 µ + γ_2. Since L_G is positive semi-definite and γ_1, γ_2 > 0, every such λ is positive, hence H_Φ is positive definite and Φ is strictly convex with a unique minimiser.
The result of Corollary III.1.2 does not imply that the closed-loop system (6) has a unique equilibrium point (x*, u*). However, it does imply that the set of equilibrium points share the same locations for the agents. Namely, ∀(x_1, u_1), (x_2, u_2) ∈ S, we have u_1 = u_2 and g(x_1) = g(x_2), but in general θ_1 ≠ θ_2. This is not unexpected: different initial conditions might lead to different final orientations for the agents.
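The spectral argument above is easy to verify numerically; the sketch below builds the Laplacian of a small made-up formation graph and checks that the eigenvalues of γ1 L_G + γ2 I_N are exactly γ1 µ + γ2 for µ in spec[L_G], hence positive.

```python
import numpy as np

# Made-up formation graph on 4 agents: a cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4
L = np.zeros((N, N))
for i, j in edges:                       # graph Laplacian L_G = B B^T
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

g1, g2 = 2.0, 0.5
mu = np.linalg.eigvalsh(L)               # spectrum of L_G (all >= 0)
lam = np.linalg.eigvalsh(g1 * L + g2 * np.eye(N))

print(np.allclose(np.sort(lam), np.sort(g1 * mu + g2)))  # True: shifted spectrum
print(lam.min() > 0)                                     # True: H_Phi is positive definite
```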
C. Topological Considerations on the Optimal Configuration
In this subsection, we further investigate the relation between the asymptotic configuration of the swarm and the topology of the formation. Indeed, although Corollary III.1.2 guarantees that the robotic swarm converges to the optimal configuration with respect to (4), it is not clear a priori whether this corresponds to the desired final configuration r*. This is exemplified in Figure 4 in section IV, where choosing the cost function gains inappropriately leads to a misshapen final configuration r̄ = lim_{t→∞} r(t).
The bound in the last equation involves λ_2, which has a topological interpretation: λ_2 is known as the algebraic connectivity [25, Definition 6.7] and characterises the connectivity of the graph [25, Lemma 6.9]. Hence, a stronger connectivity of the specified target formation is expected to result in a final configuration that is approximately the desired one. Moreover, since γ_1 and γ_2 are gains in the gradient-flow (7), they influence the speed of convergence to the target formation and the target location. Therefore, a stronger connectivity of the formation graph allows for a larger γ_2, which in turn allows a faster convergence to the target location.
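The algebraic connectivity λ2 is straightforward to compute; the snippet below does so for two assumed formation graphs, illustrating the qualitative claim that a better-connected formation (larger λ2) tolerates a larger γ2. The specific admissible ratio between γ1λ2 and γ2 is not restated here, so the graphs and the comparison are illustrative only.

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def algebraic_connectivity(edges, n):
    """Second-smallest Laplacian eigenvalue (Fiedler value) of the formation graph."""
    return np.sort(np.linalg.eigvalsh(laplacian(edges, n)))[1]

n = 5
path = [(i, i + 1) for i in range(n - 1)]                 # weakly connected formation
dense = path + [(0, n - 1), (0, 2), (0, 3)]               # better-connected formation
print(algebraic_connectivity(path, n), algebraic_connectivity(dense, n))
# The second value is larger, so a larger gamma_2 can be afforded for the same gamma_1.
```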
Finally, we show that the agents gather in the target formation around the target location. Notice that we cannot have both a non-trivial target formation and all agents in the same position. Hence, the correctness of the asymptotic behaviour has to be investigated by means of bounds on the distance of the robotic swarm from the target location.
Theorem III.2 (Correct final configuration). Consider the same settings of Lemma III.2, and let r̄ be the configuration the agents converge to. Then ‖r̄ − τ_N‖ < ‖δ‖, with B_2^T r̄ ≈ d.
IV. EXAMPLES AND EMPIRICAL RESULTS
In this section, we provide simulation examples to underline the transient behaviour under the provided cost function (4), and we show what can go wrong when the assumptions of Theorem III.2 do not hold. (The MATLAB code for the simulations is available at github.com/antonioterpin/feedback-optimization-swarm-robotics.) In Figure 2 a simulation for a swarm of 5 agents is shown. The pentagon formation is specified by an incidence matrix B and a desired relative inter-agent displacement vector d (reshaped into a 2 × N matrix for readability); their numerical values are omitted here. The agents start from random positions in a neighbourhood around the origin, they assemble in the target formation and move together towards the specified target location. In Figure 2, one can appreciate the time evolution visualised by the change in colour. It is apparent that the specified cost function guarantees not only that the agents assemble in the target formation before reaching the target location, but that they do so early on. Hence, the desired transient behaviour is empirically obtained. Characterising it quantitatively is still an open question and a topic of ongoing research. Next, we consider an E-shaped formation, which is clearly not circularly symmetric around its centroid and thus, the cost term (4b) is expected to shrink and distort the formation for poorly chosen gains. In Figure 3 and Figure 4 the simulations of the robotic swarm with different values for the gains γ_1 and γ_2 are shown. It is apparent that when the conditions of Theorem III.2 are not fulfilled, the final configuration of the agents is not the desired one (Figure 4). On the other hand, we empirically notice that a factor of ∼30 between γ_1 λ_2 and γ_2 is enough to obtain the desired result in Figure 3. Moreover, the intuition on the speed of the flocking is empirically verified. Indeed, for γ_1 λ_2 ≫ γ_2 (Figure 3), the agents converge faster to the target formation and then relatively slowly towards the target location, whereas when γ_1 λ_2 is not ≫ γ_2 (Figure 4) we have a fast flocking towards the target location, but the agents do not gather in the target formation. (Figure 4 caption: in contrast with Figure 3, the conditions of Theorem III.2 do not hold and thus, the final configuration is not the desired one. That is, the cost function does not capture the desired goal of the task. The time evolution is colour-coded from red (beginning) to blue (end).)
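Since the numerical incidence matrix and displacement vector for the pentagon are not reproduced above, the sketch below builds a plausible 5-cycle formation (a regular pentagon of unit circumradius) purely for illustration; the actual topology orientation and values used in the paper may differ.

```python
import numpy as np

N = 5
edges = [(i, (i + 1) % N) for i in range(N)]       # 5-cycle: an assumed pentagon topology

# Incidence matrix B (|V| x |E|): +1 at the tail, -1 at the head of each edge.
B = np.zeros((N, len(edges)))
for k, (i, j) in enumerate(edges):
    B[i, k], B[j, k] = 1.0, -1.0

# Vertices of a regular pentagon with unit circumradius (illustrative target shape).
angles = 2 * np.pi * np.arange(N) / N
verts = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Desired inter-agent displacements d, stacked edge-wise: d_k = r_tail - r_head.
d = np.concatenate([verts[i] - verts[j] for i, j in edges])
print(B)
print(d.reshape(-1, 2).T)   # reshaped 2 x |E| for readability, mirroring the paper's convention
```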
V. CONCLUSIONS AND FUTURE WORK
In this article, we applied feedback optimisation to drive a swarm of agents in formation towards a target location in a distributed manner. The correctness of the algorithm was rigorously proved by means of sufficient conditions on the topology of the specified formation and on the gains of the gradient-flow.
To conclude, we want to outline some possible research directions. First, being able to prove the optimality of the final configuration in the presence of non-convexities due to obstacle avoidance terms or visibility constraints would represent a natural extension to the work presented in this paper. Additionally, another interesting avenue of research would be to quantify to what extent feedback optimisation implicitly deals with noisy dynamics and input saturation. Finally, quantitatively characterising the transient behaviour of the closed-loop system is still a major question to address in the context of feedback optimisation. | 2021-09-30T01:16:25.752Z | 2021-09-29T00:00:00.000 | {
"year": 2021,
"sha1": "6d21af9a5c38256f6f5b6eede2598494ce8c8a00",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6d21af9a5c38256f6f5b6eede2598494ce8c8a00",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Engineering"
]
} |
258171133 | pes2o/s2orc | v3-fos-license | The natural history of childhood-onset nonallergic rhinitis; a long-term follow-up study
Background: Non-allergic rhinitis (NAR) is characterized by symptoms of nasal inflammation without allergic sensitization. The long-term outcome of NAR in children is poorly defined. Objective: To determine the natural history of childhood-onset NAR and the development of allergic rhinitis (AR) in these children. Methods: NAR patients who were followed for more than 10 years were evaluated at 3-5 years (E2) and 9-12 years (E3) after the first evaluation (E1). Nasal symptoms, disease severity, comorbidities, medication used, and aeroallergen sensitization were assessed. Results: Eighty-two NAR patients (58.5% male) completed all 3 evaluations. The age at onset was 2.0 (range 2.0-4.0) years. The follow-up period was 13.6 (range 12.3-14.3) years. At E2, 37.8% of patients developed AR. At E3, the patients were classified into four groups based on the results of skin prick tests at E2 and E3 (group I: NAR→NAR→NAR, 39.0%; group II: NAR→NAR→AR, 23.2%; group III: NAR→AR→NAR, 12.2%; and group IV: NAR→AR→AR, 25.6%). The most common aeroallergen sensitization was house dust mite. A family history of atopy, asthma, and allergic rhinitis was more frequent in groups III and IV than in the other groups (p < 0.05). Atopic dermatitis, obstructive sleep apnea, and adenotonsillar hypertrophy at E1 and E2 were predominantly found in group IV (p < 0.05). At E2, group III and IV patients had a higher proportion of exposure to house dust, animal dander, and smoking compared to the other groups (p < 0.05). The overall remission rate was 14.6%. Conclusion: Children with NAR should be reevaluated periodically to determine aeroallergen sensitization for appropriate diagnosis and management.
Introduction
Rhinitis is characterized by nasal symptoms such as rhinorrhea, nasal obstruction, sneezing, and nasal itching.1-3 These symptoms occur on two or more consecutive days for more than one hour on most days.4,5 Based on the duration of nasal symptoms, rhinitis can be divided into acute rhinitis (duration less than 12 weeks) and chronic rhinitis (presenting at least 1 hour/day and at least 12 weeks/year).6,7 Chronic rhinitis can be classified as allergic rhinitis (AR) and nonallergic rhinitis (NAR).4,5 The diagnosis of AR is based on the presence of chronic rhinitis symptoms and evidence of IgE sensitization by skin prick tests (SPT) or specific IgE (sIgE) to an aeroallergen.4,8 NAR is defined as chronic rhinitis with at least two nasal symptoms such as nasal obstruction, rhinorrhea, sneezing, and/or nasal itching, without clinical evidence of endonasal infection and without aeroallergen sensitization.4-6,8,9 The triggering factors for NAR include changes in temperature or weather, tobacco smoke, exhaust fumes, and irritants such as strong odors.10 In adults, AR is more common and affects 20-30% of the population, while the prevalence of NAR is estimated to be 10-15%.4,11 NAR generally presents predominantly with adult onset and a female:male ratio of 2-3:1.12 In contrast, the prevalence of NAR in children is not well established. In a Swedish birth cohort study, the prevalence of NAR was 8.1% and 6.3% at 4 years and 8 years, respectively.13 A changing pattern of aeroallergen sensitization upon follow-up can be found in patients with NAR. Rondón et al. reported that 24% of adult patients with NAR developed aeroallergen sensitization within 3-7 years of follow-up.18 In pediatrics, 5.6-40% of NAR children developed aeroallergen sensitization after 3 to 5 years of follow-up.13,19 However, long-term follow-up of NAR in children has not been well studied. Therefore, our objective was to determine the extended natural history of NAR and the continuous development of AR in a pediatric population.
Skin prick test
Skin prick testing was performed to detect sensitization to the most prevalent aeroallergens, including house dust mites (Dermatophagoides pteronyssinus, Dp, and Dermatophagoides farinae, Df), American and German cockroaches, cat and dog dander, Acacia, Careless weed, grass pollens (Bermuda and Johnson), and molds (Alternaria spp., Cladosporium spp., Penicillium spp., Aspergillus spp., and Curvularia spp.). Commercial allergens from ALK-Abello, Port Washington, NY, were used. Histamine (10 mg/mL) and glycerine were used as positive and negative controls, respectively. The SPT was considered positive if the mean wheal diameter was at least 3 mm larger than the negative control for at least one aeroallergen. Patients were asked to discontinue antihistamines for at least seven days prior to skin testing.
Total nasal symptom score (TNSS) and medication score
At the third evaluation, the total nasal symptom score (TNSS) over the past four weeks was assessed as the sum of four individual symptom scores for rhinorrhea, nasal congestion, nasal itching, and sneezing, each on a scale of 0 = no symptom, 1 = mild, 2 = moderate, or 3 = severe symptom based on the disturbance of daily activities (possible score of 0-12).24 The daily medications were quantified as the medication score. The scores for the different medications were designated as follows: 0 = no medication, 1 = patient took an oral or ocular antihistamine, 2 = patient took an intranasal or inhaled corticosteroid, leukotriene receptor antagonist (LTRA) or decongestant, and 3 = patient took an oral corticosteroid.25,26 The TNSS and medication score were combined into a total combination score.27
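A small sketch of how these scores combine arithmetically is given below; the individual symptom and medication values are invented for illustration, and the combination score is assumed (as is common) to be the simple sum of the two components.

```python
# Total nasal symptom score (TNSS): sum of four symptoms, each rated 0-3.
symptoms = {"rhinorrhea": 2, "congestion": 1, "itching": 0, "sneezing": 1}
tnss = sum(symptoms.values())                  # possible range 0-12

# Medication score: 0 = none, 1 = oral/ocular antihistamine,
# 2 = intranasal/inhaled corticosteroid, LTRA or decongestant, 3 = oral corticosteroid.
medication_score = 2

combination_score = tnss + medication_score    # total combination score (assumed sum)
print(tnss, medication_score, combination_score)
```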
Study design and subjects
This study was conducted at the Department of Pediatrics, Faculty of Medicine Siriraj Hospital, Mahidol University, Thailand. It was approved by the Institutional Review Board, Siriraj Hospital (approval no. 333/2562, COA no. Si 239/2019). Informed consent was obtained prior to the study. We recruited patients who were diagnosed with NAR and had been followed by pediatric allergists in the pediatric allergy clinic at Siriraj Hospital for more than 10 years. Demographic data were obtained from the medical records and interviews. Patients were invited for re-evaluation visits. The second and third evaluations were completed at 3-5 years and 9-12 years after the first evaluation. Current symptoms, comorbidities, and medications for rhinitis were obtained at the third evaluation. SPT to the same panel of aeroallergens as was performed in the first evaluation was repeated at the second and third evaluations. Patients who did not complete the second and third evaluations were excluded.
AR was clinically defined by chronic rhinitis symptoms that included rhinorrhea, nasal obstruction, nasal itching, and sneezing after exposure to allergens, with positive SPT to aeroallergens.4,5,8,9,20,21 The severity and persistence of rhinitis symptoms were classified as mild, moderate or severe, and intermittent or persistent according to the 2019 Allergic Rhinitis and Its Impact on Asthma (ARIA) guidelines.22 Mild rhinitis was defined as symptoms of rhinitis that did not disrupt activities of daily life, including sleep, while moderate to severe rhinitis affected these activities. Intermittent rhinitis was defined as rhinitis symptoms on fewer than four days a week or for fewer than four consecutive weeks, while persistent rhinitis was defined as rhinitis symptoms lasting more than four days a week and more than four consecutive weeks.5,22 Remission of rhinitis was defined as the absence of rhinitis symptoms without using any medication to control symptoms for at least one year.23 Comorbidities including asthma, adenotonsillar hypertrophy, obstructive sleep apnea (OSA), chronic rhinosinusitis, eye symptoms, atopic dermatitis, and food allergy were collected. Environmental factors (cigarette smoke exposure and pets in the house) and aggravating factors (exposure to house dust, animal dander, irritants, pollen, temperature, and seasonal changes) were obtained.
Statistical analysis
The demographic data, comorbidities, and triggering factors were analyzed using descriptive analysis (frequencies, percentages, median, and range).The chi-squared test was used to compare data between persistent NAR and patients who developed AR.Quantitative data (age of onset, medication scores, TNSS and combination score) were analyzed using the Mann-Whitney U test.The significance level was set at the p-value ≤ 0.05 or when the 95% confidence intervals (CI) of the odds ratio did not contain the value of 1.
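As a concrete, made-up illustration of the comparisons described here, the snippet below runs a chi-squared test on a small contingency table and a Mann-Whitney U test on two samples; the numbers are placeholders and are not data from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 2 x 2 table: a categorical characteristic (rows) vs. development of AR (columns).
table = np.array([[20, 28],
                  [12, 22]])
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Hypothetical ages of onset (years) in two groups, compared non-parametrically.
group_a = [2.0, 2.5, 3.0, 2.0, 4.0, 2.0]
group_b = [2.0, 3.5, 4.0, 3.0, 2.5, 4.0]
u_stat, p_u = mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"chi-square p = {p_chi2:.3f}, Mann-Whitney p = {p_u:.3f}")
```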
The patients were classified into four groups according to the result of SPT to aeroallergens at each evaluation.Group I (n = 32, 39.0%) were patients diagnosed with NAR in the second and third evaluations.Group II (n = 19, 23.2%) were patients diagnosed with NAR in the second evaluation but developed AR in the third evaluation.Group III (n = 10, 12.2%) were patients diagnosed with AR in the second evaluation but turned to NAR in the third evaluation.Group IV (n = 21, 25.6%) were patients diagnosed with AR in the second and third evaluations (Figure 1).
Demographic data and group allocation
Eighty-two of the 175 NAR patients who had completed three evaluation visits were recruited in this study (Figure 1). At the third evaluation, 52 patients in the NAR group and 41 patients in the AR group were lost to follow-up. However, there was no significant difference in sex, age of onset, family history of atopy, or baseline severity of rhinitis symptoms between the followed and non-followed NAR and AR patients at the second evaluation (data not shown).
The demographic data are shown in Table 1. Forty-eight patients (58.5%) were males. The median follow-up period was 13.7 (range 12.3-14.3) years. The median age of onset of chronic rhinitis was 2.0 (range 2.0-4.0) years and the mean ± SD age at the third evaluation was 18.7 ± 3.3 years. Among the four groups, group II had a lower mean age at the third evaluation than the other groups (p = 0.02). A family history of atopy, asthma, and AR was more frequent in groups III and IV than in groups I and II (p < 0.01, p = 0.02, and p < 0.01, respectively).
Comorbidities
The comorbidities at all evaluations are shown in Table 1. At the first and second evaluations, atopic dermatitis, OSA, and adenotonsillar hypertrophy were predominantly found in group IV (p ≤ 0.05). Food allergy was predominantly found in groups III and IV at both evaluations (p ≤ 0.05). At the third evaluation, none of the patients reported atopic dermatitis, OSA, or adenotonsillar hypertrophy, and there was no significant difference in comorbidities among the groups.
Environmental and triggering factors
At the first evaluation, the most common triggering factor in all groups was the change in temperature. None of the patients were triggered by pollen. Patients in group III had a significantly lower proportion of change in temperature as a triggering factor (p = 0.01). Patients in groups III and IV had a higher proportion of smoking in the house compared to patients in groups I and II (p = 0.02) (Table 2).
At the second evaluation, the change in temperature was also the most common triggering factor in all groups. However, patients in groups III and IV had a significantly lower proportion of change in temperature as a triggering factor compared to patients in groups I and II (p < 0.01). House dust and animal dander exposure were dominant triggering factors in group III and IV patients when compared to group I and II patients (p = 0.02 and p = 0.02, respectively).
The rhinitis score, severity and persistence of rhinitis symptoms and remission at the third evaluation
Comparisons of rhinitis symptoms (nasal congestion, nasal itching, sneezing, and rhinorrhea) at each evaluation are shown in Table 3. All patients had rhinorrhea at the first evaluation. At the second evaluation, nasal itching was predominantly found in groups III and IV (p < 0.01) and sneezing predominantly in group IV (p = 0.04) compared to the other groups. On the other hand, there was no difference in any rhinitis symptom among the 4 groups at the first and third evaluations.
The medication used for chronic rhinitis was not significantly different among the 4 groups. None of the patients used intranasal antihistamines, systemic corticosteroids, or long-term antibiotics.
Most of the patients on intranasal corticosteroid (INS) treatment took INS regularly (group I: 78.6%, group II: 83.3%, group III: 75%, group IV: 90.9%; p = 0.15). On the other hand, patients on antihistamine treatment took this medication according to their allergic status (group I: 24.4%, group II: 71.4%, group III: 33.3%, and group IV: 72.7%), but the frequency was not significantly different among the 4 groups (p = 0.07).
At the third evaluation, the overall remission rate was 14.6%. The remission rate was 6.2% in group I, 10.5% in group II, 30% in group III, and 23.8% in group IV. The trend of the severity and persistence of rhinitis symptoms at each evaluation is shown in Figure 2B. Patients with no symptoms were found at the third evaluation but not at the first or second evaluations.
At the third evaluation, groups III and IV had a higher proportion of patients without symptoms compared to groups I and II (p = 0.05). Patients in group III had either no or mild intermittent rhinitis symptoms. Groups II and IV had a higher proportion of moderate to severe intermittent symptoms compared to groups I and III (p = 0.05).
Discussion
NAR is a common condition that affects more adults than children.15 Seventy percent of patients with NAR are diagnosed at more than 20 years of age.21 In adults, NAR accounts for 17-52% of chronic rhinitis cases, occurs more frequently in females than males (58% vs. 42%, respectively), and the symptoms of rhinitis are more likely to be perennial than seasonal.20,21 In children, AR was three times more common than NAR and found more often in males than females.16,28 Patients with NAR can develop AR upon follow-up. In adults, Rondón et al. reported that 24% of NAR patients developed sensitization to new aeroallergens and were diagnosed with AR after 3-7 years of follow-up.18 In children, Lee SH et al. followed seven-year-old children with NAR for two years and found that 26% developed AR.29 However, a Swedish birth cohort analyzed sensitization data and found that only 5.6% of children with NAR at age four had developed AR four years later.13 Our previous study in Thailand found that 41% of children with NAR developed sensitization to aeroallergens and were diagnosed with AR after 3-5 years.19 The current study followed children with NAR for more than 10 years in a pediatric allergy clinic at a tertiary hospital. Thirty-nine percent of the NAR patients (group I) were still not sensitized to aeroallergens during the second and third follow-up evaluations, while 23% (group II) developed aeroallergen sensitization later at the third evaluation. Interestingly, 12% of NAR patients (group III) were sensitized to aeroallergens at the second evaluation but became non-sensitized at the third evaluation. Twenty-six percent of the NAR patients (group IV) developed aeroallergen sensitization at the second evaluation and remained sensitized at the third evaluation (Figure 1). Figure 3 demonstrates the significant factors which influence the natural history of NAR patients.
Our findings were supported by Shin JH et al., who followed adult patients with rhinitis in Korea for 32 months and re-evaluated aeroallergen sensitization at 2 time points. They reported that 56.5% of rhinitis patients showed changes in allergen sensitization patterns, in which 62.8% developed new sensitization and 66.7% became desensitized.30 Among those who developed new sensitization, 30.6% developed allergen sensitization after lacking sensitization on the first test and 67.3% were sensitized to additional allergens. Among those who became desensitized, 67.3% became desensitized to one or more allergens (but not all allergens) on the second test, and 32.7% showed no sensitization on the second test.30 In our study, we also found that 37.8% of NAR patients developed new sensitization at the second evaluation (groups III and IV) and 23.2% developed new sensitization at the third evaluation (group II). Interestingly, 12.2% of AR patients became desensitized at the third evaluation (group III).
Previous studies identified that a family history of atopy was a significant predictor of the development of AR.19,31 Our study found that a family history of atopy, asthma, and AR was predominantly found in group III and IV patients, who developed AR at the second evaluation. This finding might suggest an important role of genetic factors in the development of AR in school-age children (Table 1 and Figure 3).
In patients who were previously diagnosed with NAR, the most common comorbidities in those who developed AR later were asthma, atopic dermatitis, and food allergy.16,19,28 In contrast, sinusitis was found to be more common in NAR patients who did not develop AR.16 Data on OSA have been inconsistent. Vichyanond et al. found that NAR patients who did not develop AR had more OSA than AR patients.16 However, Veskitkul et al. showed that OSA was found more often in patients with NAR who further developed AR than in patients who did not develop AR.19 Our study found that NAR patients who developed AR at the second and third evaluations (group IV) tended to have more atopic dermatitis, OSA, adenotonsillar hypertrophy, and food allergy than the other groups at the first and second evaluations. However, there was no significant difference among the 4 groups at the third evaluation, as these comorbidities might be outgrown by mid-to-late adolescence (Table 1 and Figure 3).
In this study, the most common sensitization in patients who developed AR at the second and third evaluations was house dust mites, which was consistent with other reports.16,19,28 House dust, animal dander, and pollen have been reported to trigger symptoms more frequently in NAR patients who developed AR, while temperature change was a more frequent trigger for NAR patients who did not develop AR.19 Our study observed that temperature change was the most frequent triggering factor in all groups at the first and second evaluations. For the environmental factors, previous studies reported a synergistic effect of family history of atopy and smoke exposure on increasing the risk of allergic sensitization and allergic diseases, including AR.32,33 In our study, smoking in the house predominated in group III and IV patients at the first and second evaluations. These two groups also had a strong family history of atopy (Table 1). However, the environmental and triggering factors were not different among the groups at the third evaluation (Table 2).
To our knowledge, this is the first study to report the characteristics of patients with NAR who went on to develop AR with a follow-up period of more than 10 years. This study may have some limitations. First, the small sample size was due to the extended follow-up time in a specific population group, and some patients could not be contacted. However, the demographic data between the followed and non-followed groups were not different. Second, the group defined as NAR in this study might consist of both true NAR and local allergic rhinitis (LAR).37 LAR is chronic rhinitis without evidence of aeroallergen sensitization by SPT or sIgE, but with a localized IgE-mediated nasal allergic response confirmed by a positive nasal allergen provocation test (NAPT).4,37,38 A systematic review of LAR in adults demonstrated that the proportion of detectable nasal sIgE in nasal secretions in patients with NAR was 10.2% (7.4-13.4%), while the prevalence of LAR in children was far lower than in adults.37,39 A previous study by our group found that only 3.7% of children with NAR had a positive NAPT to Dp.40 Due to the complexity of NAPT and the inability to test more than one allergen at once, it is not a practical test to perform routinely.41 We did not perform NAPT in our participants, so LAR was not identified in the group of NAR patients.
The characteristics of NAR in adults and children are different. Adults with NAR tend to have more persistent symptoms than AR patients, but the severity of symptoms is similar.34 Furthermore, the study by Rondón et al. showed that, in adult NAR patients, the persistence and severity of symptoms were comparable between those with persistent NAR and those who developed AR.18 In children, NAR has a wide variety of clinical characteristics and the data on the severity of rhinitis symptoms between NAR and AR groups are discordant. Chiang et al. found that preschool children with AR had more moderate to severe symptoms of nasal itching, sneezing, nasal congestion, and nasal discharge than children with NAR.28 However, Vichyanond et al. found that there was no difference in the severity of rhinitis between patients with NAR and AR.16 Veskitkul et al. showed that NAR patients who developed AR experienced more persistent, moderate to severe nasal and eye symptoms than NAR patients who did not develop AR.19 Our study found that nasal itching was frequently reported in patients who developed AR at the second evaluation (groups III and IV) and sneezing was frequently reported in patients who developed AR at both the second and third evaluations (group IV, Table 3). However, at the third evaluation, no significant difference was found in total nasal symptom, medication, or combination scores among the 4 groups (Figure 2A), and groups III and IV had a higher proportion of patients without symptoms compared to groups I and II (Figure 2B).
The main medications used to treat AR and NAR are INS, topical antihistamine sprays, combination therapy with INS and topical antihistamines, and as-needed oral antihistamines.22,35 We also found that INS was the medication most used in AR and NAR patients. Group II and IV patients, who were diagnosed with AR at the third evaluation, also used daily oral antihistamines because most AR patients in Thailand have perennial AR with house dust mite sensitization.36 The remission rate of NAR varies among studies. Westman M et al. revealed that 73% of NAR children experienced remission during a 4-year follow-up.13 Lee SH et al. showed that 37% of NAR children reported no chronic rhinitis symptoms after two years.29 Our study found that clinical remission of NAR was 14.6% after more than 10 years of follow-up, with no difference among the groups. The discrepancy in NAR remission rates between studies may stem from the duration of follow-up and the definition of remission. The two previous studies defined remission as the absence of rhinitis symptoms at the follow-up time point, but they did not mention medication use.13,29 In contrast, we defined remission as the absence of rhinitis symptoms and the absence of medication use to control symptoms for at least one year.
Conclusion
Long-term follow-up of NAR in children demonstrated that 39% of them retained the same diagnosis. The diagnosis was changed to AR at the second or third evaluations in 61% of the patients. Therefore, periodic re-evaluation of aeroallergen sensitization is required to ensure a correct diagnosis. Appropriate management, such as allergen avoidance recommendations and specific treatments, including allergen immunotherapy, can be offered to patients who develop AR.
Figure 1. Participants' data availability at the 1st, 2nd, and 3rd evaluations. Skin prick test to aeroallergens was conducted at each evaluation.
Figure 2. Dot plots of the total nasal symptom score, medication score, and combination score among the 4 groups at the third evaluation (2A), and the trend of severity and persistence of rhinitis symptoms at each evaluation for the 4 groups of patients (2B).
Figure 3. The significant factors at all evaluations which influence the natural history of NAR patients.
Table 1 . Demographic data of the patients with NAR who were classified into four groups.
*statistically significant among 4 groups of patients
Table 2 . Environmental and triggering factors at initial, 2 nd and 3 rd evaluations in all groups.
*statistically significant among 4 groups of patients
Table 3 . The nasal symptoms at the initial, 2 nd and 3 rd evaluations in all groups.
*statistically significant among 4 groups of patients | 2023-04-17T06:16:12.995Z | 2023-04-17T00:00:00.000 | {
"year": 2023,
"sha1": "06d3ad783885a3891477b21fa5d0cb163585ff54",
"oa_license": null,
"oa_url": "https://doi.org/10.12932/ap-140922-1455",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f0cb352c652440a29189c7b58310460b825bcaab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
145024029 | pes2o/s2orc | v3-fos-license | Cul4A Modulates Invasion and Metastasis of Lung Cancer through Regulation of ANXA10
Cullin 4A (Cul4A) is overexpressed in a number of cancers and has been established as an oncogene. This study aimed to elucidate the role of Cul4A in lung cancer invasion and metastasis. We observed that Cul4A was overexpressed in non-small cell lung cancer (NSCLC) tissues and the overexpression of Cul4A was associated with poor prognosis after surgical resection and it also decreased the expression of the tumor suppressor protein annexin A10 (ANXA10). The knockdown of Cul4A was associated with the upregulation of ANXA10, and the forced expression of Cul4A was associated with the downregulation of ANXA10 in lung cancer cells. Further studies showed that the knockdown of Cul4A inhibited the invasion and metastasis of lung cancer cells, which was reversed by the further knockdown of ANXA10. In addition, the knockdown of Cul4A inhibited lung tumor metastasis in mouse tail vein injection xenograft models. Notably, Cul4A regulated the degradation of ANXA10 through its interaction with ANXA10 and ubiquitination in lung cancer cells. Our findings suggest that Cul4A is a prognostic marker in NSCLC patients, and Cul4A plays important roles in lung cancer invasion and metastasis through the regulation of the ANXA10 tumor suppressor.
Cell Invasion Assay
The invasion assays were performed in 24-well (6.5 mm diameter) cell-culture inserts (8.0 µm pore size, Corning, Tewksbury, MA, USA) coated with an indicator layer of growth factor-reduced Matrigel (BD Transduction Laboratories, San Jose, CA, USA). The cells were plated in the upper well in 0.2% serum and then incubated with 5% FBS and 100 ng/mL fibronectin in the lower chambers. After 24 h, cells in the upper chamber were removed with a cotton swab. Cells that migrated into the lower chamber were fixed in 4% PFA and then stained with 0.5% crystal violet. Filters were photographed and the total number of cells was quantified.
Western Blot Analysis
Whole protein was extracted using mammalian protein extraction reagent (M-PER) from the cell lines. The proteins were digested using the Phosphatase Inhibitor Cocktail Set II (Calbiochem, San Diego, CA, USA) and complete protease inhibitor cocktails (Roche, Lewes, UK) according to the manufacturer's protocols. The digested proteins were separated on 4-15% gradient sodium dodecyl sulfate (SDS)-polyacrylamide gels and transferred to Immobilon-P membranes (Millipore, Billerica, MA, USA). The following primary antibodies were used: Cul4A (Abcam, Cambridge, MA, USA), ANXA10 (GeneTex), and β-actin (Sigma, St. Louis, MO, USA). After incubation with indicated secondary antibodies, the membranes were washed thoroughly and an enhanced chemiluminescence (ECL) blotting analysis system (GE Healthcare Life Sciences, Piscataway, NJ, USA) was used for antigen-antibody detection. The relative intensities of protein bands were analyzed by densitometry using ImageJ 1.46r software (National Institutes of Health, Bethesda, MD, USA).
Transfection with Small Interfering RNA (siRNA) and Vectors
Pre-designed and validated Cul4A (Dharmacon, Lafayette, CO, USA), ANXA10 (Santa Cruz, Santa Cruz, CA, USA), and universal negative control siRNAs were transfected (final concentration = 50 nM) in cells grown to 80% confluence on 6-well plates using an antibiotic-free media and Lipofectamine™ RNAiMAX reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. At 96 h after transfection, the cells were treated with gemcitabine for 72 h followed by counting for viable cells. The pCMV6-ANXA10-GFP (OriGene, Rockville, MD, USA) and empty pCDNA3 (Invitrogen, Carlsbad, CA, USA) vectors were transfected with OmniFect™ transfection reagent (TransOMIC, Huntsville, AL, USA), following the manufacturer's instructions. Cells were plated in 6-well plates in antibiotic-free media and then transfection was performed with cells at 80% confluence, with a final concentration of 0.5 µg for each vector.
Protein Degradation Assay
A protein degradation assay was used to evaluate the effects of Cul4A on the decay of ANXA10 in lung cancer cells. Cells transfected with Cul4A siRNA or the Cul4A vector were plated on 6 cm culture dishes. At 80% confluence, the cells were exposed to 100 µg/mL of cycloheximide. The cells were then harvested at the indicated time points. Total cellular proteins were extracted and analyzed by western blot analysis using β-actin as a loading control.
In Vivo Ubiquitination Assay
The 293T cells were co-transfected with a combination of pBabe-Cul4A-myc-his and pCMV6-ANXA10-GFP (OriGene) with or without pRK5-HA-Ubiquitin-WT (Addgene, Cambridge, MA, USA). All the cells were treated with 10 µg/mL of MG132 for 24 h prior to lysis. Anti-GFP antibody was used for immunoprecipitation. Anti-HA tag antibody (Cell Signaling, Danvers, MA, USA) was used for the western blot analysis.
Tail Vein Injection Mouse Xenograft Models
Tail vein injection was used to establish a lung metastasis xenograft model for the Cul4A knockdown and metastasis. Approval from the Institutional Animal Care and Use Committee (IACUC) was obtained for the experiments (IACUC No. 2014121206). Female Balb/c athymic nude mice (5-6 weeks old) were housed under specific pathogen-free conditions. Cells were cultured in RPMI media, then suspended at a concentration of 1 × 10 6 cells/100 µL, and 100 µL of this suspension was injected into the tail vein of the mice. Fluorescence molecular tomography (FMT) (PerkinElmer, Waltham, MA, USA) imaging was performed 6 weeks after injection of the lung cancer cells. ProSense680 (PerkinElmer) was injected into the tail vein and then FMT imaging was performed 48 h later. The mice were euthanized at 8 weeks, and their lungs were harvested for further analysis.
Statistical Analysis
Data are presented as mean values ± standard error of deviation (SD). Student's t-test was used for comparing the means, unless otherwise mentioned. Statistical analysis was carried out using MedCalc version 15 (MedCalc Software, Ostend, Belgium). A p value < 0.05 was considered statistically significant. All the statistical tests were two-tailed.
Cul4A Is Upregulated in the NSCLC Tissues
First, we examined the Cul4A protein expression in 73 primary NSCLC tissues. An increased expression of Cul4A was observed in 59 (80.8%) tumor tissues compared to the paired normal tissues ( Figure 1A). Receiver operating characteristic (ROC) curves and the Youden index were used to determine the optimal cutoff value of Cul4A IRS for disease recurrence after surgical resection of NSCLC lung cancer ( Figure S1). High expression of Cul4A levels, which was defined by an IRS score greater than 6, were detected in 12 of the 73 (16.4%) NSCLC tissue specimens that were analyzed, and the high expression was associated with a significantly decreased disease-free survival (DFS) after surgical resection of the lung cancer ( Figure 1B,C). The expression of ANXA10 was also examined and a significantly negative correlation of the expression of Cul4A and ANXA10 was determined using the Spearman rank correlation test ( Figure 1D,E).
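The ROC/Youden-index cutoff selection mentioned above can be sketched as follows; the IRS values and recurrence labels below are synthetic placeholders, not data from the study, and the use of scikit-learn is only one convenient way to compute the curve.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic example: immunoreactive scores (IRS, 0-12) and recurrence labels (0/1).
rng = np.random.default_rng(7)
irs = rng.integers(0, 13, size=73)
recurrence = (irs + rng.normal(0, 3, size=73) > 7).astype(int)

fpr, tpr, thresholds = roc_curve(recurrence, irs)
youden = tpr - fpr                          # Youden index J = sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(youden)]
print(f"optimal IRS cutoff by Youden index: {best_cutoff}")
```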
Knockdown of Cul4A Is Associated with the Upregulation of ANXA10 in Lung Cancer Cells
We further explored the expression of the cancer metastasis suppressor ANXA10 after knocking down Cul4A. We observed an increase in the ANXA10 protein level in Cul4A knockdown lung cancer cells using western blotting ( Figure 2A). The expression of ANXA10 mRNA was further evaluated by RT-PCR in the Cul4A knockdown H460 and A549 lung cancer cells. Remarkably, no obvious change in the ANXA10 mRNA levels was observed in the Cul4A shRNA transfected groups of lung cancer cells compared to the cells transfected with the empty vector ( Figure 2B). Cul4A overexpression in the H460 lung cancer cells resulted in lowered protein levels of ANXA10 ( Figure 2C). Similar to the results observed with transient siRNA transfection, H460 and A549 lung cancer cells were stably transfected with Cul4A shRNA using retroviral transduction, which resulted in the knockdown of Cul4A, and showed increased ANXA10 protein levels ( Figure 2D).
Knockdown of Cul4A Represses Metastasis and Invasion in Lung Cancer Cells
Effects of Cul4A knockdown on metastasis and invasion of Cul4A shRNA transfected H460 and A549 stable lung cancer cells were evaluated using cell migration and invasion assays. We observed that the knockdown of Cul4A significantly repressed metastasis ( Figure 3A-D) and invasion ( Figure 3E-H) of lung cancer cells.
Cell Migration and Invasion Are Restored by Knockdown of ANXA10 in Cul4A Knockdown Lung Cancer Cells
To evaluate the effect of ANXA10 on the cell migration and invasion of lung cancer cells, ANXA10 knockdown assay using ANXA10 siRNA was performed in Cul4A knockdown H460 and A549 stable lung cancer cells ( Figure 4A). Both cell migration ( Figure 4B-E) and invasion ( Figure 4F-I) were restored following the knockdown of ANXA10 in Cul4A knockdown H460 and A549 lung cancer stable cells.
Knockdown of Cul4A Represses Metastasis of Lung Cancer Tumors in Tail Vein Injection Mouse Models
Tail vein injection models were established using H460 and A549 cells that were stably transfected with Cul4A shRNA to confirm the effect of Cul4A on lung cancer metastasis. A significant reduction in lung metastasis was observed in the Cul4A knockdown groups of H460 using FMT imaging ( Figure 5A,B) and A549 ( Figure 5D,E) lung cancer cells.
An increased expression of ANXA10 was also observed in the Cul4A knockdown H460 ( Figure 5C) and A549 cells compared to the empty virus transfected lung cancer cells ( Figure 5F).
ANXA10 Expression Is Regulated by Protein Degradation
A protein degradation assay was used to evaluate the stability of the ANXA10 protein in Cul4A knockdown and Cul4A-overexpressing H460 lung cancer cells. After treatment with cycloheximide for the indicated periods, protein degradation was dramatically increased in the Cul4A-overexpressing lung cancer cells (Figure 6A), while Cul4A knockdown lung cancer cells showed a marked reduction in protein degradation (Figure 6B). The NEDD8 inhibitor specifically blocks the NEDDylation and subsequent function of Cul4A [30]. The protein degradation assay was also performed in H460 lung cancer cells after treatment with MLN4924 (Sigma, St. Louis, MO, USA), a NEDD8 inhibitor. Decreased protein degradation was observed in the MLN4924-treated lung cancer cells (Figure 6F).
(Figure 6 legend, in part: for the reciprocal immunoprecipitation and in vivo ubiquitination assays, the cells were treated with 10 µg/mL of MG132 for 24 h prior to being lysed; anti-GFP antibody was used for the immunoprecipitation and anti-HA antibody for the western blot (WB) analysis. (F) Protein degradation assay for ANXA10 in H460 lung cancer cells treated with DMSO or 1 µM of MLN4924, a NEDD8 inhibitor, for 24 h; H460 cells were incubated with 100 µg/mL cycloheximide (CHX) for the indicated time periods, and the expression of ANXA10 was quantified by densitometry, normalized to actin, using the 0 h groups as the control.)
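To make the quantification described in the legend concrete, the following minimal sketch normalizes hypothetical band intensities to actin and expresses them relative to the 0 h group; all numbers are assumed for illustration only.

```python
# Hypothetical densitometry readings (arbitrary units) from a cycloheximide-chase blot;
# these are placeholders, not values from the paper.
timepoints_h = [0, 2, 4, 8]
anxa10 = [1000.0, 840.0, 610.0, 390.0]   # ANXA10 band intensity at each time point
actin = [905.0, 915.0, 890.0, 900.0]     # actin loading-control intensity

# Normalize ANXA10 to actin, then express each value relative to the 0 h group,
# as in the quantification described for Figure 6.
ratios = [a / b for a, b in zip(anxa10, actin)]
relative = [r / ratios[0] for r in ratios]

for t, rel in zip(timepoints_h, relative):
    print(f"{t} h: relative ANXA10 remaining = {rel:.2f}")
# A steeper decline indicates faster ANXA10 turnover, e.g. in Cul4A-overexpressing
# cells, whereas Cul4A knockdown or MLN4924 treatment would flatten the curve.
```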
ANXA10 Is Ubiquitinated by Cul4A Through a Protein-Protein Interaction
We evaluated the role of Cul4A mediated ubiquitination in ANXA10 degradation. Co-immunoprecipitation assay was performed and the association of ANXA10 with Cul4A was observed ( Figure 6C,D). Further in vivo ubiquitination assays showed that the overexpression of Cul4A increased the ubiquitination of ANXA10 in the 293T cells ( Figure 6E).
Discussion
Cul4A has been implicated in multiple cancers, namely breast [7], mesothelioma [8], lung [9], and liver cancers [10]. In this study, we observed that upregulated Cul4A is associated with poor prognosis in NSCLC patients after surgery. On the other hand, the knockdown of Cul4A is associated with decreased invasion and metastasis of lung cancer cells and tumors. The knockdown of Cul4A is associated with increased expression of ANXA10, a tumor suppressor protein, in lung cancer cells. Further experimentation revealed that Cul4A regulates ANXA10 through ubiquitination and protein degradation in lung cancer cells. We hypothesized that Cul4A-mediated degradation of ANXA10 is one of the key mechanisms in lung cancer invasion and metastasis. To our knowledge, based on a review of the published literature, this is the first report of an association between Cul4A and ANXA10 in NSCLC tissues and lung cancer cells.
A nude mouse tail vein injection metastasis model was established in our study, which showed a decreased metastatic potential in Cul4A knockdown lung cancer cells. Cul4A has been reported to promote cancer metastasis and invasion in osteosarcoma [31] cells. In breast cancer cells, Cul4A induces an epithelial-mesenchymal transition, and it promotes cancer metastasis by regulating the expression of the EMT regulatory ZEB1 gene [32]. In colorectal cancers, overexpression of Cul4A induces the epithelial-mesenchymal transition through the regulation of H3K4 trimethylation at the E-cadherin, N-cadherin, and vimentin gene promoters [33]. In gastric cancers, overexpression of Cul4A promoted gastric cancer cell proliferation and epithelial-mesenchymal transition by downregulating LATS1-Hippo-YAP signaling [34]. Our study showed that the overexpression of Cul4A was associated with the downregulation of the tumor suppressor ANXA10 in lung cancer tissues and cells, which may provide another mechanism for the role of Cul4A in lung cancer invasion and metastasis. The mechanism of lung cancer invasion and metastasis regulated by Cul4A is complex, and further studies regarding Cul4A and other metastasis suppressors are still warranted in the future.
ANXA10 is the latest identified member of the annexin family of calcium (Ca 2+ ) and phospholipid-binding proteins [35]. The downregulation of ANXA10 correlates with decreased differentiation, invasion, and tumor progression, pointing to a possible tumor suppressor role [35]. In bladder cancer, the downregulation of ANXA10 is also related to the aggressiveness of the cancer [25]. In hepatocellular carcinoma, the downregulation of ANXA10 correlates with p53 mutation and it is associated with vascular invasion, tumor progression, and poor prognosis [36]. Decreased ANXA10 has been correlated with increased invasion in a colorectal cancer cell line and with the increased proliferation and migration in a gastric cancer cell line [37]. Additionally, the upregulation of S100A4, which is considered a mediator of metastasis, has been reported to downregulate ANXA10 in a lung cancer cell line [38]. In our study, the knockdown of ANXA10 increases lung cancer cells migration and invasion. These reports together show strong evidence of the tumor suppressor and metastasis role of ANXA10 in cancer cells.
In conclusion, our results showed that Cul4A is important in lung cancer cell invasion and metastasis through the inhibition of ANXA10, a tumor suppressor. Our results add ANXA10 to the repertoire of tumor suppressor proteins that are inhibited by Cul4A in cancer, and thus, it suggests Cul4A as a potential drug target for the development of novel therapy for lung cancer in the future.
Conclusions
Our findings suggest that Cul4A is a prognostic marker in NSCLC patients after surgery of lung cancer. Cul4A also plays important roles in lung cancer invasion and metastasis partially through ubiquitin-mediated protein degradation of ANXA10 in lung cancer cells. The role of ANXA10 as a tumor and metastasis suppressor in lung cancer cells was further confirmed. | 2019-05-05T13:03:10.451Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "8f73ade211c02f814a6c2ad3427033fc7d44616b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cancers11050618",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f73ade211c02f814a6c2ad3427033fc7d44616b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2793989 | pes2o/s2orc | v3-fos-license | Who Am I? The Self/Subject According to Psychoanalytic Theory
The article argues that the importation into psychoanalytic theory of the terms “self” and “subject” is neither true to Freud’s intentions, nor necessary, nor helpful. Having observed how Freud undermined these concepts in both his Topographical Model and Structural Model, the article turns to the position of Ogden. Two of his contributions masterfully deconstruct the concept of “the subject” through a selection of positions advocated by Freud, Klein, and Winnicott. But he then revitalizes the concept in his own way, claiming it to be “central” and “irreducible.” This article argues for a more radical stance than Ogden’s; whereas Ogden is reductionist with regard to the subject, this article argues for eliminativism.
Introduction
There have been several attempts since Freud to introduce into psychoanalytic theory concepts of 'the self' and 'the subject'. This essay argues that these attempts are neither necessary, nor helpful, nor true to Freud's intentions. Freud himself hardly used the terms 'self' or 'subject', and when he did he certainly did not intend them as technical terms.
In America it was Kohut who began talking of the self as a `psychic structure ' (1971: xv), `a content of the mental apparatus' (p.xv) with a `psychic location ' (p. xv).In England Winnicott introduced the idea of a `true self' linking it with the id.Guntrip depicts the evolution of psychoanalytic theory as consisting of four stages before it was able to reach its highpoint as `a theory of the ego as a real personal self ' (1968: 127).Bollas and Khan have taken on from Winnicott the idea of the self as an entity, though they differentiate themselves from Winnicott in the following ways.For Bollas the true self consists not only of the id, but also the ego, since the latter contains the `organizing idiom' and the `factor of personality ' (1987: 8), both of which, for him, form part of the constitution of the self.He regards the `true self' as `the historical kernel of the infant's instinctual and ego dispositions' (p.51).Khan dislikes the adjective `true' in Winnicott's phrase (and he accuses Guntrip of falling into the `danger of romanticization of a pure-self system ' [1996: 304]), but `the self' is a theme throughout his writing, and he regards self-experience as `more than can be accounted for by our structural hypotheses ' (1996: 304).The bulk of this essay will consist of demonstrating how such theoretical directions run counter to Freud's intentions and represent a return to an earlier, more narcissistic mode of thinking from which Freud enabled us to free ourselves.
Freud
Before Freud the vast majority of European philosophers -from Plato and Aristotle to Kant and Descartes -regarded human beings as having an essence, to which they gave the name `soul' or `self'.The main characteristic of this supposed entity, apart from it constituting our `core', was that it was `the subject'.The meaning of the word subject here is connected to its grammatical meaning as when we say `the subject of the sentence', the thing which carries out the action denoted by the verb.The self was regarded as the subject of both our mental and our physical actions, i.e. the thinker of our thoughts, experiencer of our experiences, perceiver of our perceptions, feeler of our feelings, as well as the initiator of our physical actions, the agent.Combined with these two characteristics of being the essence and being a subject was the idea of being unitary, single, undivided over time.Thus the self can always be referred to by the word `I' even when the latter features in such diverse contexts as moral judgements, inner sensations, sense-perceptions, intentions or physical actions (`I deem that irresponsible'; `I feel a pain'; `I heard a bang'; `I plan to retreat'; `I kicked the ball'.) It was part of the genius of Freud that he was able to see through this concept.He did not accept the existence of any single entity that could be put forward as an answer to the question `Who am I' or `What am I'?We neither are nor contain anything that remains identical over time.Even at one moment of time we are not one thing.Rather we are a multiplicity of interacting systems and processes.
Topographical Model
Freud's breaking up of the unity of the person begins with his earliest writings on hysteria. For his hysterical patients seemed both to know, yet also to not know, certain things.
Thus Freud writes of Elisabeth von R.'s love for her brother-in-law: 'With regard to these feelings she was in the peculiar situation of knowing and at the same time not knowing' (1895b: 165). And in his discussion of Lucy R., he recounts the following. He had asked her why, if she knew she loved her employer, she had not told Freud. She replied: 'I didn't know - or rather I didn't want to know. I wanted to drive it out of my head and not think of it again; and I believe latterly I have succeeded' (1895a: 117).
How could one entity both know and not know something? Freud's solution was to divide us into consciousness and an unconscious. 1 The unconscious of the patients in question 'knew', but censorship prevented this information from passing into their consciousness.
What we are dealing with here is not something particular to neurotics, but a fundamental plurality of human subjectivity. That it is characteristic of everyone is evidenced by, for example, dreaming. The following footnote, added by Freud in 1919 to 'The Interpretation of Dreams', illustrates how dreaming cannot be explained if we envisage ourselves as a unity: No doubt a wish-fulfilment must bring pleasure; but the question then arises 'To whom?'. To the person who has the wish of course. But, as we know, a dreamer's relation to his wishes is quite a peculiar one. He repudiates them and censors them - he has no liking for them, in short. So that their fulfilment will give him no pleasure, but just the opposite; and experience shows that this opposite appears in the form of anxiety, a fact which has still to be explained. Thus a dreamer in relation to his dream-wishes can only be compared to an amalgamation of two separate people who are linked by some important common element (1900: 580-581).
It makes no sense to ask what in all this is the subject, the self. The dreamer? But if that were the case then it would be the dreamer that had the wish, so its fulfilment would give him/her pleasure. The dreamer's experience of anxiety in the face of the wish-fulfilment indicates that we must ascribe the wish to some other agency, an agency that wishes for things that the dreamer does not. The situation can only be satisfactorily explained on the assumption of two different agencies, one that wishes and the other that resists this wish.
To choose one of these two as the self would be arbitrary. But to claim that they are both the self would be contradictory; the very concept of selfhood implies a unity that does not allow for opposed agencies. As Freud implies, we are dealing here with an irreducible plurality, comparable only to 'an amalgamation of two separate people'.
Structural Model
Thus far we have been looking at the destruction of the concept of 'self' or 'subject' that results from Freud's topographical model. The structural model suggests the same result. Though we may prereflectively appear as a unity, we can only be satisfactorily represented as a plurality of the three agencies of id, ego and superego.
Ego = Self/Subject?
One of the constituents of the structural model, the ego, may look as though it can be equated with 'self' or 'subject': is that not implied by the fact that the literal meaning of the German term, das Ich, is 'the I'? But Freud's elaboration of the concept of the ego clearly precludes such an equation. It is true that the ego is the subject of consciousness, but Freud's point is that the subject of consciousness, far from constituting our core, is a marginal agency at the outer surface of the mind, occasionally able to influence the expression of the id's instincts, but often not. If anything occupies a central position it is the id: on one occasion, Freud describes it as 'the core of our being' (1938: 196). The ego, by comparison, is marginal and impotent, and thus very unlike a 'self' or an autonomous agent, for two reasons.
1) Lack of Power.
Whereas the idea of a self is of something from whose orders all actions proceed, the ego enjoys no such autonomy: `it is not even master in its own house ' (1917a: 285), i.e. within the sphere of the mind.It may order the id to behave in a way that it deems desirable, but `the life of our sexual instincts cannot be wholly tamed ' (1917b: 143).The influence it can exert is lamentably small compared to the idea of a self as sole agent.Freud envisages the mind as a hierarchy of agencies (1917b: 141).The `highest' agency, the ego, initiates a chain of commands; but at any of the many stages before the command is carried out, it may be met with refusal.It is as though the owner of a newspaper tells the editor what he wants to be written, the editor tells the writer, the writer writes it, but then someone at the printing press does not like it so refuses to print it (my comparison not Freud's).
The limit of the ego's power can also be seen on the level of thought: it does not decide what thoughts arise, when they arise, and neither can it order them away once they have arisen: Thoughts emerge suddenly without one's knowing where they come from, nor can one do anything to drive them away.These alien guests even seem to be more powerful than those which are at the ego's command.They resist all the well-proved measures of enforcement used by the will, remain unmoved by logical refutation, and are unaffected by the contradictory assertions of reality (1917b: 141-142).
Freud mentions disowned impulses that feel foreign to the ego, which the ego fears, takes precautions against, yet feels paralyzed by.Psychoanalysis, he says, speaks thus to such an ego (1917b: 142): You over-estimated your strength when you thought you could treat your sexual instincts as you liked and could utterly ignore their intentions.The result is that they have rebelled and have taken their own obscure paths to escape this suppression; they have established their rights in a manner you cannot approve.
So here too the ego is portrayed as quite unable to defend against rebellions on the part of the id.
Whereas a self is characterized as aware of all of our thoughts and feelings, the ego `must content itself with scanty information of what is going on unconsciously in its mind ' (1917a: 285).The reports available to it are neither complete nor always accurate.Consciousness has access only to a small fraction of the mind's current activities.Rhetorically addressing the ego, Freud writes (1917b: 142-143): You feel sure that you are informed of all that goes on in your mind if it is of any importance at all, because in that case, you believe, your consciousness gives you news of it.And if you have had no information of something in your mind you confidently assume that it does not exist there.Indeed, you go so far as to regard what is "mental" as identical with what is "conscious" -that is, with what is known to you -in spite of the most obvious evidence that a great deal more must constantly be going on in your mind than can be known to your consciousness.Come, let yourself be taught something on this one point!What is in your mind does not coincide with what you are conscious of; whether something is going on in your mind and whether you hear of it, are two different things.In the ordinary way, I will admit, the intelligence which reaches your consciousness is enough for your needs; and you may cherish the illusion that you learn of all the more important things.But in some cases, as in that of an instinctual conflict such as I have de-scribed, your intelligence service breaks down and your will then extends no further than your knowledge.In every case, however, the news that reaches your consciousness is incomplete and often not to be relied on.Often enough, too, it happens that you get news of events only when they are over and when you can no longer do anything to change them.Even if you are not ill, who can tell all that is stirring in your mind of which you know nothing or are falsely informed?You behave like an absolute ruler who is content with the information supplied him by his highest officials and never goes among the people to hear their voice.
It is thus inappropriate to equate the ego with the self because not only does it have at best intermittent control over the id, it also has only partial knowledge of the contents of its own mind. 2 Freud aligned himself on this point with Copernicus and Darwin. The former undermined the narcissism that regarded man's planet as the centre of the universe; the latter undermined the narcissism that set man apart from animals as God's favourite creature. But Freud predicted that 'human megalomania' would suffer its 'most wounding blow' from psychoanalysis' contention that the ego is not even supreme within its own mind (1917a: 285; 1917b: 139-143).
2 Two other considerations make the ego unsuitable for being equated with the self: 1) A part of it is unconscious. In Freud's early writings the ego was held to be co-extensive with consciousness; but in his 1923 essay, 'The Ego and the Id', he expressed his realization that resistance, though proceeding from the ego, is unconscious (see, e.g., pp. 16-18). Thus he had to accept that the ego is not wholly conscious. 2) The ego itself can be split and divided against itself: see (in the context of fetishism) Freud (1915: 189).
Id = Self/Subject?
Could the id be characterized as our 'self'? Is this not suggested by Freud's remark about the id being the 'core of our being'? But to describe the id as the 'self', given that it departs so far from the usual connotations of the concept of a self, would be at best counter-intuitive and at worst meaningless. The id is not the subject of consciousness. It can neither know itself, nor make itself known, depending for that on the ego: it 'is accessible even to our own knowledge only through the medium of another agency' (Freud 1938: 196). It is not the part of us that is capable of reason, nor that which perceives the external world.
Neither is it unitary, but rather a plurality of instincts differentiated from each other because of being associated with different organs (1938: 197).These various instincts are often opposed to each other, moreover, some being predominantly infused with Eros and some with destructiveness (1938: 196).Thus Freud's intention was not, having removed selfhood and subjectivity from the ego, to rehabilitate them in the id.Rather he regarded them as suspect concepts, to be done away with altogether.It was simply narcissism that gave rise to them and sustains them, with their implications of autonomy and unity (1917a: 284-285; 1917b: 139-143).
Besides, to select just one out of three things would be arbitrary and too restricted, given that it is the combination of the three that is supposed to represent the workings of our mind.
In that case how about characterizing the conglomeration of all three as the 'self' or 'subject'? The problem with this move is that the conglomeration would only be misleadingly characterized by those terms. The three constituents lack sufficient compatibility and mutual coherence to be capturable by these concepts that suggest unity. The id and the ego do not share common goals (pleasure v's safety), do not function according to the same principles (pleasure principle v's reality principle). In fact the id can threaten the very existence of the ego: though it cannot do away with it altogether, it can shatter its carefully built-up structure or change it back into a portion of the id. 3 Freud describes the id as an enemy of the ego, and one that is harder to defend against than an external enemy. One can flee from an external enemy, but the id is always by the side of the ego; even if it can be temporarily held down, it continues to issue threats from that position. 4 The ego is similarly antagonistic and antipathetic to the id. The impulses emerging from the id seek to actualize themselves but they are obstructed by the ego, which, if it does not approve of them, 'ruthlessly' inhibits them. 5 If ego and id were portrayed by Freud as companions functioning cooperatively to achieve a common purpose, they could more easily be subsumed under a unitary whole, but they are depicted as mutually antagonistic and independent.
The ego, which seeks to maintain itself in an environment of overwhelming mechanical forces, is threatened by dangers which come in the first instance from external reality; but dangers do not threaten it from there alone. Its own id is a source of similar dangers, and that for two different reasons. In the first place, an excessive strength of instinct can damage the ego in a similar way to an excessive 'stimulus' from the external world. It is true that the former cannot destroy it; but it can destroy its characteristic dynamic organization and change the ego back into a portion of the id. It adopts the same methods of defence against both, but its defence against the internal enemy is particularly inadequate. As a result of having originally been identical with this latter enemy and of having lived with it since on the most intimate terms, it has great difficulty in escaping from the internal dangers. They persist as threats, even if they can be temporarily held down.
At this point the reader may respond: But surely the three of them are three constituents of something; they constitute a larger whole. This larger whole may be heterogeneous, but what is the harm in talking of a heterogeneous or divided 'self' or 'subject'? Why cannot a single thing contain within itself opposing tendencies? 1) To speak of one thing containing ego, id and superego implies the existence of some entity that exists over and above these three. But there is no extra entity to which the three belong: the mind is nothing other than the plurality of these three.
2) If we want some term for the three of them together, let us either use some new concept other than the self or the subject, which does not contain the shortcomings of those, or let us stick with what Freud himself uses here -terms such as the mind, the psyche.These are non-technical terms.Other non-technical terms are harmless, such as `the person' or `the individual'.Psychoanalysis does not need to suggest that such terms are eliminated from language; but it should remember that for Freud 1) they were not `scientific', and 2) when that which they designate is analyzed `scientifically ' (1917a: 284, 285; 1917b: 139, 142), it is revealed as a plurality of three antagonistic and independent systems or agencies.
Ogden
I hope to have shown by now that those who speak of the self as an entity, such as the authors mentioned in the introduction, are theorizing in a way that is neither necessary nor true to Freud's intentions. 6 Ogden's position (1992a, 1992b) is more subtle, for two reasons. He deliberately avoids the use of the term 'self', being suspicious of its 'static, reifying meanings' (1992a: 522). Secondly, he uses the term 'subject' not to refer to a fixed entity but to something that is 'dialectically constituted'. In case it seems, therefore, that his position is little different from that argued for here, the rest of the essay will be taken up with a critique of it. Ogden's two articles (1992a, 1992b) on what he terms the 'dialectically constituted/decentred subject of psychoanalysis' can be analyzed as consisting of two strands. In one, he masterfully deconstructs the concept of 'the subject' through a selection of positions advocated by Freud, Klein and Winnicott. Each of these three he depicts as having in different ways undermined the concept of a unitary subject: Freud through his divisions of the mind into 1) consciousness and the unconscious, and 2) id, ego and superego; Klein and Winnicott through their emphasis on an intersubjective context as a necessary requirement for a sense of individual subjectivity. 7 But having deconstructed 'the subject', Ogden then reconstructs it in the second strand. Having shown that the subject is neither consciousness nor the dynamic unconscious, he then argues that it is 'constituted' by 'the dialectical interplay' between the two (1992a: 518). Having replaced it with id, ego and superego, and shown it to be represented by none of these taken singly, he then reconstitutes it as the 'discourse of' the three (1992a: 520). A concept that has been shown to be redundant thus becomes resuscitated. What would, according to the reasoning both of this essay and of Ogden's first strand, preferably remain decomposed becomes recomposed.
Ogden's preoccupation with the question of the 'location of the subject' (1992a, 1992b: passim) reveals a belief that the subject must be located somewhere. But this belief is only valid if we remain committed to the view that a subject exists. When Ogden writes that 'The subject for Freud is to be sought in the phenomenology corresponding to that which lies in the relations between [his italics] consciousness and unconsciousness' (1992a: 519), it is not surprising that he supplies no reference to Freud's writings: I doubt if one could be found where Freud states that 'the subject is to be sought' anywhere. The sentence reveals Ogden's assumption that the subject must be sought. But if it is an illusion, why does it need to be sought?
Having not found it to be equivalent to ego, id or superego, he sees it as constituted out of the interplay of these three.But Freud's concern, I hope by now to have shown, is to undermine the idea of a single subject.So to say `the subject is X' is not being true to Freud's intentions, whatever referent we supply for X.
Ogden provides an incisive account of the way in which the subject has been decentred, dethroned, dispersed, but he then cannot resist the urge to re-instate it.A sign that he is not being true to Freud is his claim (1992a: 517) that a `central', `irreducible element' of psychoanalysis is Freud's `conception of the subject'.Yet, as he himself admits (1992a: 517), Freud hardly used the term `subject'.It is perhaps in order to address this seeming inconsistency that he asserts, `Despite the central importance of this theme, it remained a largely implicit one in Freud's writing ' (1992a: 517).But if the theme was not addressed explicitly by Freud, in what sense is it of central importance?Thus I regard as highly dubious his claim that in Freud's writing one can `discern the creation of a new conceptual entity: the psychoanalytic subject ' (1992a: 517).The concept of the `psychoanalytic subject' should rather be viewed as an invention of Ogden's.Some may want to counter that Ogden's position is not significantly different from that proposed here, in that both claim the subject to be nothing other than ego, id and superego (or consciousness, preconscious and unconscious).The difference lies in the interpretation of the words `nothing other than'.When a physicist says that `heat is nothing other than movement of molecules', he is not proposing that heat is an illusion.He is proposing the reduction of heat to something more fundamental, but not the elimination of the concept of heat.Indeed the validity of the concept of heat is safeguarded by the fact that it can easily be translated into the more fundamental level of molecule-movement.But when a sceptic says that `the ghost in the garden is nothing other than the play of light and shadow and rustling of leaves', he is proposing that the ghost is an illusion.He is proposing not the reduction of the concept of ghosts to something else, but its elimination.Ogden's view of the relationship of `the subject' to ego, id and superego is equivalent to that of heat and molecule-movement; mine is equivalent to ghosts and light, shadow and rustling. | 2017-11-29T02:53:10.724Z | 2014-08-01T00:00:00.000 | {
"year": 2014,
"sha1": "075be613c53ea149fd7f79678792300b148456fb",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2158244014545971",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0442ca3b43db997ec0ded653f6e4b8d28064f123",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
11888232 | pes2o/s2orc | v3-fos-license | Biomechanical comparison of the four-strand cruciate and Strickland techniques in animal tendons
OBJECTIVE: The objective of this study was to compare two four-strand techniques: the traditional Strickland and cruciate techniques. METHODS: Thirty-eight Achilles tendons were removed from 19 rabbits and were assigned to two groups based on suture technique (Group 1, Strickland suture; Group 2, cruciate repair). The sutured tendons were subjected to constant progressive distraction using a universal testing machine (Kratos®). Based on data from the instrument, which were synchronized with the visualized gap at the suture site and at the time of suture rupture, the following data were obtained: maximum load to rupture, maximum deformation or gap, time elapsed until failure, and stiffness. RESULTS: In the statistical analysis, the data were parametric and unpaired, and by Kolmogorov-Smirnov test, the sample distribution was normal. By Student's t-test, there was no significant difference in any of the data: the cruciate repair sutures had slightly better mean stiffness, and the Strickland sutures had longer time-elapsed suture ruptures and higher average maximum deformation. CONCLUSIONS: The cruciate and Strickland techniques for flexor tendon sutures have similar mechanical characteristics in vitro.
& INTRODUCTION
Flexor tendon lesions have always been a challenge for hand surgeons. However, due to advances in materials and suture techniques, the functional results of flexor tendon tenorraphies have improved (1,2).
New suture techniques are designed to provide sufficient strength during early rehabilitation without increasing the incidence of premature suture rupture or the work required for flexion (3). The strength of a flexor tendon repair is proportional to the number of suture strands that cross the repair site (1,4); however, this biomechanical advantage occurs at the expense of increased suture volume and decreased vascularity of the tendon, resulting in worse clinical outcomes, increasing the incidence of adherence and increasing the requirement for secondary tenolysis (5).
Moreover, increasing the number of suture strands prolongs the time of repair and increases the difficulty; consequently, many surgeons prefer four-strand sutures (5,6). Studies of four-strand sutures have reported good strength, but unequal loads can occur when two knots are used because the knot itself is a weak point of the suture (7,8).
In recent articles on suture techniques, the cruciate technique has provided good tensile strength and has required greater force for failure and for the formation of gaps, without increasing the operative times (6,7,(9)(10)(11)(12)(13). The cruciate repair suture was first described by McLarney et al. (10) and was considered the ideal technique by James W. Strickland, possessing the mechanical strength of a fourstrand suture and technical simplicity of a two-strand suture (5,12). Although the cruciate technique provides better mechanical results in vitro, the Strickland technique remains one of the most widely used methods (14).
With regard to completing tendon repairs, the circumferential epitendinous suture increases the strength of the tendon suture by 10% to 50% and reduces the gap between the stumps of the tendons (1).
The objective of the present study was to compare different four-strand techniques, specifically the cruciate and Strickland sutures, both of which are reinforced by a continuous epitendinous suture in terms of the maximum load, maximum deformation, time elapsed until rupture and the stiffness of the sutures.
& MATERIALS AND METHODS
Nineteen male and female New Zealand albino rabbits, between 3,500 g and 3,900 g, were acquired from the vivarium of the Faculty of Medicine, University of São Paulo, and were maintained in a laboratory for musculoskeletal research. The University of São Paulo Ethics Committee for Animal Resources approved this animal study.
The animals were euthanized with sodium thiopental at 75 mg/kg intraperitoneally, as per instructions from the Brazilian College of Animal Experimentation (COBEA, 2007). Both Achilles tendons from each rabbit were harvested, and the skin was sutured. The tendons were prepared immediately for testing. The animals were disposed of at the Center of Biological Material, University of São Paulo.
The tendons were divided into two groups, each consisting of 19 experiments. Each tendon was randomly repaired with one of the techniques: Group 1 received a Strickland suture ( Figure 1); and Group 2 received a cruciate repair suture ( Figure 2). Both groups were reinforced with a circumferential, epitendinous, simple running suture.
Each tendon was sectioned into two parts with a number 15 scalpel using a straight transverse cut and was sutured according to the randomization of surgical techniques with a 4-0 Nylon suture and with the core suture placed 7 mm from the cut edge of the tendon. The circumferential, epitendinous, simple running suture was held with 6-0 Nylon and a core suture purchase of 2 mm.
The average cross-sectional area of the tendons in Group 1 was 15.87 mm² versus 15.65 mm² in Group 2. The groups were homogeneous with regard to cross-sectional area.
The repaired tendons were tested for failure by constant progressive distraction using a Kratos® universal testing machine, equipped with a load cell of 100 kgf and adjusted to a range of 10 kgf (accuracy of 10 gf). The tendon was fixed in the testing machine using two rectangular grasps with a trapezoidal profile; the distal end of the tendon was attached to a fixed section of the machine, and the proximal end was connected to the load cell in the movable part of the machine. The measurement system consisted of one mechanical linear actuator, and the load transducer connected to the proximal end of the Achilles tendon was connected to a computer, using the ADS2000 Lynx® data acquisition system. The force and displacement data measured by the system were registered.
To measure the gap between the cut edges of the tendon during mechanical testing, the tests were synchronized with a Sony DCR-HC26 digital camera. A two-point template with known distance was placed beside the tendon as a reference for the gap. Maximum deformation was calculated by setting the gap between the cut edges of the tendons at the time of suture rupture, measured in millimeters. The gap at the repair site was measured using a program that automatically identifies, calculates and records the gap in millimeters, based on the distance between points on the template as a reference.
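A minimal sketch of this template-based calibration step is given below; the paper does not report the program's internals, template distance, or any pixel values, so the numbers here are assumptions that only illustrate the pixel-to-millimeter conversion.

```python
# Assumed values for illustration only; the paper does not report the template
# distance or any pixel measurements.
TEMPLATE_DISTANCE_MM = 10.0   # known real-world distance between the two template points
template_distance_px = 240.0  # measured pixel distance between the template points in a frame
gap_px = 305.0                # pixel distance between the cut tendon edges at the repair site

# The template provides the scale factor that converts pixel distances to millimeters.
mm_per_px = TEMPLATE_DISTANCE_MM / template_distance_px
gap_mm = gap_px * mm_per_px
print(f"gap at the repair site: {gap_mm:.2f} mm")
```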
To ensure synchronization between the data from the machine-based tests and measurements from the computer, a light-emitting diode (LED) was placed in the visual field of a digital camera that lit up at the same instant that the computer started to acquire data from the testing machine. This synchronization of equipment allowed us to calculate in seconds the maximum time elapsed until the moment of suture rupture.
Based on data from the testing machine, a computer program calculated the maximum load at the time of suture rupture for each test, and the stiffness of the suture was obtained by dividing the maximum load by the maximum gap in Newtons per millimeter.
Ethics
The University of São Paulo Ethics Committee for Animal Resources approved this animal study.
Statistical analysis
In our statistical analysis, the data were parametric and unpaired. The sample distribution was normal, as assessed by the Kolmogorov-Smirnov test, and the variance was homogeneous by Levene's test. Student's t-test was employed for quantitative variables. Descriptive and inferential analyses were performed with SPSS software, version 17.0 for Windows.
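For readers who wish to reproduce this style of analysis, the sketch below follows the same sequence (normality check, variance homogeneity, then an unpaired two-tailed Student's t-test) using SciPy instead of SPSS; the load values are placeholders rather than the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder data (N): maximum load to failure for the two suture groups.
strickland = np.array([38.1, 41.5, 35.2, 44.0, 39.8, 36.7, 42.3, 40.1])
cruciate = np.array([39.4, 43.2, 37.8, 41.0, 38.5, 44.6, 40.9, 42.7])

# 1) Normality of each group (the paper used the Kolmogorov-Smirnov test).
for name, sample in (("Strickland", strickland), ("cruciate", cruciate)):
    ks = stats.kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))
    print(f"{name}: KS p = {ks.pvalue:.3f}")

# 2) Homogeneity of variance (Levene's test), then 3) an unpaired two-tailed t-test.
print(f"Levene p = {stats.levene(strickland, cruciate).pvalue:.3f}")
t_stat, p_value = stats.ttest_ind(strickland, cruciate, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3f} (significant if p < 0.05)")
```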
& RESULTS
Group 1 underwent tenorrhaphy by the Strickland method, as follows: the average maximum deformation or gapping of the cut edges of the tendons at the time of suture rupture was 12.68 mm (median 13.05 mm, SD 2.86 mm). Group 2, which underwent cruciate tenorrhaphy, had an average value of 11.74 mm (median 11.51 mm, SD 3.16 mm).
In Group 1, the average time that elapsed until the moment of suture rupture was 44.9 seconds (median 46.0 seconds, SD 9.8 seconds), compared with 40.2 seconds in Group 2 (median 38.9 seconds, SD 10.4 seconds).
In our statistical analysis, by Student's t-test (2 × 2 table), none of the parameters was statistically significant. The median maximum force that was required for suture rupture and the stiffness of the sutures were greater with cruciate repair (p = 0.94). The Strickland technique resulted in a higher median maximum deformation with a wider final gap (p = 0.36) and a longer time elapsing until the moment of suture rupture (p = 0.15). Boxplots for these values were generated for the samples (Figures 3 and 4).
& DISCUSSION
Various tendon sutures have been compared with regard to their techniques, materials and use of epitendinous sutures. The ideal suture, according to Strickland (1), must: 1) be easy to perform; 2) be reliable; 3) result in homogeneous coaptation of the cut edges of the tendon; 4) create a lower gap in the suture zone; 5) provide less interference with tendon vascularity; and 6) provide sufficient strength to facilitate early rehabilitation.
The ideal suture can be achieved through techniques with a higher number of strands that cross the repair site, which, however, can also lead to increased technical difficulty and more time to perform (13). Moreover, tendons can be injured, with impairments to vascularization, using techniques that use six or more strands (1). The most widely used techniques are the four-and six-strand methods, which are considered superior to two-strand techniques (6,15).
The epitendinous suture increases the resistance of the tendon by 10% to 50% and reduces the gap at the repair site of the tendon. In the present study, epitendinous suturing was performed in both groups, but its presence in mechanical tests hindered the visualization and evaluation of gap formation at the repair site, thereby generating a homogeneous suture and increasing early suture resistance (1).
Savage (16) suggested that the ideal suture should withstand a force at the repair site that is five times greater than the force necessary to actively move the tendon without resistance. Initial studies of the cruciate technique (17) demonstrated that it is capable of supporting strength beyond the physiological requirement for active movement.
In our study, both sutures attained a strength that exceeded 30 N, sufficient to allow for active rehabilitation protocols, per Viinikiainen et al. (18,19). Because the tests were performed in rabbit tendons in vitro, it was not possible to compare the values of human flexor tendons in the suture tests; these values should approximate one another, although rabbit tendons have less mechanical resistance and smaller diameters.
Another limitation of this experimental study, similar to all in vitro studies, was the inability to study the effects of postoperative edema, tendon resistance and gap formation during active movement (17).
In this study, we also observed that the cruciate repair suture technique was easier to perform and had a lower volume at the repair site, with one suture knot, and a more homogeneous suture, which was consistent with the literature (6,7,10). During the tests, the cruciate repair suture formed a more homogeneous graph of deformation versus resistance, whereas the Strickland suture had one of its knots rupture and rapidly lose resistance, which might be explained by the knot itself being a weak point of the suture.
Four-strand cruciate suture techniques are easier to perform, provide less interference with tendon gliding and are sufficiently strong for an early active motion protocol (15,17); in our study, however, there were no significant differences between the Strickland and cruciate repair techniques regarding the maximum load required to rupture the suture, the maximum deformation at failure or the stiffness, which can be explained by the number of tests that were performed with each technique. The cruciate repair suture also had a lower tendency toward gap formation, which will be evaluated in future studies.
Croog et al. (6) studied various configurations of the cruciate repair suture and noted that the cross lock increased the overall resistance, as well as the resistance to gap formation (20). Based on our observation that the simple cruciate repair suture had similar resistance compared with the Strickland technique, we recommend using the cross lock cruciate repair suture, which improves suture strength without significantly increasing the technical difficulty.
Hand surgeons should aim to simplify tendon sutures and to maintain the suture strength without increasing the technical difficulty (21). Thus, we advocate the cruciate repair suture technique, which yielded results comparable to the Strickland method in our study, in addition to its reported advantages. The technical benefits of the cruciate repair suture under clinical conditions could generate a lower coefficient of friction, thereby reducing failures, which we did not address in our experiments (5,6,7,9,10).
The cruciate repair suture is similar to the Strickland method with regard to the maximum load and the stiffness to suture rupture. Further studies should be conducted to investigate the clinical results of these techniques.
& ACKNOWLEDGMENTS
The authors thank Cesar A. M. Pereira for helping to perform the mechanical tests in the Laboratory of Musculoskeletal Research (University of São Paulo).
& AUTHOR CONTRIBUTIONS
Iamaguchi RB contributed to the study design, data collection, assessment of the results, statistical analyses and manuscript preparation. Villani W and Santos GB contributed to the data collection. Rezende MR, Wei TH and Cho AB were responsible for the critical revision. Mattar R supervised the study. | 2017-06-10T20:08:19.337Z | 2013-12-01T00:00:00.000 | {
"year": 2013,
"sha1": "f7484797afeecaeb44aa46e806ebb4faf2b8027f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.6061/clinics/2013(12)11",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7484797afeecaeb44aa46e806ebb4faf2b8027f",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235459416 | pes2o/s2orc | v3-fos-license | Eastern-medicine doctors in 1910s Korea integrating Western medicine on their own terms
Doctors and researchers in the early twenty-first century usually understand integrative medicine as involving a process in which older therapies such as herbal medicine earn validation through applying biomedical analysis to determine efficacy. Eastern-medicine doctors in Korea in the early-twentieth century, however, chose not to accept the hegemony of biomedicine epistemology. Instead, they adopted a modus operandi in which Western medicine could complement Eastern medicine. They continued to insist on the primacy of their own conceptual frameworks based on long historical and practical clinical experience. This article highlights representative discussions in the 1910s journals, East-West Medicine News 1 and the Joseon Medicine World 2 in which anonymous doctors discussed how they understood the relationship between Eastern and Western Medicine. The case studies here show the typical way in which Eastern-medicine doctors understood Western medicine, not only in the 1910s but in fact throughout the entire colonial period in Korea from 1910 to 1945. After a discussion on doctors' interpretations of Western medicine, the article discusses two clinical cases, hematemesis, and wasting and thirsting. Finally, it examines how Eastern-medicine doctors understood their own diagnostic methods vis-à-vis Western-medicine instruments.
Incipient bilingualism in medicine
To make clear for the readers, we know little of the journals' contributors because the editors and authors remained mostly anonymous in the 1910s. Editorials were attributed to the newspaper while many articles were printed without attribution. No authors' names were mentioned, for instance, in the first five volumes of the East-West Medicine News.
In 1919, one author explained, "We accept Western medicine disease analysis, but in the clinic, we apply our Eastern medical concepts." 1 Arguably, the author meant that Western-medicine diagnosis helped to inform the physician of the patient's condition. Having understood the diagnosis, the physician would nevertheless prioritize the Eastern-medicine diagnosis over the Western medical concept. The journal authors did not reject Western medicine, but rather drew on a dual conceptual understanding. As with many discussions on pathology and specific diseases throughout the volumes of all the journals, the authors integrate Western and Eastern medical concepts. For the purpose of analysis, we may say that the physicians' seemingly comfortable alternating between the two concepts demonstrates a form of medical bilingualism.
The historian Sean Lei's analysis of China arguably provides a counter-example to the professed comfort in Korea with the coexistence of the two concepts. 3 Lei shows that in 1920's and 1930's China, Chinese medicine physicians grappled with the anxiety caused by apparent incommensurability between some Chinese medical concepts and some Western medical concepts. However, Eastern-medicine physicians in Korea displayed little apparent anxiety with incommensurability, but instead argued for the validity of both systems operating in tandem.
Marta Hanson 4 calls medical bilingualism the ability not only to read in two different medical languages, but also to understand their historical and conceptual differences. Thus, in "accepting Western medicine disease analysis," while at the same time applying "Eastern medical concepts," the Korean Eastern-medicine physicians were arguably practicing medical bilingualism.
Chemistry not substitute for, but rather, complement to Eastern Medicine theory
For example, a discussion of Cold Damage by Zhang Zhongjing (150-219) claims that the theories of the past can be corroborated by modern chemistry. 5 The author of East-West Medicine News reasons that chemistry explains the traditional concepts of generation, production, and transformation of ki (Chinese: qi ). 2 In Eastern medical theory, we may take the example of Greater yang syndrome from Cold Damage theory. If a patient has Greater yang syndrome, they have maximum yang . With such an extremity of yang , the Heavenly yang consumes the Earthly eum (Chinese: yin ) in the form of water. With yang consuming water, the patient's condition will thus transform from excess to deficiency. We can trace this transformation by feeling the patient's pulse. 2 To interpret the theorization in this passage, it is most important to know that yang corresponded with fire, and eum corresponded with water. Fire in the body may manifest in numerous ways, but a feeling of heat is a common example. In the case of Greater yang , as it corresponds to maximum yang , there may be an intense feeling of heat that rises to the head and affects the body surfaces, as in the skin and muscles. Furthermore, yang corresponded to excess and eum to deficiency. In the case of Greater yang , extreme heat consumes water, and since water is necessary in the body, with water's diminishment, the person's body weakens, which means the person becomes deficient. The physician would then "intervene with herbal prescriptions to rebalance the fire ( yang ) and water ( eum )." 2 Having explained an aspect of Eastern medical theory, the author then made a comparison with Western medical theory: In Western chemistry, we know that air contains water. We know that fire evaporates water. Also, we can produce hydropower through heating water. Thus humans can work with the relationship of heat and water to produce energy. We can do it mechanically. However, this phenomenon resonates with the relationship of Heavenly Fire and Earthly Water in the human body. Heat transforms water, and also consumes energy. 2 Here, the author argued that Western scientists, who harnessed the study of the properties of substances and how they interact with each other to create energy, were expressing ideas familiar to Eastern-medicine physicians. According to the author, Eastern medical concepts based on the physician's careful balancing of eum -yang in the human body, and manifested through the five agents, were concerned with the management of energy, in the form of ki . In short, both East and West shared the concept of the importance of the interrelationship of fire and water in explaining energy. The author did not claim equivalence of East and West, but argued there was complementarity in using similar concepts to explain the workings of the human body on the Eastern medicine side and in chemistry on the side of Western science. Thus, unlike the Chinese-medicine physicians who aimed to "scientize" their medicine by incorporating Western science in China during roughly the same period, the Korean authors believed that science, in fact, validated their own medical ideas.
How historians of medicine interpret diseases of the past
The journals show that the Eastern-medicine physicians interpreted Western science to valorize their own medicine, and to justify the insistence on continuing to use their own terminology such as the concepts of ki, eum -yang, five agents (aka five elements), and so on. The historian Adrian Wilson's analysis of the historicity of disease concepts helps in considering the problematic of using modern science to understand diseases of the past. 6 In his study of pleurisy, he identifies two methodological approaches among historians of medicine who analyze diseases: 1. The historicalist-conceptualist approach considers disease concepts as objects of historical study. In this approach, disease changes meaning over time according to socio-historical context. 2. The naturalist-realist approach excludes disease concepts from historical investigation since it considers modern disease concepts as the mirror of natural reality. This means that the modern disease concept is extended backwards in time, and is conceived as an unchanging discrete entity. Randall Packard, 7 in his study of a disease outbreak in Philadelphia in 1780, adds to the debate on the historicity of disease concepts by declaring that both approaches are attractive and important to follow in historical scholarship. He argues that the different framing of questions results in different answers that, combined, contribute to a more complete historical understanding of epidemics and human responses to them. This article mostly takes the historicalist-conceptualist approach, but, following Packard, also sees the value in the naturalist-realist approach. The historicalist-conceptualist approach helps scholars to take seriously the Korean authors on their own terms, but it is also useful to accept their use of Western disease concepts in their integrated East-West approach to medical reasoning at that time.
Following the general-policy editorial in East-West Medicine News in 1916, the subsequent section in the journal explains the type of content readers will find in the first Volume. 2 Aiming for comprehensive coverage of the broad field of Eastern medicine, there are articles on external medicine, acupuncture, diagnosis and treatment of discrete disease categories, and herbal medicine. 2 Two representative examples can help to illustrate the type of reasoning that the physicians employed in attempting to integrate Eastern and Western medical ideas underneath the essentially Eastern-medicine umbrella.
Hematemesis
First, in an article titled "Discussion on Blood Diseases," the author focuses on hematemesis (vomiting blood). 1 Western medicine explains the physiological and chemical reasons for hematemesis. We accept this type of analysis, and agree it is useful. However, in our clinical practice we still apply our Eastern medical concepts. For example, we will diagnose whether the hematemesis is caused by, for example, liver wood attacking the stomach, or an issue with turbidity and overcoming the clear ki . Or if the patient has a headache, it might be a lesser yang problem. In that case, there would be a liver fire problem. We [then would] need to give therapy to clear wind. 1 This representative passage demonstrates that the authors were thinking in terms of the individual patient's overall condition, rather than only a symptom or a disease. The discussion here clarifies that Eastern medicine physicians prioritized individualized diagnosis over the mechanical phenomenon of bleeding. The cause of bleeding is attributed to a patient's particular imbalance of ki and among the five agents of wood, fire, earth, metal, and water. The healing approach, therefore, is to rebalance the individual patient's ki rather than to simply stop the bleeding. In the example of the patient with a headache, the author suggests that the physician needs to refine his or her diagnosis to ascertain the cause. For example, in the five agents concept, a lesser yang syndrome corresponds to the liver area and so in turn wood and wind. In this conceptual system, therefore, wind as a pathogenic factor is considered to have caused the headache. The physician would then prescribe a treatment to clear wind from the patient's body and thereby also clear the headache.
Wasting and thirsting
A discussion on "wasting and thirsting" serves as the second example. 1 In Western medicine, there is an identified disease called diabetes. We accept this concept. However, we believe that Eastern medicine has the best therapy. For example, we prescribe herbs such as magnolia berries and ophiopogonis. Also, Bamboo Leaf and Gypsum Decoction is an excellent prescription for wasting and thirsting. 8 In sum, the overarching argument in the above examples is that Western medical concepts, such as physiology and biochemistry, have merit and should be understood. However, Eastern medical concepts, in terms of diagnosis and therapy, are the most efficacious. The author insists that older concepts such as the turbid and clear ki continue to be used as concepts in diagnosis.
The term used by Eastern-medicine physicians, "wasting and thirsting," refers to a condition where the patient suffers from significant loss of weight in conjunction with unquenchable thirst. The author also refers to the Western disease concept of diabetes, suggesting a one-to-one correspondence between the two terms. 9 Whereas diabetes involved pathology of the endocrine system related to an imbalance in newly measurable blood sugar levels, the Korean disease pattern of "wasting and thirsting" was diagnosed through the two primary symptoms. Even though conceptualized differently, there was clear overlap between the two disease concepts, since a typical diabetic patient also suffers from weight loss and excessive thirst. Since not until 1921 could Western-medicine physicians offer insulin for diabetic patients, the author's claim in 1916 of Eastern medicine's superiority for this condition most likely had a concrete basis from clinical experience in the absence of anything more effective from Western medical options in the same period. At the time of writing, Bamboo Leaf and Gypsum Decoction was an example of an efficacious therapy for a patient presenting with a "wasting and thirsting" pattern. In this early twentieth-century context, the Eastern-medicine physicians did not accept assessments about the inferiority of their own medical knowledge. The diabetes/wasting and thirsting example above illustrates one way in which Western-medicine physicians' claims of superiority did not convince contemporary Eastern-medicine physicians. As the medical records show, even though Western-medicine doctors began to prescribe insulin in 1921, nevertheless, Eastern-medicine doctors continued to treat patients who suffered from wasting and thirsting. The advent of Western medicine did not supplant Eastern-medicine therapies.
Eastern-medicine doctors validate the use of the human senses to diagnose patients
Having analyzed some of the differences in approach to disease concepts, the Eastern-medicine physicians also questioned Western-medicine physicians' assumption of the superiority of using instruments in making diagnoses. A representative editorial summarizes the overall thinking regarding comparative methods of diagnosis. 2 The editor began by asking whether physicians needed to change their diagnostic methods.
Nothing surpasses the four diagnostic methods of looking , listening [and smelling], asking , and feeling the pulse in subtlety and refinement. Of the four methods, the last-pulse diagnosis is the most crucial… We should continue with that method. But now we have immature Western medicine with its cellular biology, germ theory, and its emphasis on physical anatomy. Western medicine has the concept of relying on instruments to make diagnoses. It's actually not that much different to our four methods. Western medicine has a very similar approach, with looking, listening, smelling, and percussing. The difference is the reliance on instruments, such as the thermometer, the stethoscope, and the microscope. In Western medicine, there is still the emphasis on seeking observable signs, as we do. But here, we should accept the merits and convenience in using instruments. For example, it is useful if we identify bacteria through a microscope. Such instruments are cheap. While using our diagnostic methods, we should also use instruments such as stethoscopes. 2 The argument here is on the effectiveness of the traditional four methods of diagnosis -namely, 1) looking, 2) listening and smelling, 3) asking, and 4) feeling the pulse -while supplementing with Western instruments when they are useful. The author argues that Western medical diagnostic instruments have their merits such as the stethoscope for percussion and the microscope for identifying bacteria. However, these new medical instruments are more or less aids that are not able to reach the accuracy of "the most crucial" pulse diagnosis and the four diagnostic methods as a whole.
Having accepted that there are differences in diagnostic methods between Eastern and Western medicine as well as in use of new diagnostic instruments, it is important to note that the authors recognized similarities between the two medical systems. The passage above states, for instance, "It's actually not that much different to our four methods. Western medicine has a very similar approach, with looking, listening, probing (with instruments), and touching." The issue here was that the traditional four methods of diagnosis measure different things than did instruments. The four methods ascertain qualities such as ki and the five agents. Specifically, physicians felt the pulse to ascertain many aspects of a patient that Western medical instruments could not detect. For example, the pulse could give information on all parts of the body, and on many parameters of the body's function. 10 For example, by feeling the pulse, a skilled physician could detect a urinary problem, a lung problem, and a headache all at once. The reasoning was that the physician could feel many permutations of the patient's ki , by judging the pulse qualities. Instruments, on the other hand measured specific parts of the body, such as stethoscopes for lungs. Demonstrating medical bilingualism in practice, however, the physicians in Korea argued for both methods' benefits.
Western medicine as an adjuvant to Eastern medicine
While arguing for Eastern medicine's relevance and clinical effectiveness, the Eastern-medicine physicians in Korea in the 1910s perceived similarities with Western medicine but also drew on their own historical imagining to position Eastern medicine as part of a world medicine. After all, as the doctors pointed out, what they imagined as the East and the West were historically connected with regards to knowledge flows. Arguing that Eastern medicine may be judged on its own terms neither invalidates Western medicine nor concedes hegemonic privilege to it. These Korean doctors' analyses offer insights into possibilities, for even today, of conceptual convergence more than binary difference. Rather than believing that Western medicine would replace, or seriously challenge, Eastern medicine, they saw it as an alternative way of interpreting the body and disease. In other words, Western medicine could complement and act as adjuvant to Eastern medicine. Such confidence in Eastern medicine in the 1910s stands out as an unusual case of colonial medicine and differentiates Korea from both Japan and China where East Asian medicine came under serious challenge from Western-medicine doctors and intellectuals. This self-belief helps to explain why Korean medicine not only survives, but also flourishes in South Korea into the early twenty-first century. | 2021-06-18T05:16:55.728Z | 2021-05-18T00:00:00.000 | {
"year": 2021,
"sha1": "01ed18397017ed1c1c633909c7df24652463dc42",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.imr.2021.100730",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01ed18397017ed1c1c633909c7df24652463dc42",
"s2fieldsofstudy": [
"History",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236913124 | pes2o/s2orc | v3-fos-license | Campylobacter rectus Infection Leads to Lung Abscess: A Case Report and Literature Review
Background Campylobacter rectus is one of the anaerobic bacteria in the mouth. Case Presentation We report the case of a 73-year-old man admitted for lung abscess caused by Campylobacter rectus with unique manifestations under electronic bronchoscopy, and the pathogen is first reported to be confirmed by metagenomic next-generation sequencing (mNGS) through testing bronchoalveolar lavage fluid. Conclusion Sometimes, Campylobacter rectus can cause infection outside the mouth such as lung abscess. Most patients have good outcomes.
and Periodic Acid-Schiff (PAS) staining were both negative. Both the traditional culture (aerobic and anaerobic) and the galactomannan (GM) test of bronchoalveolar lavage fluid were negative. Surprisingly, the mNGS of bronchoalveolar lavage fluid revealed 4415 sequences of Campylobacter rectus and 1091 sequences of Parvimonas micra.
Empirical antimicrobial therapy commenced immediately with intravenous tazobactam/piperacillin (4.5 g three times daily) and ornidazole (500 mg twice daily) from the first day in the hospital. One week later, when we obtained the result of mNGS, etimicin (300 mg once daily) was added to enhance treatment against Gram-negative bacteria. Another two weeks later, the patient's diarrhea was considered to be due to an imbalance of the intestinal flora induced by the long-term extensive use of broad-spectrum antibiotics. Therefore, the anti-infective treatment regimen was reduced to etimicin only. The patient was hospitalized for one month. The CT re-examination suggested that the area of infection in the inferior lobe of the left lung was significantly reduced and that the cavity was smaller (Figure 1B). After he was discharged from hospital, he was treated with oral levofloxacin (0.5 g once daily) for four months. The condition of the lung improved further (Figure 1C).
Discussion
Campylobacter rectus is a Gram-negative, non-spore-forming bacterium that can be cultured under microaerobic or anaerobic conditions. Its colonies are translucent, rough, flat, and non-hemolytic. The morphology of Campylobacter rectus is straight rod-shaped, arcuate, or S-shaped. Urease and oxidase tests are both negative.
Campylobacter rectus is part of the oral colonizing flora. In 2007, a large study involving 1294 healthy adults in southern Finland found that Campylobacter rectus was detected in the saliva of 31.3% of them. 2 Sometimes it can cause infections outside the mouth, but the reasons are not completely clear. Table 1 summarizes the data of 20 cases (including this case) retrieved from the literature. The age of the patients ranged from 10 months to 75 years. Most of the patients were 50 to 70 years old (12/20); 55% of patients (11/20) had dental caries, periodontitis, poor oral hygiene, or other oral risk factors, and 15% (3/20) had a history of malignant tumor. The site of infection varied, including empyema, brain abscess, osteomyelitis, etc. In terms of prognosis, only two patients unfortunately died, while the remaining patients (18/20) were discharged after effective anti-infection treatment, puncture, or incisional drainage; the success rate of the comprehensive treatment was 90%. Pathogens can be identified in a variety of ways, including traditional culture, 16S rRNA gene sequencing, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, and mNGS. The duration of anti-infective therapy for Campylobacter rectus varied from 23 days to 6 months, excluding the 2 deaths.
Electronic bronchoscopy, as a routine technique for respiratory infections, plays an important role in the diagnosis and treatment of respiratory diseases. This technique can detect early abnormalities in the lumen that might not be found by CT scanning. At the same time, samples can be taken for the corresponding tests. In this case, a large amount of white necrotic material was found in the bronchial lumen at the lesion site, blocking the lumen and attaching to the wall. This is the first reported case describing the appearance of a lung abscess caused by Campylobacter rectus under the electronic bronchoscope.
There is little literature on the anti-infective treatment of Campylobacter rectus. In 2002, an Italian study of periodontal anaerobes that can cause systemic infection found that Campylobacter rectus is sensitive to a variety of antibiotics, such as penicillin, amoxycillin/clavulanate, and cefoxitin, with the exception of moxifloxacin. 5 In 2007, another Italian study on the antimicrobial susceptibility of oral microorganisms also confirmed that Campylobacter rectus was sensitive to multiple antibiotics, and none of the seven groups of samples produced β-lactamase. 6 In a 2020 study on the antimicrobial resistance of this bacterium, Rams et al from the Netherlands investigated the in vitro resistance of periodontal pathogens to four antibiotics and found no resistance in Campylobacter rectus. 7
Conclusion
Campylobacter rectus is an oral colonizing bacterium which can cause infection outside the mouth. Most patients have a good outcome. In this case, a characteristic pattern of white necrotic material was observed in the bronchial lumen. Metagenomic next-generation sequencing is one of the rapid diagnostic methods.
Data Sharing Statement
All raw data in the manuscript has been uploaded to the submission system.
Ethics Approval and Consent to Participate
The study has been approved by the Independent Ethics Committee of Nanjing Tongren Hospital (Approval No: TRLLKY2020013.1). We obtained the patient's consent and signed the informed consent.
Consent for Publication
The manuscript is approved by all authors for publication.
Patient Consent
The patient provided written informed consent for the case details and accompanying images to be published. | 2021-08-05T05:31:47.052Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "be00a2acec5992fb474244616cc97000ace6a4cb",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=72134",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be00a2acec5992fb474244616cc97000ace6a4cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54199967 | pes2o/s2orc | v3-fos-license | The Use of Molecular and Imaging Biomarkers in Lung Cancer Risk Prediction
High-dimensional genomics, genetics and proteomics techniques have been widely used in cancer research for over two decades. Correspondingly, various genomic, genetic and proteomic signatures have been discovered for cancer diagnosis, prognosis and prediction. For instance, over the last decade, considerable effort and resources have been devoted to characterizing the genomic, genetic, and proteomic profiles of lung cancers [1-3]. These studies can enable us to gain a deep understanding of the molecular heterogeneity of this disease and help create new therapeutic targets that will facilitate personalized targeted therapy. Now, as non-invasive medical imaging technologies are likely to become routine in screening high-risk populations, the use of imaging features may greatly assist therapy guidance and the monitoring of the development and progression of lung cancer and its response to treatment. Similar to other -omics technologies, radiomics refers to the high-throughput extraction and analysis of a large number of quantitative features from advanced medical images with the assistance of computer science, and can provide a comprehensive quantification of the tumor phenotype [4-6].
Introduction
High-dimensional genomics, genetics and proteomics techniques have been widely used in cancer research for over two decades. Correspondingly, various genomic, genetic and proteomic signatures have been discovered for cancer diagnosis, prognosis and prediction. For instance, over the last decade, considerable effort and resources have been devoted to characterizing the genomic, genetic, and proteomic profiles of lung cancers [1][2][3]. These studies can enable us to gain a deep understanding of the molecular heterogeneity of this disease and help create new therapeutic targets that will facilitate personalized targeted therapy. Now, as non-invasive medical imaging technologies are likely to become routine in screening high-risk populations, the use of imaging features may greatly assist therapy guidance and the monitoring of the development and progression of lung cancer and its response to treatment. Similar to other -omics technologies, radiomics refers to the high-throughput extraction and analysis of a large number of quantitative features from advanced medical images with the assistance of computer science, and can provide a comprehensive quantification of the tumor phenotype [4][5][6].
Studies that Collect Molecular and Imaging Features
The National Lung Screening Trial (NLST) was a randomized screening trial that accrued over 53,000 older smokers to compare low-dose helical computed tomography (CT) with chest X-ray screening in reducing lung cancer mortality. Half of the accrued participants (about 26,000) underwent at least one CT screen. In addition, about 10,000 participants consented to have their specimens collected for the development of the NLST biorepository for lung cancer biomarker validation research. The NLST study has shown that, compared to chest X-ray, low-dose helical CT can reduce lung cancer mortality by 20% [7].
More recently, the combination of molecular findings with image-based features of lung cancer on chest CT has emerged as a new tool that can potentially impact both the diagnostic and prognostic spaces [8,9]. A few prospective studies have been developed in this regard. The Detection of Early Lung Cancer Among Military Personnel (DECAMP) consortium is an ongoing multidisciplinary and translational research program that was funded by the DoD to study the diagnostic ability of a number of developed molecular biomarkers, including one genomic biomarker measured in bronchial airway brushings, two proteomic biomarkers measured in bronchial airway biopsies or serum, and one cytokine biomarker measured in serum. The consortium aims to enroll 500 heavy smokers with indeterminate pulmonary nodules (ranging from 0.7 cm to 3.0 cm) on chest CT from 7 VA hospitals and 4 designated Military Treatment Facilities (and also one academic hospital). The research team of the consortium includes several molecular laboratories and the Biostatistics, Bioinformatics, and Biorepository cores. In addition to its primary endpoint, an important aim of this study is to develop models that can combine the features from demographic, clinical, radiographic, and molecular sources to predict the risk of lung cancer [10,11].
Recently, a few grants were awarded by the NCI to create a consortium that studies the molecular characterization of screen-detected lesions, including the domains of prostate cancer, lung cancer, breast cancer, and pancreatic cancer. The consortium has seven molecular characterization laboratories (MCLs) and a coordinating center, and is supported by the Division of Cancer Prevention and the Division of Cancer Biology [12]. In the context of lung cancer, the aim is to seek evidence that screening will detect a class of non-aggressive tumors, which is different from the tumors detected in patients with symptoms. For this purpose, the study team will characterize the mutational status, RNA expression profiles, tumor microenvironment, and imaging-related features in these screen-detected tumors. One of the key questions is how to integrate the feature data from various sources to develop a composite model that can assist the prediction of lung cancer risk.
Method of Integrating Molecular and Imaging Biomarkers
In these studies, various types of biomarkers will be collected from various platforms, e.g., demographics, clinical practice, molecular assays, imaging modalities, and so on. Many methods can be used to analyze and integrate these biomarker data. Unsupervised clustering analysis can be conducted to group biomarkers in discrimination analysis and allows us to obtain an assessment of the overall relationship among them. Specifically, clustering analysis can be used to determine the possible clusters formed from these platforms and then characterize each cluster based on different biomarkers [13][14][15]. Depending on the type of outcome, logistic regression and Cox proportional hazards regression will usually be used to model these biomarkers in the integration analysis. When there are too many biomarkers, regularized regression techniques (such as the LASSO) are often used to reduce dimensionality [16].
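As an illustration of such an integration analysis, the following is a minimal sketch in Python; the synthetic feature blocks stand in for a real combined demographic/molecular/imaging table, and the penalty strength is an arbitrary choice rather than a recommendation from the studies above.

```python
# Minimal sketch: L1-penalized (LASSO-type) logistic regression to integrate
# demographic, molecular, and imaging features for a binary cancer outcome.
# The synthetic feature matrix is a placeholder for a real biomarker table.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
X = np.hstack([
    rng.normal(size=(n, 3)),    # demographic/clinical features (age, pack-years, ...)
    rng.normal(size=(n, 20)),   # molecular biomarkers
    rng.normal(size=(n, 10)),   # imaging (radiomic) features
])
# outcome driven by a few informative columns across the blocks; the rest are noise
y = (0.8 * X[:, 0] + 1.2 * X[:, 3] - 0.9 * X[:, 25] + rng.normal(size=n) > 0).astype(int)

# The L1 penalty shrinks uninformative biomarker coefficients to exactly zero,
# reducing dimensionality while fitting the risk model.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f (+/- %.3f)" % (auc.mean(), auc.std()))
```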
Challenging Issues in the Integration of Molecular and Imaging Biomarkers
One essential aim of risk predictive modeling is to predict the outcome of new subjects. For this, the biggest challenge is how to avoid overfitting, i.e., a model that fits the training set well but performs poorly in the validation set. This is particularly true when building a complex risk prediction model that includes too many biomarkers. Overfitting causes optimism about a model's performance in new subjects and will greatly limit the model's capacity to generalize. Here, we recommend bootstrap resampling to evaluate a model's optimism-corrected performance, which repeatedly draws samples with replacement from the original sample to fit the model and then evaluates the model's performance in the original sample. A detailed calculation procedure can be found in Chapter 5 of Steyerberg's book [17]. Of course, the best approach to evaluating a risk prediction model's performance is to design a new prospective study and test the model's performance there, e.g., the ongoing DECAMP study. Then the model's accuracy can be independently assessed by sensitivity, specificity, and AUC in the ROC approach.
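The following is a minimal sketch of the bootstrap optimism correction described above, applied to the AUC of a logistic risk model; the toy data, the number of bootstrap replicates, and the model choice are illustrative assumptions rather than the analysis of any of the studies cited here.

```python
# Minimal sketch of the bootstrap optimism correction for AUC:
# fit on a bootstrap resample, compare its apparent AUC with its AUC on the
# original sample, and subtract the average optimism from the apparent AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, B = 200, 200
X = rng.normal(size=(n, 10))                       # toy biomarker matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

def fit_auc(Xtr, ytr, Xev, yev):
    m = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yev, m.predict_proba(Xev)[:, 1])

apparent = fit_auc(X, y, X, y)
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)                    # draw with replacement
    Xb, yb = X[idx], y[idx]
    if len(np.unique(yb)) < 2:
        continue                                   # skip degenerate resamples
    optimism.append(fit_auc(Xb, yb, Xb, yb) - fit_auc(Xb, yb, X, y))

corrected = apparent - float(np.mean(optimism))
print("apparent AUC %.3f, optimism-corrected AUC %.3f" % (apparent, corrected))
```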
"year": 2016,
"sha1": "ecc26a630a4c5cbed3b79465a3ca427012c67e28",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/the-use-of-molecular-and-imaging-biomarkers-in-lung-cancer-riskprediction-2155-6180-1000299.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ea4e17df87e56ab52f1786aed295ca9b04e95ad7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245629919 | pes2o/s2orc | v3-fos-license | Computing Simplicial Depth by Using Importance Sampling Algorithm and Its Application
Simplicial depth (SD) plays an important role in discriminant analysis, hypothesis testing, machine learning, and engineering computations. However, the computation of simplicial depth is hugely challenging because the exact algorithm is an NP problem with dimension d and sample size n as input arguments. The approximate algorithm for simplicial depth computation has extremely low efficiency, especially in high-dimensional cases. In this study, we design an importance sampling algorithm for the computation of simplicial depth. As an advanced Monte Carlo method, the proposed algorithm outperforms other approximate and exact algorithms in accuracy and efficiency, as shown by simulated and real data experiments. Furthermore, we illustrate the robustness of simplicial depth in regression analysis through a concrete physical data experiment.
Introduction
With the development of computer technology and multivariate statistical analysis, scientists deal with a large amount of multidimensional data in many fields, such as biogenetics and industrial engineering. The demand for multivariate data analysis tools has become increasingly urgent. As a powerful multivariate nonparametric and robust statistical tool, the statistical depth function extends the concept of one-dimensional data order statistics and provides the central-outward sorting of multivariate data [1][2][3][4]. In recent years, the interest of researchers in statistical depth has increased due to the extensive application of the statistical depth function in multivariate statistical analysis, robust estimation, discriminant analysis, hypothesis testing, machine learning, economics, and hydrological data analysis [5,6]. The first statistical depth function concept, which was proposed by Tukey in 1975, is known as the halfspace depth (also known as the Tukey depth) [7][8][9]. The other concepts of the statistical depth function include projection depth [3,10], simplicial depth (SD) [11,12], and regression depth [13,14]. Zuo and Serfling defined a general structural property of the statistical depth function [1]. Among the many concepts of this statistical depth function, SD is a relatively attractive one not only because of its simple form and ability to achieve the maximum depth value in the center and satisfy monotonicity but also because of its important applications in sign test and centralization test [1,12].
However, the computation of SD is complicated. The exact calculation of SD is an NP problem, which is only feasible when the dimension is no higher than three. Serfling and Wang emphasized that the computation of SD for higher-dimensional data still requires further study [12]. The computation and application of the statistical depth function are active research topics.
Similarly, Monte Carlo (MC) methods have become important statistical and computational tools that are widely used in finance, engineering computation, genetic biology, computational chemistry, and other related fields [15][16][17][18]. As a critical MC strategy, the importance sampling (IS) method concentrates most of the test samples in the important area of the objective function by introducing the transfer probability density function [15,19]. This method dramatically improves the computational efficiency and is an important MC acceleration algorithm. In this study, we apply an efficient IS algorithm to the approximate computation of SD and demonstrate the advantages of such an algorithm over other MC methods and exact algorithms through simulated and real data examples. Furthermore, we extend the SD to regression analysis and obtain a robust regression estimator. The results of a real physical data experiment show that the estimation based on the SD method is more robust than that based on the traditional least squares (LS) method. The remainder of this paper is organized as follows. In Section 2, we review the preliminary concept and existing algorithms for SD. Section 3 describes the IS algorithm used for the computation of SD. The advantages of the IS algorithm are illustrated through simulated data examples in Section 4. The extension of SD to regression analysis and a real data experiment are presented in Section 5. Lastly, the conclusions are provided in Section 6.
Preliminary of SD and the State of the Art
In this section, we present the preliminaries of SD and the existing algorithms for its computation.
Consider a sample set X^n = {X_1, ..., X_n} of size n in R^d, and let x be a given point in R^d. The sample version [11] of the SD of x with respect to the sample set X^n is expressed as

$$SD(x, X^n) = \binom{n}{d+1}^{-1} \sum_{1 \le i_1 < \cdots < i_{d+1} \le n} \mathbf{1}\left\{x \in S\left[X_{i_1}, \ldots, X_{i_{d+1}}\right]\right\}, \qquad (1)$$

where $\mathbf{1}\{A\}$ denotes the indicator function of event A, and $S[X_{i_1}, \ldots, X_{i_{d+1}}]$ denotes the simplex determined by the d + 1 sample points $X_{i_1}, \ldots, X_{i_{d+1}}$.
Serfling and Wang stated that no algorithms are faster than simply generating all simplices and counting the ones enclosing the given point (using O(n^{d+1}) computing time) when the dimension d ≥ 5 [12]. Therefore, designing an efficient approximate algorithm for the computation of SD is necessary.
A direct MC method for the computation of SD contains two steps: (1) randomly selecting d + 1 points from X^n and then (2) averaging the indicator that the selected points enclose the given point x (i.e., using the estimate $\widehat{SD}(x, X^n)$ to approximate the true SD value $SD(x, X^n)$):

$$\widehat{SD}(x, X^n) = \frac{1}{M} \sum_{m=1}^{M} \mathbf{1}\left\{x \in S\left[X_{i_1}, \ldots, X_{i_{d+1}}\right]\right\}, \qquad (2)$$

where $X_{i_1}, \ldots, X_{i_{d+1}}$ are randomly chosen from X^n in each of the M tries and M is the trying number for the estimation. Another approach for the computation of SD is the use of the IS algorithm, which is the method proposed in this study. The computation of SD is an expectation computation; therefore, SD can be estimated by the IS algorithm. The simple MC method uses randomly selected d + 1 points to estimate the SD, whereas the IS approach selects d + 1 points with a high probability that they contain the given point x. Theoretically, the results of the latter will have a smaller variance than those of the former. The simulated data examples in Section 4 illustrate the advantage of the IS algorithm over the MC method.
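A minimal sketch of the direct MC estimator in (2) is given below (in Python). The containment test solves for the barycentric coordinates of x in each randomly drawn simplex; the sample size, the dimension, and the trying number M are arbitrary illustrative choices.

```python
# Minimal sketch of the direct MC estimator of simplicial depth in (2):
# draw M random (d+1)-point subsets and average the containment indicator.
import numpy as np

def in_simplex(x, vertices, tol=1e-9):
    """Return True if x lies in the simplex spanned by the (d+1) vertices."""
    d = len(x)
    A = np.vstack([vertices.T, np.ones(d + 1)])     # (d+1) x (d+1) system
    b = np.append(x, 1.0)
    try:
        lam = np.linalg.solve(A, b)                 # barycentric coordinates
    except np.linalg.LinAlgError:
        return False                                # degenerate simplex
    return bool(np.all(lam >= -tol))

def sd_monte_carlo(x, X, M=1000, rng=None):
    """Plain MC estimate of SD(x, X): fraction of random simplices containing x."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    hits = 0
    for _ in range(M):
        idx = rng.choice(n, size=d + 1, replace=False)
        hits += in_simplex(x, X[idx])
    return hits / M

# toy example: 100 bivariate normal points, depth of the origin
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
print(sd_monte_carlo(np.zeros(2), X, M=2000, rng=2))
```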
New Algorithm for SD in R^d
3.1. Overview of the IS Algorithm. Many engineering problems can be expressed as computations of a multidimensional integral. Using the MC method to compute the integral involves drawing samples from a uniform distribution on a regular area and using the sample mean to approximate the true integral. In higher-dimensional cases, the efficiency of the MC method is extremely low if the region where the target function is not equal to zero is extraordinarily sparse. On the contrary, the IS algorithm draws most samples in the important area.
This strategy improves the efficiency of the integral computation. The IS algorithm plays an important role in the fields of statistical physics, molecular simulation, and Bayesian statistics.
For example, suppose we want to compute the integral of h(x) on a region A; that is,

$$\mu = \int_A h(x)\,dx. \qquad (3)$$

The integral computation (3) can be treated as an expectation calculation:

$$\mu = \int_A \frac{h(x)}{\pi(x)}\,\pi(x)\,dx = E\!\left[\frac{h(X)}{\pi(X)}\right], \qquad (4)$$

where X is a random variable (r.v.) with probability density function (p.d.f.) π(x); that is, X ∼ π(x). If X_1, ..., X_n denote a sample of size n from X, the MC method draws X from a uniform distribution on region A. From the Law of Large Numbers [20], the sample mean can be used to estimate the expectation in (4) as

$$\hat{\mu}_{MC} = \frac{S(A)}{n} \sum_{i=1}^{n} h(X_i), \qquad (5)$$

where S(A) is the area of A and the X_i are r.v.'s from the uniform distribution on A (X ∼ U(A)). However, the efficiency of the MC method (5) will be extremely low if region A is extremely wide or sparse (especially in high-dimensional cases). By contrast, the IS method uses a special p.d.f. g(x) instead of π(x) in (4) to compute the mean μ and utilizes the corresponding sample mean to estimate the expectation in (4):

$$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} \frac{h(X_i)}{g(X_i)}, \qquad X_i \sim g(x), \qquad (6)$$

with variance Var($\hat{\mu}$) = (1/n) Var(h(X_i)/g(X_i)), which means that we can choose an appropriate g(x) close to h(x) to reduce the variance of $\hat{\mu}$. In the extreme situation where we select g(x) = h(x)/∫_A h(x)dx, the variance of $\hat{\mu}$ drops to zero and $\hat{\mu}$ is equal to the exact value ∫_A h(x)dx. However, we cannot directly use the IS method defined in (6) in such an extreme situation, because we do not know the exact value of ∫_A h(x)dx in advance.
Nevertheless, it gives us a significant hint: the closer g(x) is to h(x), the more accurate the result of the IS method. The steps of the IS method for the computation of integral (3) are as follows: (1) Draw the samples X_1, ..., X_n from g(x). (2) Compute the weights ω_i = h(X_i)/g(X_i) for i = 1, ..., n.
(3) Use the mean of the computed weights to estimate the integral in (3):

$$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} \omega_i = \frac{1}{n} \sum_{i=1}^{n} \frac{h(X_i)}{g(X_i)}. \qquad (7)$$

The following theorem shows that the IS estimator in (7) is unbiased.
Theorem 1. The IS estimator $\hat{\mu}$ in (7) is an unbiased estimator of μ.
Proof. To prove that the IS estimator is unbiased, we only need to show that the expectation of $\hat{\mu}$ is equal to μ. Because ω_i is a r.v. and

$$E(\omega_i) = E\!\left[\frac{h(X_i)}{g(X_i)}\right] = \int_A \frac{h(x)}{g(x)}\,g(x)\,dx = \int_A h(x)\,dx = \mu, \qquad (8)$$

we obtain

$$E(\hat{\mu}) = \frac{1}{n} \sum_{i=1}^{n} E(\omega_i) = \mu, \qquad (9)$$

which verifies that the IS estimator $\hat{\mu}$ in (7) is unbiased. This completes the proof of the theorem.
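The following sketch contrasts the uniform MC estimate (5) with the IS estimate (7) on a simple one-dimensional example; the sharply peaked integrand h and the normal proposal g are illustrative choices, not taken from the paper.

```python
# Minimal sketch contrasting the uniform MC estimate (5) with the
# importance-sampling estimate (7) for a sharply peaked integrand on A = [0, 1].
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)
h = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)      # integrand with a sharp peak at 0.5
exact, _ = quad(h, 0.0, 1.0)                        # numerical reference value

n = 2000
# (5): plain MC with uniform samples on A = [0, 1], where S(A) = 1
xs = rng.uniform(0.0, 1.0, n)
mc_est = h(xs).mean()

# (7): importance sampling with a proposal g concentrated near the peak
g = stats.norm(loc=0.5, scale=0.05)
xg = g.rvs(size=n, random_state=1)
inside = (xg >= 0.0) & (xg <= 1.0)                  # restrict h to the region A
w = np.where(inside, h(xg) / g.pdf(xg), 0.0)        # weights h(x)/g(x)
is_est = w.mean()

print("exact %.5f  plain MC %.5f  IS %.5f" % (exact, mc_est, is_est))
```

With the same number of samples, the IS estimate typically shows a much smaller fluctuation around the exact value than the uniform MC estimate, which is the variance-reduction effect discussed above.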
Aside from being an unbiased estimator of the integral presented in (3), the IS estimator exhibits a more efficient and powerful integral computation than the MC estimator defined in (5), especially in higher-dimensional cases. 3.2. IS Algorithm for SD Computation. We use the previously described IS method to compute the SD. Directly using the definition of SD in (1) is not practical for computing the SD value of a data point with respect to a dataset. The direct MC method in (2) also becomes extremely inefficient when the dimension d or the sample size n is excessively large, because the number of simplices containing the original data point decreases with increasing d or n.
The IS algorithm can transform the original p.d.f. into a more efficient one that constructs simplices containing the original data point. In the computation of SD, the MC method randomly selects d + 1 data points to construct the simplex, whereas the IS method chooses data points that are likely to place the original data point inside the simplex. Figure 1 is a 2D example composed of 20 sample data points. The data point x_0 is used to compute the SD value. After sampling the two data points x_1 and x_2, only two of the remaining points (x_3 or x_4) can serve as the final vertex of a simplex that contains the original data point x_0. In this illustrated example, we do not need to count all the simplices after getting x_1 and x_2; only x_3 or x_4 is considered as the final vertex of the simplices containing x_0.
We list the details of the IS algorithm for the computation of SD in high-dimensional cases. Suppose that X^n is a sample of size n in R^d (i.e., X^n = {X_1, X_2, ..., X_n}) and x is a given point in R^d (d ≥ 2). The data points are in general position (i.e., any d data points define a unique (d − 1)-dimensional hyperplane in R^d). The procedure of using the IS algorithm to compute SD (i.e., the computation of SD(x, X^n)) is summarized as follows: (1) Set the IS parameters, including the number of sampling tries N.
(2) For each try t = 1, ..., N: (i) Randomly choose d sample points from X_1, X_2, ..., X_n and denote them as $X_{t_1}, \ldots, X_{t_d}$. (ii) For k = 1, ..., d, compute the simplex data point set $U_t^k$ (i.e., the set of data points that can serve as the final vertex of a simplex containing the original data point x): replace the k-th data point $X_{t_k}$ with the original data point x to obtain a dataset $P_t^k$ of size d; compute the unique director $d_t^k$ perpendicular to the hyperplane determined by $P_t^k$; project all data points X_1, X_2, ..., X_n and x along $d_t^k$, and compute the projected values, which determine the data points falling into $U_t^k$. The weight ω_t for the t-th try is then obtained from the resulting candidate sets. (3) The sample mean of ω_t (t = 1, ..., N) can be treated as the IS estimator of SD(x, X^n); that is,

$$\widehat{SD}_{IS}(x, X^n) = \frac{1}{N} \sum_{t=1}^{N} \omega_t. \qquad (10)$$

Theorem 2. The computational complexity of using the IS algorithm to calculate SD is

$$O(Nd^5 n), \qquad (11)$$

where N is the number of sampling tries of the IS algorithm, d is the dimension of the sample data, and n is the sample size.
Proof. According to the steps for computing SD using the IS algorithm, we need to compute every ω_t for t = 1, ..., N. For every ω_t, every selected sample data point $X_{t_k}$ for k = 1, ..., d must be replaced. The computational complexity of finding the unique director perpendicular to the hyperplane is O(d^3), whereas that of projecting all data points onto the unique director is O(dn). The total computational complexity is O(Nd^5 n).
Then we complete the proof of this theorem. Theorem 2 shows that the computational complexity of the IS algorithm for the computation of SD is a polynomial with dimension d and sample size n as its input arguments, while the exact algorithms for the computation of SD are NP problems; in particular, when the dimension d ≥ 5, there is no algorithm that runs faster than simply generating all simplices and computing the exact SD value (i.e., using O(n^{d+1}) time) [12]. According to the definition of the IS algorithm in (7) and Theorem 1, the IS estimator defined in (10) is an unbiased estimator of SD(x, X^n).
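As a simplified illustration of the idea behind the estimator in (10), the sketch below, after drawing d sample points, directly counts the fraction of the remaining points that would complete a simplex containing x and averages these fractions over N tries. This conditional-counting variant is unbiased for SD for the same reason as (10), but it enumerates candidate final vertices by brute force instead of using the projection-based construction of the sets U_t^k described above.

```python
# Simplified sketch of the conditioning idea behind the IS estimator (10):
# after drawing d points, count the fraction of remaining points that would
# complete a simplex containing x, and average these fractions over N tries.
import numpy as np

def in_simplex(x, vertices, tol=1e-9):
    A = np.vstack([vertices.T, np.ones(len(vertices))])
    b = np.append(x, 1.0)
    try:
        lam = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(lam >= -tol))

def sd_conditional(x, X, N=500, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    weights = []
    for _ in range(N):
        idx = rng.choice(n, size=d, replace=False)          # first d vertices
        rest = np.setdiff1d(np.arange(n), idx)
        hits = sum(in_simplex(x, np.vstack([X[idx], X[j][None, :]])) for j in rest)
        weights.append(hits / len(rest))                     # conditional containment rate
    return float(np.mean(weights))

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
print(sd_conditional(np.zeros(2), X, N=500, rng=4))
```

Because each weight is the conditional probability of containment given the first d vertices, its average has the same expectation as the plain MC indicator but a smaller variance, which mirrors the variance-reduction argument made for the IS estimator.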
2D Simulated Data Example.
In the simulated data experiment, we compare the computed SD results of the IS, exact, and approximate algorithms, including the MC method. The simulated dataset is sampled from a 2D multivariate normal distribution (i.e., N(0_2, E_2), where 0_2 is the 2D zero vector and E_2 is the 2D identity matrix), and the sample size is 100.
We used the exact algorithm [21], the MC method, and the IS algorithm to compute the SD. The selected points x are (0, 0), (0.5, 0.5), (1, 1), and (2, 2). We used the exact and approximate algorithms to compute the SD of x with respect to the dataset. The number of random simplices was set to 100 for the MC and IS algorithms. All computations were repeated 50 times. The computed results (mean, standard deviation (sd), and total CPU time (s)) are summarized in Table 1 and Figure 2.
Since there is an exact algorithm for the SD computation in the 2D case, we can evaluate the accuracy of the IS and MC methods through their mean values and sd values. Moreover, the total CPU time consumed by every algorithm can reflect its efficiency. So, in this experiment, we use these three indicators (mean, sd, and total CPU time) to compare the performances of these algorithms (exact, MC, and IS methods) for the computation of SD. The results reveal that (1) the exact algorithm consumes little CPU time (approximately 0.1 s), (2) the approximate algorithms (MC and IS) can achieve accurate results because their means are extremely close to the exact value, (3) IS performs better than MC as indicated by the smaller sd of the results of the former compared with those of the latter under the same CPU time, (4) all computed SD results from the exact and approximate algorithms are zero at point (2, 2), which means that (2, 2) is outside the data cloud, and (5) compared with the exact algorithm, the simulated example also indicates that the IS algorithm can obtain highly accurate results.
Higher-Dimensional Simulated Data Example.
In this subsection, we compute the SD of different data points by using the MC and IS algorithms on 3D and five-dimensional simulated datasets. We did not use the exact algorithm [21] because it cannot obtain any result within three hours.
In the 3D case, the dataset was sampled from N(0_3, E_3), and the sample size was 1000. We used MC and IS methods to compute the SD of points (0, 0, 0), (0.5, 0.5, 0.5), and
(1, 1, 1). We set the number of random simplices to 100 and repeated the computation 50 times. The computed results are summarized in Table 2 and Figure 3. Because the exact algorithm cannot get any computed SD results within three hours when the dimension d ≥ 3, we can only use the MC and IS methods for the computation of SD in this subsection. Three indicators (mean, sd, and total CPU time) are summarized for the evaluation of the approximate methods. The mean values can be seen as the final computed SD results, and the sd reflects the accuracy of the method (the smaller, the more accurate). The total CPU time reflects the efficiency of the method because it is more efficient if the method consumes less CPU time in the same computation of SD. Table 2 and Figure 3 indicate that (1) the computed SD results decrease when the data points are changed from (0, 0, 0) to (1, 1, 1); the data point (0, 0, 0) is deeper than the data point (1, 1, 1) with respect to the dataset; (2) the two methods have similar computational efficiencies because they consume almost the same total CPU time; (3) the sd obtained by the IS method is smaller than that calculated by the MC method, which means that the former is more accurate than the latter in this case.
In the five-dimensional case, the dataset was sampled from N(0_5, E_5), and the sample size was 1000. We used the MC and IS methods to compute the SD of points (0, 0, 0, 0, 0), (0.5, 0.5, 0.5, 0.5, 0.5), and (1, 1, 1, 1, 1). The number of random simplices was 100, and the computations were repeated 50 times. The computed results (mean, sd, and total CPU time in s) are presented in Table 3 and Figure 4. Table 3 and Figure 4 show that (1) the computed SD values decrease when the data points are changed from (0, 0, 0, 0, 0) to (1, 1, 1, 1, 1), thereby suggesting that the former is deeper than the latter; (2) the SD values in the five-dimensional examples are slightly smaller than those in the 3D examples because the sparsity of the data points increases when the dimension is increased from three to five; (3) the IS algorithm performs better than the MC approach as indicated by the smaller sd of the results of the former compared with those of the latter; (4) the two approximate algorithms consume almost the same CPU time; (5) even after using 100 random simplices, the MC algorithm cannot find any simplex containing the point, whereas the IS algorithm can identify many simplices. In conclusion, the IS method outperforms the MC method in terms of accuracy in these simulated examples.
We also evaluated the MC and IS methods with other numbers of random sampling tries in different datasets. The findings show that the accuracy of the results increases with the number of sampling tries.
Application to Regression and Real Data Example
One of the most important extensions of SD is the robust estimation of regression based on SD. To demonstrate the relevant concept, we consider the linear regression model

$$Y = \alpha + \beta X + \varepsilon, \qquad (12)$$

where the random variables X and Y are in R^1, ε ∼ N(0, σ^2), and α, β, and σ^2 are unknown parameters. Considering that SD(x, X^n) can measure the depth of x with respect to X^n, we extend the definition of SD to the regression model (12) and define the simplicial regression depth

$$SD(\theta, W^n) = \binom{n}{3}^{-1} \sum_{1 \le i < j < k \le n} A\left[r_i(\theta), r_j(\theta), r_k(\theta)\right], \qquad (13)$$

where θ = (α, β) are the parameters, W^n = (Y^n, X^n) are the samples of the model defined in (12), r_i(θ) = Y_i − α − βX_i is the residual based on the i-th sample, and

$$A\left[r_i(\theta), r_j(\theta), r_k(\theta)\right] = \begin{cases} 1, & \text{if } r_i(\theta), r_j(\theta), r_k(\theta) \text{ have alternating signs,} \\ 0, & \text{otherwise.} \end{cases} \qquad (14)$$
The SD-based estimator for model (12) can be defined as the maximizer of SD(θ, W^n); that is,

$$\hat{\theta}_{SD} = \arg\max_{\theta} SD(\theta, W^n). \qquad (15)$$

We consider the physical experiment data concerning the relationship between atmospheric pressure and the boiling point of water, which was discussed by the Scottish physicist James D. Forbes [22]. In the mid-nineteenth century, this experiment was used to illustrate whether a simple measurement of the boiling point of water could substitute for a direct reading of the barometric pressure.
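A minimal sketch of the estimator (15) for simple linear regression is given below. It evaluates the alternating-sign depth in (13)-(14) over a coarse parameter grid rather than the quasi-Newton plus IS search used in the paper, and it assumes that "alternating signs" are taken with the residuals ordered by x; the synthetic data and the grid ranges are purely illustrative.

```python
# Minimal sketch of the estimator in (13)-(15): for a candidate (alpha, beta),
# count residual triples (ordered by x) whose signs alternate, and pick the
# candidate maximizing this depth over a coarse grid (illustrative assumption).
import numpy as np
from itertools import combinations

def simplicial_regression_depth(theta, x, y):
    alpha, beta = theta
    order = np.argsort(x)
    r = (y - alpha - beta * x)[order]                 # residuals ordered by x
    s = np.sign(r)
    triples = list(combinations(range(len(x)), 3))
    hits = sum(1 for i, j, k in triples
               if s[i] != 0 and s[i] == s[k] and s[j] == -s[i])   # +,-,+ or -,+,-
    return hits / len(triples)

def fit_sd(x, y, alphas, betas):
    best, best_theta = -1.0, None
    for a in alphas:
        for b in betas:
            depth = simplicial_regression_depth((a, b), x, y)
            if depth > best:
                best, best_theta = depth, (a, b)
    return best_theta, best

# toy synthetic data loosely resembling a boiling-point/pressure setting
rng = np.random.default_rng(0)
x = np.linspace(194, 212, 17)
y = 0.51 * x - 81 + rng.normal(scale=0.2, size=x.size)
theta, depth = fit_sd(x, y, np.linspace(-90, -70, 41), np.linspace(0.3, 0.7, 41))
print("SD-based fit:", theta, "depth:", depth)
```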
The dataset was collected in the Alps and in Scotland (Table 4 and Figure 5). The linear regression model in (12) was used to fit the Forbes dataset. We used the LS and SD methods to estimate the parameters of the model in (12). The function "lm" in the R Stats package ("stats") can be used to determine the LS estimator of the model in (12). For the SD-based method, we combined quasi-Newton [23] and IS methods to find the maximum point of (15). Moreover, we performed three statistical tests (i.e., the R square value, the normality test, and the test of goodness of fit [24]) for every fitted regression model to obtain a more insightful analysis. The R square (or adjusted R square) value from the significance test gives the percentage of the variance of the dependent variable (Y) that can be explained by the fitted model (α + βX) (see (12)). The normality test is used to test whether the residuals of the fitted model obey a normal distribution, which is the basis of the other statistical tests. For example, under the assumption of normality, the F statistic value in the test of goodness of fit can be used to determine whether the fitted regression model makes sense. We first used the LS and SD approaches to compute the linear regression model with the original Forbes dataset (Table 4, denoted as the original data in this section). The computed regression results are summarized in Table 5 and Figure 5(a); their corresponding statistical tests are summarized in Table 6 and Figure 6. Table 5 and Figure 5(a) show that the LS and SD estimators obtained very similar intercept and slope parameters. This finding suggests that the SD method can capture regression results as accurate as those of the LS method. The statistical test results also confirmed this finding, since the results from the LS and SD methods were very similar. They have very high R square values, which indicate that most of the variance of the dependent variable can be explained by the fitted model. Under significance level 0.01, we accept the assumption of normality, and they pass the goodness of fit test (i.e., the p value of the F statistic is almost zero). In addition, if one needs a higher level of significance (such as 0.05) in this example, then some statistical techniques (e.g., Box-Cox transformation or detection of strong influence points) can be used to improve the regression model (see more details in [22]). However, this is another research topic and there is a lack of sample points in this example; we only focus on the robustness of the regression model computed from the different methods, especially when the dataset is contaminated, and that is what we do in the next experiment.
Table 3: Results (mean, sd, and total CPU time in s) obtained by the MC and IS methods in the five-dimensional experiments.
In the following experiment, we worked with a contaminated dataset derived from the Forbes data. We intentionally changed the pressure of the 16th data point from 29.88 to 59.76. The new dataset was denoted as the contaminated data (Figure 5(b)). We compared the performances of the SD and LS methods in the linear regression model with the contaminated dataset. The regression results are presented in Table 5 and Figure 5(b). Their corresponding statistical tests are summarized in Table 6 and Figure 7.
Table 6: The statistical tests for regression analysis with original data and contaminated data using the LS and SD methods.
The results show that the LS estimator is greatly influenced by the contaminated data point, whereas the SD estimator maintains satisfactory performance. The slope parameter estimated by the LS estimator changes from 0.5229 to 1.0266, which cannot reflect the actual variation trend of the pressure-temperature curve. By contrast, the SD estimator is not affected by the contaminated data point and can still provide the actual variation trend. The estimated slope parameters obtained using the SD method for the two different datasets are 0.5086 and 0.5085, respectively. The statistical test results show that, under the influence of the contaminated data point, the residuals of the fitted models from the two methods do not pass the normality test. However, the R square (or adjusted R square) value from the SD method (0.9917) is much larger than that of the LS method (0.7650), which means that the regression line from the SD method can explain a larger percentage of the variance of the dependent variable than that of the LS method.
These results imply that the SD estimator outperforms the LS estimator in the contaminated dataset experiment in terms of robustness.
Conclusions
The concept of statistical depth plays an important role in mathematical sciences, engineering, regression analysis, and life sciences. In this study, we computed the SD using the IS method and found that this new approach performs better than other exact and MC methods in terms of accuracy and efficiency. The simulated and real data examples illustrated the advantage of this new method. Finally, we tested the SD-based regression analysis through a concrete physical data example. The result indicated the excellent robustness of the proposed method compared with the LS estimation.
Given the many favorable properties of the proposed method, further research can be conducted from different angles. First, the IS parameter (i.e., the number of sampling tries N) plays an important role in the computation of SD, so the determination of N before running the IS algorithm is yet to be thoroughly investigated. Second, the IS method for the SD computation can be improved by sampling the data points via other, more important simplices (not only the last data point in the possible simplices). Third, with the development of modern computer science, multicore high-performance computers are gaining popularity; therefore, the IS method can be extended to a parallel-computation-based version. Lastly, approximate algorithms (advanced MC methods) for other statistical depths (e.g., halfspace depth, projection depth, and regression depth) can be further explored.
Data Availability
The experimental data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2021,
"sha1": "6dbc5c49240cc8efafd9ad49a66bf21426dd7e9e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2021/6663641.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4e57041145cb7dfaea71b9d1c6d81dafbbd0c496",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": []
} |
252503639 | pes2o/s2orc | v3-fos-license | Assessment of Flooding Impact on Water Supply Systems: A Comprehensive Approach Based on DSS
The assessment of flood impact on a Water Supply System (WSS) requires a comprehensive approach including several scales of analysis and models and should be managed in the Water Safety Plans (WSP), as recommended in the EU Water Directive 2020/2184. Flooding can affect the quality of groundwater and surface water resources and can cause supply service interruption due to damaged infrastructures. A complete approach to address flood impact on WSS is required but not yet available, while only specific aspects have been investigated in detail. This work introduces a comprehensive tool named WAter Safety Planning Procedures Decision Support System (WASPP–DSS) developed in the context of the MUHA (Multihazard framework for Water Related risks management) project. The tool is mainly addressed to small water utilities (WU) for supporting WSP development and is based on two main premises: 1) a correct approach for WSS risk analysis requires a multi-hazard perspective encompassing all the system components and different hazards; 2) other institutions in addition to WUs have to be involved in WSS risk analyses to harmonize monitoring and response procedures. The tool is here applied to risks associated with flooding and demonstrated for three case studies. The WASPP–DSS, tested by eight WUs, was found to be a potentially valid support for small WUs that must start drafting the WSP in a comprehensive way, and it can provide a common shared scheme. Improvements are desirable, such as including a specific section to consider the issue of loss of water resources from reservoirs due to overflow.
Introduction
The Water Supply System (WSS) complexity is related to the dynamic nature of the characteristics influencing it, such as climate change and increasing demand due to population growth (Amarasinghe et al. 2017; Ghandi and Roozbahani 2020).
The water supply, along with sanitation, is considered a main factor in environmental sustainability, human health, social services, and resilience (Luh et al. 2017). Service interruption due to disasters is a scenario that must be considered in WSS design and management. Different hazards can impact the WSS, such as drought and earthquake (Pagano et al. 2021; Amarasinghe et al. 2017), operational losses (Bozorgi et al. 2021), accidental pollution (Di Cristo and Leopardi 2008), and floods (Arrighi et al. 2017; Chau et al. 2021). Assessing the impact of flooding on WSS requires a comprehensive approach able to account for several processes potentially leading to interruption of water supply and/or water quality degradation, which in turn involve several scales of analysis, from the catchment area to the distribution network. The assessment of natural hazard impacts on WSS has to be addressed and managed in the Water Safety Plans (WSP), whose implementation is supported and advised by the WHO (2017).
During flooding, scarcity of safe drinking water (McCluskey 2001; Bariweni et al. 2012), disruption of water treatment facilities and, as a consequence, disease outbreaks (Shimi et al. 2010; Speranza 2010) are the most frequent problems. Moreover, the reduction of groundwater quality can be caused by pollutant transport and by the effect of floods on groundwater recharge (Comte et al. 2018; Alam et al. 2020; Zhang et al. 2017).
Severe flooding can also cause interruption of abstraction from artificial reservoirs and quality deterioration of stored water due to turbidity (Chou and Wu 2010). Flooding can affect well fields and result in pump failure and/or ingress of chemically/microbiologically contaminated flood water into damaged wells (Joannou et al. 2019; Sweya and Wilkinson 2020). It can also damage the treatment component, producing interruption of treatment/water quality control (Hedera 1987; McCluskey 2001; Barnes et al. 2012; Koh et al. 2017). Lastly, flooding can affect the distribution system, damaging infrastructure and leading to disruption of the supply service and contamination of the water resources (Arrighi et al. 2017; Joannou et al. 2019).
In this context, the MUHA project, funded by the European INTERREG V-B Adriatic-Ionian ADRION Programme 2014-2020 (https://muha.adrioninterreg.eu/), developed a tool for WAter Safety Planning Procedures Decision Support System (WASPP-DSS). MUHA aims to improve forecasting, prevention, and mitigation capacities for different risks in WSSs, strengthening cooperation between civil protection systems and operators at national, European, and international levels in the implementation of the WSP.
This process requires common risk analysis tools as a basis for sound intervention planning. Some elements of novelty with respect to both the WHO guidelines and the available national guidelines are: 1) the tool is based on a matrix approach that crosses the WSS components with the hazards potentially threatening water safety, as required by the WHO for water safety plan development; 2) the tool makes it possible to fully account for the quantitative aspects affecting water safety due to climate change and, more generally, for the quantity issues explicitly mentioned in the WHO guidelines.
In this work, Sect. 2 describes the procedure to evaluate the flooding impact on WSS and the main characteristics of the tool; in Sect. 3 the application of the tool to three case studies is presented and, lastly, Sects. 4 and 5 outline the discussion and the conclusions.
Procedure for Flooding Impact Assessment
The conceptual scheme of the comprehensive approach for evaluating the flood risk impact and consequences on a WSS, as well as adaptation measures, is shown in Fig. 1. The scheme depicts the logical flow to be followed by the WU to estimate flood impacts on the WSS; it considers both surface water bodies and groundwater as sources for abstraction, and it has to be read clockwise starting from the natural hazard occurrence.
Starting from the natural hazard occurrence (1), it has to be considered whether severe hydro-meteorological events can produce, at the basin scale, an increase in turbidity, due to the high concentration of suspended sediments caused by high soil erosion, together with organic matter and possible pollutant loads in the surface runoff. When the surface flow reaches the river and/or enters an artificial reservoir, it can significantly affect the quality of the surface water body sources (2). Similarly, the quality of groundwater (3) can be deteriorated by polluted surface runoff if it reaches damaged wells or the recharge area of springs. Extreme rainfall can cause inundation, with problems related to sediment, organic and pollutant transport potentially affecting surface and groundwater sources. When abstraction is done from a deteriorated source, the raw water quality (4) entering the treatment plant needs to be analyzed to determine whether it is suitable for standard treatment, requires a modified treatment, or is not appropriate, leading to a temporary interruption of abstraction and distribution. Uncontrolled or particularly polluted source water may directly affect the treatment, with possible failures (5) of the process sections and consequences for the treatment efficiency (6). Moreover, treatment plants (7), which are powered by electricity, can be flooded, causing reduction or interruption of the service. The flood hazard maps (8), generated by a chain of hydrologic and hydraulic models, and/or the delineation of historical flooding (8), identified through the newly available high-resolution satellite images that can delineate the boundaries of flooded areas, also together with fragmentary ground/remote data, allow the identification of exposed elements. The exposure analysis (9) produces a list of exposed components (10), i.e. potentially vulnerable elements of both the distribution and treatment systems. The possible failure of these elements during flood events can be foreseen, as well as the quantification of the effects in terms of quantity and quality of the available resource (11), considering modelling results. When the impact on the efficiency of the water supply service is expected to be unsustainable, adaptation measures (12) to face the possible emergency phase can be identified and planned in detail.
The above-described comprehensive approach can be conveniently used to develop WSP and to identify potential flood impacts on the WSS components. The identified information can be organically gathered in the WASPP-DSS tool.
WASPP-DSS Tool
The MUHA tool (http://muha.apps.vokas.si/home) builds on the WHO guidelines (World Health Organization and International Water Association, 2009), which suggest 11 modules in the life-cycle of a WSP to support its implementation.
The tool focuses on the sub-group "System Assessment", specifically on Modules 2 ('Describe the water supply system'), 3 ('Identify the hazards and assess the risks') and 4 ('Determine and validate control measures, reassess and prioritize').
A detailed survey of the current status of WSP implementation in the ADRION area, performed in the MUHA project, highlighted the need for a common scheme, shared by all WUs and institutions, for the analysis of the possible hazardous events affecting the components of a WSS. It was also found that, although large WUs generally have in-house the skills and knowledge to develop a robust WSP, small and medium ones face several internal issues. On this basis, MUHA identified, as a first support for WUs that need to start developing a WSP, a tool able to identify the WSS components prone to specific hazards and to rank hazards and related risks, delivering initial risk matrices according to the WHO guidelines (World Health Organization and International Water Association, 2009). The WASPP-DSS consists of a catalogue of possible hazardous events affecting the different components of the WSS listed in Fig. 2a, from 'surface water resources' to 'governance and future hazards'.
Each foreseen hazardous event is described in a specific box (Fig. 2b), where the user is requested to evaluate its probability of occurrence and its severity. The two components are combined into a risk estimate categorized as very low, low, medium, high or very high.
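As an illustration of how such a probability-severity combination can be implemented, the minimal Python sketch below maps two ordinal scales onto the five risk classes mentioned above. The class labels follow the text, but the scoring thresholds are an assumption for illustration only, not the actual WASPP-DSS scoring rules.

```python
# Minimal sketch of a probability x severity risk matrix.
# The threshold values are assumed, not the WASPP-DSS internal logic.

PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
SEVERITY = ["insignificant", "minor", "moderate", "major", "catastrophic"]
RISK_LABELS = ["very low", "low", "medium", "high", "very high"]

def risk_category(probability: str, severity: str) -> str:
    """Combine probability and severity of a hazardous event into a risk class."""
    p = PROBABILITY.index(probability) + 1      # 1..5
    s = SEVERITY.index(severity) + 1            # 1..5
    score = p * s                               # 1..25
    if score <= 4:
        return RISK_LABELS[0]
    if score <= 8:
        return RISK_LABELS[1]
    if score <= 12:
        return RISK_LABELS[2]
    if score <= 20:
        return RISK_LABELS[3]
    return RISK_LABELS[4]

if __name__ == "__main__":
    print(risk_category("unlikely", "major"))   # -> "low" with these assumed thresholds
```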
Once the "catalogue of events" is completed, the overall risk assessment is given in terms of number of hazardous events completed (Fig. 3a), number of hazardous events per component and hazard category (Fig. 3b), severity of consequences by component and by hazard (Fig. 3c, d, respectively) and the risk category by component and by hazard (Fig. 3e, f, respectively).
Application and Results
In the MUHA project the WASPP-DSS was tested on the six pilot areas of the ADRION region considering four hazards: drought, flooding, accidental pollution and damage to infrastructure due to earthquakes.
In this work, the tool is demonstrated for flooding impact analysis in three pilot areas: the Ridracoli reservoir in Italy and two municipalities (Larissa in Greece and Zadar in Croatia).
In the following, the main outcomes of the tool application are presented for each case study. It is worth stressing that some analyses need to be performed outside the tool: e.g., for the Ridracoli dam, previous studies on the water volume lost through the overflow process were available and could be included as important indications in the tool.
Ridracoli Pilot Area (Italy)
The Ridracoli dam, located in northern Italy, is managed by the Romagna Acque company. The reservoir can store a maximum of 33 million cubic meters of water. The water is made drinkable by passing through a treatment plant and supplies 50 municipalities, serving 950,000 inhabitants and millions of tourists in summer. No severe issues were experienced in recent years due to flood events; however, floods could affect the dam in the future because climate change is expected to exacerbate extreme events both in terms of drought periods and flood waves. When dealing with flood impact on artificial dams, the loss of water through the overflow process and the increased sediment transport are recognized as important issues. The first process occurs when extreme precipitation events produce large volumes of water entering the reservoir rapidly, with the available storage not sufficient to contain them. The second issue is caused by increased sediment transport, which affects water quality and leads to sedimentation, reducing the reservoir storage.
WASPP-DSS Application
The main statistics and risk assessment derived by the tool application are shown in Fig. 4 where the total number of hazardous events (not only the ones due to flooding) are summarized.
For the sixth category of WSS components (i.e. Treatment), nearly all the hazardous event sections were filled in (Fig. 4a). For this category, the severity of consequences is mostly indicated as causing minimal effects (Fig. 4b), with a risk category mainly classified as very low (Fig. 4c).
Focusing on flood impact, the results of the analysis are summarized in Table 1. Concerning the 'Surface water sources', floods could lead to water quality degradation due to significant surface runoff causing erosion and consequent high sediment loads. The probability of occurrence cannot be estimated because it requires investigations not yet developed. The consequences were classified as moderate because the impact would mainly be the need for a deeper treatment in the purification plant. Thus, the risk was classified as medium.
(Fig. 3 caption: Overall representation of the hazard analysis performed on a generic water supply system through the WASPP-DSS tool: a) overview of completion; b) overview of hazards; c) severity of consequences; d) severity of consequences by hazard; e) risk evaluation; f) risk category by hazard.)
Floods could cause interruption of the water supply due to damage to the reservoir; the dam could be compromised, up to the extreme consequence of total or partial failure. This hazard was considered potentially present, with severe consequences and a return period of 30 years or more; therefore, the risk was classified as low. Moreover, the sediments entering the reservoir during floods can accumulate, reducing the storage volume. The consequences were assessed as major, with a probability of occurrence that cannot be assessed with the available information, and the risk was classified as medium.
The last row of Table 1 is related to the 'Loss of resource (water volume released through overflow process)', currently not included in the tool due to the lack of an appropriate section. The results of a previous analysis, presented below, allowed this section to be filled in.
Table 1. Flood-related hazardous events for the Ridracoli pilot area (columns: WSS component | hazardous event | probability of occurrence | severity of consequences | risk):
Drinking water source - surface water (1) | Contamination of catchment zone (1.12) | hazard is present but probability cannot be assessed | moderate effects | medium
Supply System - Reservoirs and pumps (either directly after treatment or in the distribution system) (7) | No water supply / contamination of water (134.1) | every 30 years or more | severe effects | low
Supply System - Reservoirs and pumps (7) | Water quality deterioration (140.1) | hazard is present but probability cannot be assessed | major effects | medium
Supply System - Reservoirs and pumps (7) | Loss of resource (water volume released through overflow process) | every 10 years | moderate effects
Historical Flood Events Analysis
For the Ridracoli dam, a study of historical floods was developed to analyze possible flood management scenarios. First, the main floods entering into the reservoir during the last years were reconstructed by exploiting the outflows from the reservoir (assessed through equations and graphs of the regulating devices), the recorded lake levels and available reservoir curve.
Second, the reconstructed flood hydrographs were used as input to the reservoir to investigate different scenarios for dam management by assuming different initial lake levels.
The study exploited a lamination model (Castorani and Moramarco 1995) based on the continuity equation

q_in - q_rel = dW/dt ,          (1)

where q_in = inflow to the artificial reservoir; q_rel = outflow from the artificial reservoir; W = W_0(h) = storage volume corresponding to the water level h; W_0 = reservoir volume derived from the lake level-volume curve; h = water level with respect to the reference plane; t = time.
The total release, q_rel, is the sum of all the contributions to the downstream outflow from the various outlets and spillways (free-surface spillway, bottom and middle outlets, and the discharge diverted to the treatment plant). Equation (1) allows q_in to be computed from the knowledge of the total outflow and of the time variation of the stored water volume.
Similarly, the outflow and the lake level variation can be estimated when the incoming hydrograph is known.
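A minimal numerical sketch of Eq. (1) is given below: given the recorded total outflow and the storage volume derived from the lake level-volume curve, the inflow hydrograph is estimated from a finite-difference form of the continuity equation, q_in = q_rel + dW/dt. Variable names and the hourly time step are illustrative assumptions, not the actual implementation of the lamination model.

```python
import numpy as np

def reconstruct_inflow(q_rel, storage, dt=3600.0):
    """Estimate reservoir inflow [m^3/s] from total outflow and storage volume.

    q_rel   : array of total released discharge [m^3/s] at each time step
    storage : array of stored volume W [m^3], e.g. from the lake level-volume curve
    dt      : time step [s] (hourly records assumed here)
    """
    q_rel = np.asarray(q_rel, dtype=float)
    storage = np.asarray(storage, dtype=float)
    dW_dt = np.gradient(storage, dt)      # finite-difference dW/dt
    return q_rel + dW_dt                  # continuity: q_in = q_rel + dW/dt

# Example with synthetic data: constant release and rising storage imply q_in > q_rel.
if __name__ == "__main__":
    hours = np.arange(24)
    q_out = np.full(24, 10.0)             # m^3/s
    volume = 1.0e7 + 3.6e4 * hours        # +10 m^3/s of net storage per hour
    q_in = reconstruct_inflow(q_out, volume)
    print(q_in[:3])                       # ~[20, 20, 20] m^3/s
```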
Historical Inflow Hydrographs Reconstruction
The lack of a monitoring station upstream of the dam prevents direct observation of the reservoir inflows. Therefore, the events were reconstructed using Eq. (1), starting from the known outflows and the time pattern of the reservoir levels at hourly intervals. The inflow hydrograph was assessed for the six main floods in the period 2010-2019. Table 2 summarizes the event characteristics and Fig. 5 shows the estimated inflow hydrograph, Q_in, the total outflow released from the dam, Q_out, and the trend of the lake level for the flood event that occurred in January-February 2014, characterized by the highest flood peak.
Historical Floods Lamination Scenario
The six reconstructed floods were considered as inflows to the reservoir in a second step of analysis. Moreover, a flood with a 100-year return period and a peak flow of 140 m³/s was considered. The analysis aimed to identify, for each flood, the maximum initial lake level that could be allowed in order to keep the maximum released discharge below 50 m³/s. In this way, we wanted to store the maximum water volume in the reservoir and, at the same time, to guarantee a safe condition for the downstream territory, where no negative impacts are expected for releases lower than 50 m³/s. The analysis was carried out assuming the middle and bottom outlets closed, no water diverted to the treatment plant, and the discharge released only by the free-surface spillway, which is described by a known discharge-lake level relationship. First, we identified the maximum initial lake level that would allow no water to be released downstream from the free spillways, storing the maximum water volume during the flood.
Second, we optimized the maximum initial lake level so that the release from the free spillways would not exceed 50 m³/s, to avoid flooding problems. The results, summarized in Table 3, show that with an initial lake level equal to the crest of the surface spillways (557.3 m asl) and without other releases, all the flood events would produce an outflow lower than 50 m³/s, except for the May 2019 flood. This was the event with the highest volume and, hence, for it we estimated the lowest optimal initial lake level (552.1 m asl). The results can guide planning measures to reduce the water loss through the overflow process and should be included in the tool once an appropriate section is developed.
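The scenario analysis described above can be sketched as a simple forward routing of each reconstructed hydrograph for a trial initial lake level, combined with a search for the highest level that keeps the spillway release below 50 m³/s. The rating curve and level-volume relation used below are placeholder functions chosen only to make the example runnable; the real analysis relies on the dam's known discharge-lake level relationship.

```python
import numpy as np

def route_flood(q_in, h0, dt, vol_of_level, level_of_vol, spill_q):
    """Explicit-Euler routing of an inflow hydrograph through the free spillway only."""
    h, q_max = h0, 0.0
    W = vol_of_level(h)
    for q in q_in:
        q_out = spill_q(h)
        W += (q - q_out) * dt
        h = level_of_vol(W)
        q_max = max(q_max, q_out)
    return q_max

def max_initial_level(q_in, dt, trial_levels, q_limit=50.0, **curves):
    """Highest initial lake level whose routed peak release stays below q_limit."""
    ok = [h for h in trial_levels if route_flood(q_in, h, dt, **curves) <= q_limit]
    return max(ok) if ok else None

if __name__ == "__main__":
    # Placeholder linear reservoir and weir-type spillway (crest at 557.3 m asl, assumed shape).
    crest, area = 557.3, 5.0e6                                    # m asl, m^2 (assumed)
    vol_of_level = lambda h: area * h
    level_of_vol = lambda W: W / area
    spill_q = lambda h: 0.0 if h <= crest else 30.0 * (h - crest) ** 1.5
    q_in = np.concatenate([np.linspace(5, 120, 24), np.linspace(120, 5, 48)])  # synthetic flood
    best = max_initial_level(q_in, 3600.0, np.arange(552.0, 557.4, 0.1),
                             vol_of_level=vol_of_level, level_of_vol=level_of_vol, spill_q=spill_q)
    print(best)   # highest feasible starting level for this synthetic event
```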
Municipality of Larissa Pilot Area (Greece)
The municipal water supply and sewerage company of Larissa (DEYAL) supplies water to the municipality of Larissa (Greece) and other local districts. The pilot area covers 335.12 km² and serves approximately 230,000 people through a 1,110 km pipeline network and 84,126 water meters. The WSS of DEYAL consists of 11 water supply zones abstracting water through 28 active boreholes. Water is transferred to water tanks and from there distributed to the consumers. The daily water volume supplied is 33,888 m³. The WSS of DEYAL is located in the potential high-risk flood zone of the Pinios river (3,353 km²).
Five flood events took place in the area of Larissa municipality in the period 2012-2018; however, none of these affected the WSS.
WASPP-DSS Application for Flooding Risk
The flooding risk assessment using the WASPP-DSS resulted in the outcomes summarized in Table 4.
Concerning the 'Groundwater sources', floods could lead to water contamination due to agricultural runoff during flooding events. This hazard is present, but its probability of occurrence could not be estimated (it was never experienced in the past), and the consequences were assessed as major due to the pollutants and pathogens from manure spreading involved in such contamination events. Thus, the risk was classified as medium.
Moreover, floods could lead to water contamination due to wastewater overflows. This hazard was estimated as potentially occurring every 30 years or more, since there are no wastewater facilities very close to the water intake points, and the consequences were assessed as severe due to the pathogens involved in such contamination events. Thus, the risk was classified as low.
Concerning the 'Raw water source', floods were considered able to cause contamination phenomena, assessed as a potential hazard whose probability of occurrence could not be estimated, while the consequences were assessed as major. Thus, the risk was classified as medium.
Table 4. Flood-related hazardous events for the Larissa pilot area (columns: WSS component | hazardous event | probability of occurrence | severity of consequences | risk):
Drinking water source - groundwater (2) | Contamination of aquifers (9.11) | hazard is present but probability cannot be assessed | major effects | medium
Drinking water source - groundwater (2) | Contamination of aquifers (9.3) | every 30 years or more | severe effects | low
Raw water intake (4) | Contamination through openings (e.g. well-head, ventilation pipe, grit chamber, stilling basin, overflow pipe, doors…) (23.1) | hazard is present but probability cannot be assessed | major effects | medium
Supply System - Reservoirs and pumps (either directly after treatment or in the distribution system) (7) | No water supply / contamination of water (134.1) | hazard is present but probability cannot be assessed | major effects | medium
Supply System - Reservoirs and pumps (7) | No/low pressure/flow in network water; network water contamination (141.1) | hazard is present but probability cannot be assessed | major effects | medium
Considering the 'Reservoirs' in the supply system, floods could cause contamination or destruction of the water supply due to failures of the reservoirs. This hazard was considered potentially present, with major consequences, but the probability was not quantifiable. Thus, the risk was classified as medium.
Analyzing the 'Pumps' in the system, floods can cause contamination or destruction of the water supply due to failures of pumping stations. The hazard was considered potentially present, causing major consequences and with a non-quantifiable probability of occurrence, leading to a medium risk.
Pilot Area Managed by Zadar Water Supply Company (Croatia)
The WSS managed by Vodovod d.o.o. Zadar is situated in the coastal area of northern Dalmatia in Croatia; it is a typical karst system with numerous karst phenomena. It covers an area of about 2,152.50 km² and supplies about 115,000 consumers in three cities and 16 municipalities.
The WSS consists of 16 extraction sites, 35 reservoirs and break chambers, with a daily capacity of 37,240 m³ and about 1,000 km of pipelines. Average water supply amounts to around 500-600 L/s and, in summer, around 800 L/s. From 2000 to 2018, 27 flood events were registered in the area; the most significant flood occurred in September 2017, when buildings, cattle, agricultural land and equipment were heavily damaged by a torrential flood. The pilot area is susceptible to climate change, which is causing more frequent extreme precipitation and severe floods.
WASPP-DSS Application for Flooding Risk
The results of the analysis are summarized in Table 5.
Concerning the 'Groundwater sources', the hazardous event due to the leaching of contaminants from waste disposal sites and material storage, as a result of safety features failing during floods (first row in Table 5), is considered to have a rather low probability (every 10 years). Moreover, no pollution events at water supply facilities were recorded in the past and, hence, the consequences can be considered moderate, leading to a risk classified as low. During floods, agricultural runoff can increase pesticide and nitrate concentrations (second row in Table 5). Past observations indicated that the maximum allowable concentrations for drinking water were not exceeded. However, water pollution could occur if a flood coincided with the period of maximum pesticide concentration. The available data indicate that this hazard can be characterized by a return period of 10 years with minor consequences and, hence, the risk was classified as low.
Discussion
The first version of the WASPP-DSS was tested by eight WUs in the context of the MUHA project; it was found to be a potentially useful tool to gather and compare all the available information on the WSS components and the possible issues caused by different hazards. The tool was evaluated as useful for the early-stage development of the WSP, mainly for small water utilities that typically do not have in-house the necessary skills and knowledge. It allows the description of the WSS components and their links, hazard identification and risk assessment, providing overall outcomes in terms of number of hazardous events, severity of consequences and risk category classification. The possibility to choose from a predefined list of hazards makes the classification easier, as well as the selective extraction of results based on the type of hazard, e.g. flooding. Moreover, having a shared scheme of analysis may boost comparison among different WSPs, promoting collaboration among water utilities acting in nearby territories or, in some cases, having interconnections.
In this work, we focused on flooding hazards, testing the WASPP-DSS on three case studies. The Ridracoli dam is an artificial reservoir feeding a drinking water distribution network. Therefore, the loss of water volume from the surface spillways during floods is an issue that was investigated, and the relevant results are shown in this work even if no related specific section is currently available in the tool.
Further developments are necessary to also account for: a) the multi-hazard dimension, when the effects of different hazards overlap; b) the propagation of impacts through the whole chain from resource(s) to tap. In fact, the tool does not allow the spatial dimension of water infrastructures to be considered. Adding spatial data (e.g. maps with the location of infrastructures and assets, flooding maps) would provide more detailed and distributed information on the risk level over a complex infrastructural system, and would help to directly identify suitable mitigation measures. When dealing with flood hazard, the main limitation is the lack of a GIS section that would allow the user to import flood hazard maps and other geodata (e.g., historical flooded areas, high-resolution satellite data), which are important for identifying the location and overlap of flood-prone/flooded areas and WSS components.
Conclusions
The assessment of flood impact on WSSs requires different scales of analysis and is fundamental to ensure safe drinking water distribution. Floods can impact the quality of the water resources in multiple ways and can cause the interruption of the supply service. A comprehensive approach to address the issue of flood impact on WSSs is presented here, based on the WASPP-DSS tool developed in the context of the MUHA project. The tool provides a shared scheme of analysis among different WSPs, promoting collaboration among WUs acting in nearby territories. It is based on a matrix approach that crosses the WSS components with the hazards potentially threatening water safety and makes it possible to fully account for the quantitative aspects affecting water safety due to climate change.
The tool, applied to several pilot areas, has proven to be a valid support for the development of the WSP, allowing the description of the WSS components and their links, hazard identification and risk assessment. It was found particularly useful for small WUs, when WSP development and implementation is still at an early stage. Specifically, the tool provides an overview of the main statistics and risk assessment outcomes, allowing the most important issues to be identified and addressed through appropriate mitigation measures. The first version of the WASPP-DSS can certainly be improved, for example by including an appropriate hazardous event section to take the possible water volume loss during floods into consideration, and a GIS interface. Further developments are also necessary to account for the multi-hazard dimension, when the effects of different hazards overlap, and for the propagation of impacts through the whole chain from resource(s) to tap.
"year": 2022,
"sha1": "039956243dba5253e071b84e486d985ccbc41c20",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11269-022-03306-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8fe4fec4516c05bafd2d6b03aae5a3569bc67867",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Quantum Chaos and Circuit Parameter Optimization
We explore quantum chaos diagnostics of variational circuit states at random parameters and study their correlation with the circuit expressibility and the optimization of control parameters. By measuring the operator spreading coefficient and the eigenvalue spectrum of the modular Hamiltonian of the reduced density matrix, we identify the universal structure of random matrix models in high-depth circuit states. We construct different layer unitaries corresponding to the GOE and GUE distributions and quantify their VQA performance. Our study also highlights a potential tension between the OTOC and BGS-type diagnostics of quantum chaos.
I. INTRODUCTION
The random circuit model provides a framework for hybrid quantum/classical algorithms for solving optimization and learning tasks, formulated as a search for the ground state of k-local Hamiltonians [1][2][3]. Generic quantum gates create entanglement between qubits. Entanglement is a valuable resource for achieving quantum advantage, but at the same time it becomes a hurdle for the successful optimization of circuit control parameters, specifically when random circuit states are much more highly entangled than the ground state of the Hamiltonian encoding the task [4][5][6][7][8]. Quantum information in such highly entangled states is scrambled, and a successful adjustment of circuit parameters via local gradient search typically requires over-parametrization [9][10][11][12].
Quantum chaos [13] is correlated with information scrambling [14][15][16][17][18] and is in general a feature of interacting dynamical quantum systems [19]. One expects deep random circuit states to be generically chaotic. While the entanglement properties of a state can be quantified by entanglement entropies constructed from the eigenvalues of its reduced density matrix, the quantum chaotic structure of the state is largely diagnosed by measures that depend on the level spacing of the reduced density matrix. The aim of this work is to investigate the chaotic properties of random circuit states, with focus on the relationship between quantum chaos, circuit expressibility, and optimization performance.
A common diagnostic of quantum chaos that characterizes information scrambling is operator spreading, which can be quantified by the 4-point out-of-time-order correlation function (OTOC) [16,17,20]. One considers an operator at time t = 0, denoted by O(0), that acts on a small number of qubits and evolves it in the Heisenberg picture to O(t) = U†(t) O(0) U(t), supported on a larger number of qubits at time step t. This growth is typically ballistic with a characteristic velocity, known as the butterfly velocity, reminiscent of the spread of classical chaotic trajectories. Considering a discrete time evolution driven by the random circuit unitary, this velocity depends on the circuit architecture, i.e., the arrangement and type of quantum gate unitaries. However, the operator growth is associated with the spectral values of the circuit reduced density matrix themselves, much like the entanglement entropy, but not with their level spacings. Let us denote the density matrix of the circuit state by ρ_c and divide the n qubits of the quantum register into two subsets A and B of equal size, n_A = n_B = n/2. The modular Hamiltonian H(ρ_A) of the reduced density matrix ρ_A = Tr_B ρ_c is defined through

ρ_A = e^{-H(ρ_A)} / Z_A ,          (2)

where Z_A = Tr_A e^{-H(ρ_A)} is the partition function of the modular Hamiltonian. It is indeed the eigenspectrum of H(ρ_A) that encapsulates the entanglement and operator spreading properties of the quantum circuit. There is by now accumulated evidence that the chaotic properties of Hamiltonian systems reveal themselves in the level spacing distribution of the Hamiltonian energy spectrum [21]. This understanding can be extended to the eigenspectrum of the modular Hamiltonian, diagnosing the chaotic nature of a quantum state from its level spacing distribution [22]. We will explore various quantum chaos diagnostics as a function of the circuit depth and show that deep circuit states exhibit the characteristics of random matrix models. Specifically, the level spacing distribution of the modular Hamiltonian, the r-statistics, and the spectral form factor will manifest the universal structure of the Gaussian Orthogonal Ensemble (GOE) or Gaussian Unitary Ensemble (GUE), depending on the types of quantum gates introduced in the random entangling circuit.
The paper is organized as follows. In Section II, we will describe the architecture of layered random circuits used for numerical simulation and briefly review the relationship between the number of circuit layers and VQA performance. Section III will explore the connection between the operator spreading, a typical diagnostic of quantum chaos, and the optimization efficiency of control variables. We will then study in Section IV the level spacing distributions of the modular Hamiltonians, r-statistics, and their spectral form factors at different circuit depths, showing that all diagnostics match those of random matrix ensembles in the high-depth regime. Section V will conclude with discussion.
II. VQA PERFORMANCE
We begin by specifying the circuit architecture used in this paper and briefly review the relation between the entanglement generated by random circuit unitaries and the optimization efficiency of variational quantum algorithms (VQAs) [4][5][6][7][8][11].
A. Circuit Architecture

Figure 1 illustrates the variational circuit architecture assumed throughout our discussion. The n quantum registers are arranged periodically, with qubit i identified with qubit i+n, and acted upon by a chain of two-qubit unitaries. The unitaries are made of single-qubit gates R(θ) acting on all n distinct qubits, followed by two-qubit entanglers that operate on all adjacent pairs of qubits. Every layer swaps the roles of odd/even qubits, alternating between controlling/controlled and controlled/controlling pairs. In our numerical simulation, we will consider only two types of single-qubit gates. They are Pauli rotation gates along the y-axis,

R_y(θ) = exp(i σ_y θ) = [[cos θ, sin θ], [-sin θ, cos θ]],

which are real and orthogonal, and those along the x-axis, R_x(θ) = exp(i σ_x θ), which are complex-valued unitary matrices. All the rotation angles are randomly chosen from the uniform distribution U(0, 2π) at circuit initialization. We will use the symbol θ_{ℓ,i} to denote the angle that rotates the i-th qubit at the ℓ-th layer, for 1 ≤ i ≤ n and 1 ≤ ℓ ≤ L. We will consider four different types of layered circuits for the numerical experiments. If the single-qubit rotations are all along the x or y axis, followed by the CZ entangler actions, the corresponding circuits will be called R_x + CZ or R_y + CZ. Two additional variants of the circuit structure, dubbed R_x + CZ + R_y + CZ and R_y + CP, will be examined, where the one-qubit gates at odd/even layers alternate between R_x/R_y and where all CZ's are replaced with CP's, respectively. In general, different gate choices will lead to different entanglement and chaos properties.

(Figure 1: (a) the circuit architecture, with single-qubit rotations R_{x/y}(θ_{ℓ,i}) followed by two-qubit entanglers acting on adjacent qubit pairs.)
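A minimal NumPy state-vector sketch of this layered ansatz is given below. The gate conventions follow the text (R_y(θ) = exp(i σ_y θ), R_x(θ) = exp(i σ_x θ), CZ entanglers on alternating adjacent pairs with periodic boundary), while the qubit ordering and the little-endian bit convention are illustrative assumptions, not the code used in the paper.

```python
import numpy as np

def ry(theta):
    # R_y(theta) = exp(i * sigma_y * theta): real, orthogonal.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def rx(theta):
    # R_x(theta) = exp(i * sigma_x * theta): complex-valued.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 1j * s], [1j * s, c]])

def cz_diag(n, i, j):
    """Diagonal of a CZ gate acting on qubits i and j of an n-qubit register."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if (b >> i) & 1 and (b >> j) & 1:
            d[b] = -1.0
    return d

def layer_state(n, n_layers, gate=ry, rng=np.random.default_rng(0)):
    """|psi> after L layers of random single-qubit rotations + CZ entanglers (periodic chain)."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for ell in range(n_layers):
        thetas = rng.uniform(0.0, 2.0 * np.pi, size=n)
        U = np.array([[1.0]])
        for q in range(n):
            U = np.kron(gate(thetas[q]), U)   # qubit q sits on bit q (little-endian)
        psi = U @ psi
        start = ell % 2                        # alternate odd/even pairs each layer
        for q in range(start, n, 2):
            psi *= cz_diag(n, q, (q + 1) % n)
    return psi

if __name__ == "__main__":
    psi = layer_state(n=6, n_layers=10)
    print(np.vdot(psi, psi).real)              # ~1.0 (normalization preserved)
```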
B. Optimization and Expressibility
Random circuit states with a large number L of layers are typically highly entangled. Circuit expressibility, i.e., the ability to represent generic states in the Hilbert space, can be achieved for sufficiently deep circuits. However, as quantum typicality flattens the energy landscape of VQA Hamiltonians [11,23], circuit parameter optimization via local gradient search becomes more difficult with highly entangled circuits [4][5][6][7][8]. A known remedy for the flattened energy landscape is over-parametrization of the variational ansatz [9,10,12,24], which develops multiple steep directions that lead to the robust success of the gradient descent method [11]. This comes with a classical computational cost for storing and manipulating variables.
If a k-local Hamiltonian encodes the task to be solved, so that the corresponding ground state exhibits area-law entanglement scaling, VQAs will perform better when the region of quantum typicality is avoided [4,5], including the saturation of the bipartite entanglement entropy near its maximum value [6,8]. A canonical example showing this relation is a VQA with the one-dimensional Ising Hamiltonian made of nearest-neighbor spin interactions coupled to a transverse magnetic field [25]. Specifically, we will use two similar Hamiltonians that differ only in the direction of the external magnetic field,

H_x = - Σ_i σ^z_i σ^z_{i+1} - g Σ_i σ^x_i ,    H_y = - Σ_i σ^z_i σ^z_{i+1} - g Σ_i σ^y_i ,

and attempt to reach their ground states at g = 1 by optimizing the circuit parameters at different L.
The search for optimal parameters that minimize the energy function will be conducted locally via the Adam optimization algorithm [26]. It is a variant of plain gradient descent that shows faster convergence in many circumstances, adjusting the step size at each iteration based on a moving average of gradients. We will choose its hyperparameters to be (α, β_1, β_2) = (0.05, 0.9, 0.999) in all numerical experiments. In each run, we will allow enough time for convergence towards the ground state by waiting for 5000 steps of parameter updates.
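The parameter update can be sketched as the standard Adam rule with the hyperparameters quoted above. The gradient function is left abstract here (in practice it would be a parameter-shift or finite-difference gradient of the circuit energy); this is an illustrative implementation of the optimizer, not the code used for the experiments.

```python
import numpy as np

def adam_minimize(grad_fn, theta0, steps=5000, alpha=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """Plain Adam descent on a parameter vector theta, given a gradient oracle grad_fn."""
    theta = np.array(theta0, dtype=float)
    m = np.zeros_like(theta)     # first moment (running mean of gradients)
    v = np.zeros_like(theta)     # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

if __name__ == "__main__":
    # Toy quadratic energy in place of a circuit expectation value.
    grad = lambda th: 2.0 * (th - 1.0)
    print(adam_minimize(grad, np.zeros(4), steps=500))   # -> approximately all ones
```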
From the collection of 10 independent, repeated runs for each architecture and depth, an overall trend stands out: a high level of expressibility, measured through the average Renyi entanglement entropy at random parameters saturating around its maximum possible value, has an adverse effect on reaching the ground state of the VQA Hamiltonian. See Figures 2 and 3 for the outputs under the four different choices of quantum gates. The orange/blue curves in their left panels display the energy gap from the ground state before/after the circuit parameter optimization as a function of L. Likewise, the orange/blue curves in the right panels represent the Renyi-2 entropy of the reduced density matrix obtained by partially tracing out n/2 qubits, before/after running the VQA optimization. It is notable that distinct gate choices lead to different entanglement growth and saturation values. For the particular circuit architecture dubbed R_x + CZ + R_y + CZ, where the one-qubit gate alternates between R_x and R_y at each layer, the entanglement curve converges to half the saturated value of the other circuits. For the R_y + CP model, which substitutes the CZ entanglers with CP gates, the entanglement growth becomes considerably slower. Both effects widen the depth window, before the maximum level of entanglement is approached, in which the circuit parameter optimization can likely succeed. Also interestingly, we observe that the R_x + CZ/R_y + CZ models fail to reach the ground state of the Ising Hamiltonian coupled to the external field along the x/y-axis, respectively.
III. OPERATOR SPREADING
Operator spreading serves as a diagnostic of the chaotic dynamics and information scrambling. It has been extensively studied in the context of random unitary circuits, starting from [27]. In this section, we will examine operator spreading as a function of the circuit depth L that can be regarded as the time t in discrete quantum systems.
Any Hermitian operator O(t) acting on an n-qubit system can be written in the Pauli string basis,

O(t) = (1/2^{n/2}) Σ_{j_1,...,j_n} h_{j_1,...,j_n}(t) σ^{j_1}_1 ⊗ ... ⊗ σ^{j_n}_n ,

where h_{j_1,...,j_n}(t) ≡ (1/2^{n/2}) Tr(σ^{j_1}_1 ⊗ ... ⊗ σ^{j_n}_n O(t)). The size of the operator O(t) is defined as the size of the region where O(t) does not commute with an operator σ_a^{(x)} located at position 1 ≤ x ≤ n. It can be quantified by the squared commutator, C_a(x, t) ∝ Tr([O(t), σ_a^{(x)}]† [O(t), σ_a^{(x)}]), which is directly related to the OTOC. We numerically measure it with a = y, where the operator O(0) is the Pauli-x matrix located at x = n/2. Figure 4 visualizes the operator spreading coefficient C_y(x, t) at different times t = L and positions 1 ≤ x ≤ n, averaged over 50 random circuit instances in a system of n = 12 qubits. For comparison of the operator spreading pattern across the different quantum circuit architectures, we also draw in Figure 5 the standard deviation of C_y(x, t) over all 1 ≤ x ≤ n as a function of L. We observe that the R_x + CZ and R_y + CZ unitaries reach saturation around L ≈ 30, while R_y + CP takes L ≈ 60 for complete spreading. Furthermore, there is no complete spreading under the R_x + CZ + R_y + CZ unitaries even with a large number of circuit layers L. These behaviors are all consistent with the entanglement growth pattern of random circuit states illustrated in Figures 2 and 3, showing a clear correlation between the two distinct quantities.

(Figure 5 caption: standard deviation of C_y(x, t) over all 1 ≤ x ≤ n as a function of the circuit depth t; even at a large number of layers there is no complete spreading for the R_x + CZ + R_y + CZ circuit, consistent with the saturation behaviour of the circuit entanglement in Figures 2(c) and 3(c).)
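A brute-force sketch of the operator-spreading coefficient for small n is given below: the initial Pauli-x at site n/2 is evolved in the Heisenberg picture with the full circuit unitary, and its squared commutator with σ_y at each site is evaluated as a Hilbert-Schmidt norm. The overall normalization and the infinite-temperature trace are assumptions for illustration; the paper's precise definition may differ.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def kron_site(op, site, n):
    """Embed a single-qubit operator at a given site of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(op if q == site else I2, out)
    return out

def spreading_coefficient(U, site_x, n, a=Y):
    """C_a(x, t) ~ squared commutator of O(t) with sigma_a at site x (infinite-temperature trace)."""
    O0 = kron_site(X, n // 2, n)             # O(0): Pauli-x in the middle of the chain
    Ot = U.conj().T @ O0 @ U                 # Heisenberg evolution O(t) = U^dagger O(0) U
    sig = kron_site(a, site_x, n)
    comm = Ot @ sig - sig @ Ot
    return 0.5 * np.real(np.trace(comm.conj().T @ comm)) / 2 ** n

if __name__ == "__main__":
    n = 4
    rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
    # A Haar-like random unitary stands in for the circuit unitary at depth t.
    U = np.linalg.qr(rng1.normal(size=(2 ** n, 2 ** n)) + 1j * rng2.normal(size=(2 ** n, 2 ** n)))[0]
    print([round(spreading_coefficient(U, x, n), 3) for x in range(n)])
```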
IV. SPECTRAL DIAGNOSTICS OF QUANTUM CHAOS
The Bohigas-Giannoni-Schmit (BGS) conjecture [13,28] associates the quantum chaotic properties of a system with the correlations between its energy levels. Chaotic Hamiltonians exhibit level correlations in agreement with the predictions of random matrix theory (RMT) [29]. Adjacent eigenvalues show level repulsion and, at larger energy scales, signals of spectral rigidity.

(Figures 2 and 3 caption: the left panels show the energy difference from the ground state before and after optimization of the circuit parameters as a function of the number of circuit layers; the entanglement velocity (the slope of the curves) as well as the saturation plateau depend on the type of quantum gates. The right panels show the Renyi-2 entropy before and after optimization as a function of the number of circuit layers. Panels: (a) R_x one-qubit rotation gate followed by the entangling two-qubit CZ gate; (b) R_y followed by CZ; (c) a sequence of R_x, CZ, R_y, CZ; (d) R_y followed by the entangling two-qubit CP gate.)
In this section, we will apply three quantum chaos diagnostics, i.e., the level spacing distribution, the r-statistics and the spectral form factor, to the modular Hamiltonian (2) of quantum circuits at varying depth L. The first two diagnoses focus on small energy scales and therefore can determine the presence of level repulsion, one of the most robust indications of quantum chaos [21]. On the other hand, the spectral form factor probes larger energy scales and is mostly a quantifier for spectral rigidity.
Particular care needs to be taken in analysing the eigenvalues of the reduced density matrices ρ_A, since they show unavoidable numerical errors. In order to control the effect of these errors, we have adopted a robust phenomenological procedure, which makes use of the fact that all the eigenvalues of ρ_A must be non-negative by definition. Let us denote by λ_min the minimum (negative) eigenvalue among the N ensemble realizations for a given value of L. To make sure that we consider only eigenvalues of ρ_A(L) that are not affected by the numerical precision, we impose a cutoff on the spectra by keeping only the eigenvalues satisfying λ > |λ_min|. Such a cutoff, when applied at small values of L, removes most of the eigenvalues of ρ_A(L), as most of them vanish at small L. However, this is not the case for larger L, when the RMT structure is clearly visible. The procedure ensures that the eigenvalues kept are robust and not significantly affected by the numerical precision. From the significant eigenvalues of ρ_A(L), we compute the energy levels E_i of the modular Hamiltonian H(ρ_A).
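The cutoff procedure can be sketched as follows: all eigenvalues of ρ_A below the magnitude of the most negative eigenvalue observed in the ensemble are discarded as numerically unreliable, and modular energies are obtained from the retained ones as E_i = -ln λ_i. The exact thresholding rule written here is our reading of the text and should be taken as an assumption.

```python
import numpy as np

def modular_energies(eigenvalue_sets):
    """Filter numerically reliable eigenvalues of rho_A and return modular energies E_i = -ln(lambda_i).

    eigenvalue_sets : list of 1D arrays, one spectrum of rho_A per circuit realization.
    """
    all_vals = np.concatenate(eigenvalue_sets)
    negatives = all_vals[all_vals < 0]
    # |lambda_min| over the ensemble sets the numerical-noise floor (assumed reading of the cutoff).
    cutoff = np.abs(negatives.min()) if negatives.size else 0.0
    energies = []
    for lam in eigenvalue_sets:
        kept = lam[lam > cutoff]
        energies.append(-np.log(kept))
    return energies

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spectra = []
    for _ in range(3):
        p = rng.random(8); p /= p.sum()                              # genuine probabilities
        spectra.append(np.concatenate([p, rng.normal(0, 1e-12, 4)])) # plus tiny noisy "zero" modes
    print([len(e) for e in modular_energies(spectra)])               # noise modes are filtered out
```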
In the next subsections, we will consider only the meaningful energy levels E_i obtained from the above procedure.
A. Level Spacing Distribution
Roughly speaking, the level spacing distribution measures the probability density for two adjacent eigenvalues to be at an energy distance s, in units of the mean level spacing ∆. The procedure of normalizing all distances in terms of the local mean level spacing is often referred to as unfolding. We unfold the spectrum of the modular Hamiltonian H(ρ_A) using the following algorithm:
1. Arrange the non-degenerate energy levels E_i of the modular Hamiltonian H(ρ_A) in ascending order.
2. Compute the staircase function S(E) that enumerates all eigenstates of H(ρ_A) whose eigenvalues are smaller than or equal to E.
3. Fit a smooth curve, which we denote by ρ̄(E), to the staircase function. To be specific, we used a 12th-order polynomial as the smooth approximation.
4. Rescale the energy levels E_i as e_i = ρ̄(E_i).
5. By construction, the unfolded energy levels e_i must show an approximately uniform distribution with mean level spacing 1. This can be used to check whether the above procedure has been successful, i.e., by plotting the unfolded levels and checking the flatness of the distribution. (A minimal numerical sketch of this unfolding procedure is given below.)
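A compact version of steps 1-5 is sketched here. The 12th-order polynomial fit follows the text, while the rescaling of the energies before fitting is only a numerical convenience added for conditioning and is an assumption of this sketch.

```python
import numpy as np

def unfold(levels, degree=12):
    """Unfold a modular-Hamiltonian spectrum with a polynomial fit to the staircase function."""
    E = np.sort(np.unique(np.asarray(levels, dtype=float)))   # step 1: non-degenerate, ascending
    staircase = np.arange(1, E.size + 1)                      # step 2: S(E_i) = # levels <= E_i
    Ez = (E - E.mean()) / E.std()                             # rescale for a well-conditioned fit
    coeffs = np.polyfit(Ez, staircase, deg=degree)            # step 3: smooth fit rho_bar(E)
    return np.polyval(coeffs, Ez)                             # step 4: e_i = rho_bar(E_i)

def level_spacings(unfolded):
    """Nearest-neighbour spacings s_i = e_{i+1} - e_i of the unfolded spectrum (mean ~ 1)."""
    return np.diff(np.sort(unfolded))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_levels = np.cumsum(rng.exponential(size=400))        # Poisson-like toy spectrum
    s = level_spacings(unfold(fake_levels))
    print(round(s.mean(), 2))                                  # ~1.0 by construction (step 5 check)
```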
Having obtained the unfolded spectrum, we compute the level spacings s_i = e_{i+1} - e_i and draw the probability density function p(s) for finding two neighbouring eigenvalues separated by a distance s. The level spacing distribution serves as a diagnostic for quantum chaos in Hamiltonian systems. It captures information about the short-range spectral correlations. It thus demonstrates the presence of level repulsion, i.e. whether p(s) → 0 as s → 0, which is a common characteristic of random matrix ensembles and particularly of chaotic Hamiltonians.
The level spacing distribution p(s) for integrable systems follows the Poisson distribution, while for chaotic systems it takes the Wigner-Dyson form

p(s) ∝ s^β exp(-c_β s²) ,          (16)

where β depends on which universality class of random matrices the chaotic Hamiltonian belongs to [29]: β = 1 for the Gaussian Orthogonal Ensemble (GOE), β = 2 for the Gaussian Unitary Ensemble (GUE), and β = 4 for the Gaussian Symplectic Ensemble (GSE). For the different types of circuit unitaries defined in Section II with L = 10, 30 and 250 layers, we collect 500 random circuit states and draw the corresponding level spacing distributions in Figures 6-8. The modular Hamiltonian of shallow circuit states at L = 10 displays a clear departure from the RMT predictions, manifesting a lack of level repulsion. Such a distinction is particularly pronounced for the R_y + CP unitary circuit. However, the emergence of the random matrix structure becomes evident as more circuit layers are stacked. The agreement between the empirical level spacing distributions of random circuit states and the RMT predictions (16) is already quite obvious at L = 30 and further improves at L = 250. Note that different choices of unitary gates lead to the emergence of different random matrix ensembles. We observe GUE for the R_x + CZ + R_y + CZ and R_y + CP circuit unitaries and GOE for the R_x + CZ and R_y + CZ unitaries. Which universality class high-depth random circuit states belong to can be traced from the characteristics of their modular Hamiltonians, or even primarily, from their full density matrices.

(Figure caption fragment: (b) R_y one-qubit gate followed by the entangling two-qubit CZ gate; (c) a sequence of R_x, CZ, R_y and CZ; (d) R_y followed by the entangling two-qubit CP gate. We see a clear correlation between the operator spreading and the entanglement measures of the circuit in Figures 2 and 3.)
We remark that, although the empirical level spacing distribution follows random matrix theory and exhibits chaotic properties at L = 30, the entanglement entropy and operator spreading coefficient have not reached saturation and the VQA optimization works smoothly. This points to a difference between information scrambling measures based on the eigenvalues of the modular Hamiltonians and those based on their spacings.
B. r-statistics
The previous analysis of the level spacing distribution depends on unfolding the energy spectrum, which is only heuristically defined and carries some arbitrariness. Therefore, it is desirable to have additional diagnostics of quantum chaos that bypass the unfolding procedure. The r-statistics, first introduced in [30], is such a diagnostic tool for short-range correlations, defined without the need to unfold the spectrum.
Given the level spacings s_i, defined as the differences between adjacent eigenvalues · · · < E_i < E_{i+1} < · · · without unfolding, one defines the ratios

r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) ,          (17)

which are by definition positive numbers between 0 and 1. The ratios r_i take very specific values if the energy levels are the eigenvalues of random matrices: for matrices in the GOE, GUE and GSE, the mean ratios are ⟨r⟩ ≈ 0.53590, ⟨r⟩ ≈ 0.60266 and ⟨r⟩ ≈ 0.67617, respectively. The values become typically smaller for integrable Hamiltonians, approaching ⟨r⟩ ≈ 0.38629 for a pure Poisson process [31]. From their very definition, we see that the ratios (17) do not require unfolding of the spectrum, since their dependence on the local density of states cancels when taking the ratio between spacings. Moreover, each r_i depends on just three adjacent energy levels, rendering it a sharp microscopic probe of the chaotic/integrable behavior in a small cluster of spectral values.
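The ratios in Eq. (17) can be computed directly from the raw (non-unfolded) spectrum, as sketched below; the reference values for Poisson and GOE statistics are those quoted in the text.

```python
import numpy as np

def r_statistics(levels):
    """Mean adjacent-gap ratio <r> = <min(s_i, s_{i+1}) / max(s_i, s_{i+1})>; no unfolding needed."""
    E = np.sort(np.asarray(levels, dtype=float))
    s = np.diff(E)
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    poisson_like = np.cumsum(rng.exponential(size=2000))        # uncorrelated levels
    H = rng.normal(size=(500, 500)); H = (H + H.T) / 2          # a GOE-like random matrix
    goe_like = np.linalg.eigvalsh(H)
    print(round(r_statistics(poisson_like), 3),                  # ~0.386 (Poisson)
          round(r_statistics(goe_like), 3))                      # ~0.53  (GOE)
```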
Here we use the r-statistics to study the chaotic properties of the entanglement spectra as a function of the number of circuit layers L. Under equal partitioning of n = 12 qubits and with L = 10, 30, 250 layers, the numerical values of {r_i} are shown in Figure 9, where we observe the transition from Poisson-like to RMT-like values. The low-lying eigenstates of the reduced density matrix are more prone to keep their integrable behavior until a sufficient number of entangling layers, L ≈ 30, is reached, where we find the universal GOE/GUE chaotic structure in agreement with the level spacing distribution analysis.
C. Spectral Form Factor
The spectral form factor (SFF) is the Fourier transform of the spectral two-point correlation function [29]. It can be viewed as a long-range observable, since it probes the agreement of a given unfolded spectrum with RMT at energy scales much larger than the mean level spacing. In particular, the SFF can detect the presence of spectral rigidity and is thus a probe of quantum chaos complementary to the level spacing distribution and the r-statistics, which are short-range observables. Formally, one defines the analytically continued partition function Z(β + it) = Tr e^{-(β+it)H}, and the spectral form factor is |Z(β + it)|² / Z(β)² [32]. For a concrete numerical evaluation, we will take the following expression as a robust definition of the spectral form factor [33]:

K(τ) = (1/Z) ⟨ | Σ_i ρ(e_i) e^{-i 2π e_i τ} |² ⟩ ,

where e_i is the unfolded spectrum of the modular Hamiltonian. The normalization factor Z = Σ_i |ρ(e_i)|² is chosen to ensure that K(τ) ≈ 1 in the limit τ → ∞. The bracket ⟨· · ·⟩ denotes the ensemble average over distinct random circuit realizations. ρ(e_i) is a Gaussian filter [34], ρ(e_i) = exp(-(e_i - ē)² / (2Γ²)), where ē and Γ² denote the mean energy and the variance of each unfolded spectrum. Its purpose is to guarantee that the SFF is mainly affected by eigenvalues located around the mean value of each unfolded spectrum. The SFF can be computed analytically for the Gaussian ensembles (GOE and GUE). In the thermodynamic limit it reads [29]

K_GOE(τ) = 2τ - τ ln(1 + 2τ) ,    K_GUE(τ) = τ ,          (22)

for 0 < τ < 1, while K(τ) = 1 for τ ≥ 1. The constancy for τ ≥ 1 simply comes from the discreteness of the spectrum and carries no information about spectral correlations. In particular, since the mean level spacing ∆ is by construction equal to 1 in the unfolded spectrum, the relevant time scale at which the discreteness of the spectrum becomes relevant is τ ≈ 1/∆ ≈ 1. This scale is usually called the Heisenberg time, τ_Heis. The emergence of the random matrix structure in spectral correlations must be investigated for times shorter than the Heisenberg time, τ ≤ 1. The timescale that characterizes the ergodicity of a dynamical system is called the Thouless time, τ_Thoul, defined as the time when the SFF of the dynamical system converges to the universal RMT prediction. More concretely, it is indicated by the onset of the universal linear ramp as in (22). One expects τ_Thoul to decrease with increasing system size in ergodic systems, approaching 0 in the thermodynamic limit. In contrast, non-ergodic systems show the absence of the linear ramp, i.e. τ_Thoul ∼ τ_Heis ∼ 1, or, more generally, an unclear scaling of τ_Thoul with the system size.
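The filtered spectral form factor can be estimated numerically as sketched below, following the general form described above; the precise filter width and normalization are assumptions made for this illustration.

```python
import numpy as np

def sff(unfolded_spectra, taus, eta=0.5):
    """Ensemble-averaged, Gaussian-filtered spectral form factor K(tau).

    unfolded_spectra : list of 1D arrays of unfolded levels e_i (one per circuit realization)
    taus             : array of times (in units of the Heisenberg time, tau ~ 1)
    eta              : filter width in units of the spectral standard deviation (assumed value)
    """
    K = np.zeros_like(taus, dtype=float)
    for e in unfolded_spectra:
        e = np.asarray(e, dtype=float)
        ebar, gamma = e.mean(), e.std()
        rho = np.exp(-((e - ebar) ** 2) / (2.0 * (eta * gamma) ** 2))   # Gaussian filter
        Z = np.sum(np.abs(rho) ** 2)                                    # normalization, K(tau->inf) ~ 1
        phases = np.exp(-2j * np.pi * np.outer(taus, e))                # e^{-i 2 pi e_i tau}
        K += np.abs(phases @ rho) ** 2 / Z
    return K / len(unfolded_spectra)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spectra = [np.sort(rng.random(200)) * 200 for _ in range(50)]        # toy unfolded spectra
    taus = np.linspace(0.05, 2.0, 40)
    print(np.round(sff(spectra, taus)[-5:], 2))                          # values near 1 at late times
```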
Following the above discussion, we computed the empirical SFF for different circuit architectures with L = 10, 30 and 250 layers, where the ensemble average is replaced with averaging over 50 random circuit samples. See Figure 10. Inspecting the Thouless time as expanding the system from n = 12 to 18 reveals clear indications of ergodicity breaking at L = 10, but an expected ergodic behavior for L = 30 and 250 layers. They are consistent with the conclusion obtained through the short-range observables in previous subsections.
The circuit reduced density matrix is a random matrix by construction. And thus, it may not be surprising that the modular Hamiltonian eigenspectrum exhibits chaotic properties of RMTs. It is interesting, however, to trace the reasons for the GOE and GUE structures to the form of the quantum gates. While random circuit states should generically be in the GUE class, the choice of the gates may generate a modular Hamiltonian whose matrix elements are not complex-valued, but rather real or pure imaginary. In such cases, e.g., for R x + CZ and R y + CZ unitaries, the eigenspectrum of the corresponding modular Hamiltonian must belong to GOE.
Note that although the level spacing diagnostics show apparent RMT properties at L = 30, the local search of optimal circuit parameters still operates well. It indicates that unlike the diagnostic measures based on eigenvalues of the modular Hamiltonian, e.g., entanglement entropies and operator spreading coefficients, the quantum chaos diagnostics constructed from the level spacing of eigenvalues are not precisely correlated with the efficiency of optimizing control variables.
V. DISCUSSION AND OUTLOOK
We analyzed the universal chaotic properties of random quantum circuits at different depths and how they correlate with the optimization performance of the control variables. Our main focus was on the operator spreading and the level spacing distribution of the eigenspectrum of reduced density matrices. We found that the random circuit wavefunction exhibits the chaotic structure of Gaussian matrix ensembles, which can be either GOE or GUE depending on the type and arrangement of the unitary gates.
By changing the direction of the magnetic field coupled to the Ising Hamiltonian used in the VQE experiments, we observed the failure of specific GOE-type variational circuits to reach the ground state. This suggests that the expressibility of variational circuits is not determined solely by their capability of creating highly entangled states.
Both chaos and entanglement follow from the eigenspectrum structure of the reduced density matrix. However, while entanglement and operator spreading are captured by quantities constructed from the eigenvalues themselves, other measures of quantum chaos instead relate to their level spacings. We found that the quantum chaos diagnosed by the eigenvalue spacings typically emerges with fewer circuit layers than are needed to come close to the maximum entanglement of random circuit states, which hinders an effective search for optimal circuit variables [4-8].
This study points to a mismatch between two distinct definitions of quantum chaos, i.e., the BGS conjecture versus the operator spreading measured by the OTOC. Note that the random circuit exhibits the BGS-type chaotic structure before reaching the complete spreading of operators in the OTOC. To the best of our knowledge, this is the first example of a genuine many-body setup in which such a discrepancy is observed; previous studies dealt only with single-body examples with classical counterparts [35-41].
As for future studies, it would be interesting to explore the connection between the graph structure of variational circuits, their effectiveness as eigensolvers of distinct Hamiltonians, and the emergence of quantum chaos in random circuit states. A popular measure of information mixing is the k-design: a state ensemble that cannot be distinguished from the Haar-random ensemble by averages of polynomials of degree not higher than k. It would be useful to investigate, within the framework of random quantum circuits, the relationship between the k-design structure and the quantum chaos measures that we analyzed. Some results in this direction have been reported in [42].
Another intriguing line of investigation is to study the non-stabilizerness of variational circuits, often referred to in the literature as magic and regarded as a source of quantum advantage in many computing problems [43,44]. An explicit measure of magic was recently proposed in [45], and its relation with quantum chaos was studied in [46]. It would be interesting to better investigate the role of magic in VQA problems, following [47].

Figure caption: In all cases, the SFFs are very different from the RMT predictions at 10 layers, while starting from 30 layers the agreement is excellent. In particular, the Thouless time, τ_Thoul, clearly decreases with increasing system size. As mentioned in the main text, such behavior is a signal of the ergodic character of the circuits under investigation.
"year": 2022,
"sha1": "eaf58545d6b066c1654a6a03420ba6d8c53dd835",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e8f71037dd5558906058d4a0792a25688e0e94e9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249663727 | pes2o/s2orc | v3-fos-license | Treatment completion among justice-involved youth engaged in behavioral health treatment studies in the United States: A systematic review and meta-analysis
Justice-involved youth (JIY) have high rates of behavioral health disorders, but few can access, much less complete, treatment in the community. Behavioral health treatment completion among JIY is poorly understood, even within treatment studies. Measurement, reporting, and rates of treatment completion vary across studies. This systematic review and meta-analysis synthesizes the literature on rates of treatment completion among JIY enrolled in research studies and identifies potential moderators. After systematically searching 6 electronic databases, data from 13 studies of 20 individual treatment groups were abstracted and coded. A meta-analysis examined individual prevalence estimates of treatment completion in research studies as well as moderator analyses. Prevalence effect sizes revealed high rates of treatment completion (pr = 82.6). However, analysis suggests a high likelihood that publication bias affected the results. Treatment groups that utilized family- or group-based treatment (pr = 87.8) were associated with higher rates of treatment completion compared to treatment groups utilizing individual treatment (pr = 61.1). Findings suggest that it is possible to achieve high rates of treatment completion for JIY, particularly within the context of family- and group-based interventions. However, these findings are limited by concerns about reporting of treatment completion and publication bias.
Introduction
Approximately 50% of youth who have been arrested, are on probation post-adjudication, or are otherwise involved with the justice system (justice-involved youth; JIY) have a mental health disorder [1,2]. More than a third meet diagnostic criteria for a substance use disorder (SUD) [2]. Behavioral health disorders (i.e., both mental illness and SUD) are associated with an increased likelihood of recidivism and additional involvement with the justice system [3,4]; further justice system involvement, in turn, is associated with higher rates of behavioral health disorders [2]. While treatment completion has been associated with positive health and life outcomes (i.e., employment, housing) in substance use treatment [5] and with reduced recidivism among JIY specifically [6], JIY often do not complete available treatment for behavioral health disorders [7,8], even in the context of well-resourced treatment studies [9-11]. It is critical to understand factors involved in treatment completion among JIY.
Data suggest that JIY and their families experience challenges in completing treatment for behavioral health disorders [12]. For example, 2017 data from the Substance Abuse and Mental Health Services Administration indicate that only 45% of youth aged 12-20 years who were referred to publicly funded substance abuse treatment by a criminal justice organization successfully completed treatment [13]; most dropped out of treatment early or were discharged by the service provider due to lack of compliance with treatment. A recent cross-sectional analysis of administrative data from the Florida Department of Juvenile Justice suggests that, although 32% of the sample met criteria for SUD treatment, only 11.5% completed a SUD treatment program [14].
While administrative data may show the extent of the problem, they provide little understanding of how to address the problem. However, treatment studies for JIY may provide more insight. These studies often provide high-quality, evidence-based care, or are testing new interventions. In these contexts, researchers often make considerable efforts to help participants and their families maintain participation [15], and may therefore represent the best-case scenario for measuring treatment completion among JIY. A better understanding of the factors that contribute to successful completion in these contexts may help inform future research and practice.
Influences on treatment completion among JIY
Existing research suggests a number of possible moderators of behavioral health treatment completion among JIY. A recent systematic review examined empirical evidence on the effects of three types of interventions designed to improve engagement in behavioral health treatment among adolescents (not exclusively focused on JIY): systems-level (e.g., offering treatment services in-home), family-level (e.g., informing family members about treatment topics), or individual-level (e.g., utilizing contingency management interventions) [16]. Findings suggested that any type of intervention designed to increase behavioral health treatment engagement has positive influences on attendance at varying stages of treatment. Type of treatment (i.e., group, individual, family) may also have an influence on the extent to which youth are able to engage in and complete treatment; existing research suggests that family-based treatments are associated with greater engagement in substance use treatment among adolescents [17].

JIY who are members of ethnic and/or racial minority groups may be less likely to have access to or utilize mental health treatment [8]; research also finds that JIY of color have lower rates of treatment completion [14]. Some have argued that these disparities might reflect a difference in needs; for example, Black JIY are at an increased risk of experiencing poly-victimization, defined as having experienced many different types of traumatic victimization in their lifetime including assault, family/community violence, physical or sexual abuse, and trauma from racially driven encounters [18,19]. JIY of color are also more likely to experience a wide array of comorbid mental disorders [20], further demonstrating their need for high-quality treatment. Researchers have argued that current treatment options may not be properly poised to address the complex stressors that JIY of color experience [19,21], thus discouraging engagement with treatment. Despite this, many youth of color do not have access to quality treatment options due to the fact that these youth tend to live in poorer communities with fewer resources, a systematic barrier [20]. Other barriers unique to engaging JIY of color include cultural mistrust of healthcare services rooted in historical oppression [22], greater logistical challenges (e.g., transportation, insurance issues) [20], and difficulties engaging family members in treatment (e.g., language barriers, competing demands) [20].
Low rates of treatment completion among JIY may be surprising, since JIY often enter treatment because of legal mandates (e.g., as a condition of probation). However, the effect of juvenile drug courts, one of the most common examples of mandated treatment, has been highly variable, especially in comparison to more commonly successful adult drug courts [23]. Current evidence suggests that most juvenile drug courts minimally engage parents and youth; a meta-analytic review found that slightly more than half of all youth who initially enroll in a juvenile drug court program end up graduating and that youth who enroll but do not graduate (i.e., are terminated unsuccessfully) do not appear to benefit from participating in the program based on later measures of substance use and recidivism [24]. Given these findings, researchers conclude that it is necessary to implement additional efforts to engage youth in treatment, beyond or instead of court mandates [24]. Especially given the increase in diversion of youth away from the justice system, it is critically important to understand how to constructively engage youth without the force of court mandates [25]; the first step in this process is understanding complexities in current rates of treatment completion among JIY and identifying potential moderators.
Purpose of Study
Longitudinal studies of behavioral health treatment among JIY report inconsistent findings, with wide ranges in rates of treatment completion [26][27][28] even in the context of additional resources available to researchers to help engage participants. The current study aims to conduct a meta-analysis to quantify treatment completion among JIY enrolled in behavioral health treatment studies.
In addition, the current study aims to determine whether demographic variables (i.e., gender, race/ethnicity) and methodological variables (i.e., intervention focus, type of treatment, the presence or absence of interventions to increase treatment engagement) moderate the prevalence of treatment completion among JIY.
Methods
Procedures and results are reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement [29,30], which is available in Table 1.
Inclusion and Exclusion Criteria
Studies were included if: 1) they were available in English, 2) the sample included youth who are involved in the juvenile justice system and who resided in the community at the time of the study (e.g., youth on probation, youth who have been arrested and then diverted from the justice system through a diversion program), 3) the study included an assessment to determine eligibility for treatment and the provision of a behavioral health treatment, and 4) the authors reported criteria for treatment completion. Studies in which all JIY were eligible (i.e., the primary goal of services was to prevent recidivism) were excluded; however, studies targeting behavioral problems (i.e., youth adjudicated through drug court, youth who sexually offend) indicative of a specific behavioral health disorder being treated were included. Book chapters and dissertations were included in literature searches. Studies were excluded if they were cross-sectional, were primarily studies of behavioral health service utilization (i.e., assessing whether youth access treatment not provided as part of the study), or represented evaluations of treatment services in which only participants who completed treatment were included in analyses. Studies that combined youth who were and were not JIY or included both adult and juvenile participants were excluded unless information for only the JIY participants could be obtained.
Table 1. PRISMA checklist (items 10-23): data collection process, data items, risk of bias in individual studies, summary measures, synthesis of results, risk of bias across studies, additional analyses, study selection, study characteristics, risk of bias within studies, results of individual studies, synthesis of results, risk of bias across studies, and additional analysis, together with the manuscript pages, tables, and figures where each item is reported.
terminology is not standardized [48]. Third, subject matter experts were contacted (i.e., authors of published articles on studies funded by JJTrials) to inquire about data from unpublished studies, but no additional studies were obtained from these contacts. Finally, we created email alerts in Google Scholar for the above search; no newly eligible studies were identified prior to manuscript submission. After duplicates were removed, we reviewed the title and abstract of 2,700 articles with the full text of 253 articles being reviewed (see Fig. 1 for PRISMA flow chart and article exclusions). Thirteen studies were coded for inclusion in the data analysis.
Coding
We coded studies according to a fixed coding protocol, following guidelines from Lipsey and Wilson [49] and Card [50]; variables included study characteristics, primary outcomes of interest, moderators, and study quality. Each study was coded independently by the first, second, and third authors; after coding, the authors met to identify and resolve discrepancies in coding.
Prevalence Estimates
We coded the number of JIY who began treatment and the number who completed treatment according to the study's stated treatment completion criteria. This prevalence rate was coded as treatment completion. If studies reported sufficient data on multiple treatment conditions, separate effect sizes were estimated for each treatment condition. For RCTs where "treatment completion" criteria were not specified for the control condition (usually "services as usual"), the treatment completion prevalence rate was only coded for the experimental condition(s).
Racial and ethnic minority participants
The prevalence of racial and ethnic minority participants was coded as a percentage for each study or treatment group and included in analysis of moderators.
Intervention Focus
We assessed intervention focus as a moderator; this variable was coded as 1 = substance use disorders and 2 = other behavioral health for analyses.
Type of Treatment
Type of treatment was coded as 1 = individual, 2 = group, and 3 = family for analyses. When treatments included multiple components (e.g., individual sessions with the youth as well as sessions with youth and parent(s), individual therapy with group skills training classes), it was coded according to the study's identified type of treatment (e.g., a family-based treatment) and the highest level of clinical intensity in the treatment. Thus, if a treatment included both individual and family components and was described as family-based treatment, it was coded as 3 = family. If a treatment included individual therapy sessions and less-frequent group skill-building classes, it was coded as 1 = individual.
Interventions to Increase Treatment Engagement
The presence or absence of reported interventions to increase treatment engagement was coded as a moderator (1 = present, 2 = not present). Interventions to increase engagement commonly included contingency management programs, engaging family in the treatment planning process, providing treatment in the youth's home or other convenient locations, providing transportation to treatment, or a comprehensive assessment of barriers to engagement and subsequent problem solving that focuses on the whole ecology of youths and families, as is standard in multisystemic therapy (MST). Basic phone or text-message reminders of treatment appointments, as are commonly provided in behavioral healthcare, were not coded as interventions to increase engagement.
Treatment Mandates
Whether or not youth were mandated by a court to participate in outpatient treatment was coded as the percentage of youth in the study who were required to participate in outpatient treatment.
Primary Study Quality
Standardized measures for assessing primary study quality in systematic reviews and meta-analyses typically focus on specific study designs (e.g., quasi-experimental studies, noncontrolled longitudinal studies) and are geared toward assessing risk of bias in the study's main outcome variable. Because we were primarily interested in judging the quality of measures of attendance and attrition (rather than bias in outcome) across studies with varied designs, we developed a checklist after reviewing existing measures of study quality [51-53] as well as review articles on the measurement of treatment engagement [16,17,54,55]. The checklist was composed of 5 yes/no questions for all primary studies and 2 additional questions for studies with randomized designs; the checklist is available as a Supplementary file.
Data Analysis
Effect Size Calculations

All analyses were performed in R (Version 4.1.3) with the packages metafor and meta [56-58]. To pool proportional data, binary outcomes were combined in the form of proportions with a generalized linear mixed model (GLMM) using the logit link function, with Clopper-Pearson intervals to stabilize the variance [59,60]. Simulation studies indicate that the GLMM provides the most accurate estimate in proportional meta-analysis because GLMM models do not require data transformations within studies [61,62], fully account for uncertainties, and produce confidence intervals with satisfactory coverage probabilities [63]. We implemented all parameters via the maximum likelihood approach. Random-effects models were selected to calculate effect sizes because they represent a more conservative estimate of mean prevalence and account for heterogeneity between studies. We examined forest plots to identify potential outliers, i.e., studies whose individual 95% CI did not overlap with the 95% CI for the mean effect [64,65]. Potential outliers were removed from the calculation of effect size if overall prevalence rates were affected [65]. Heterogeneity of the studies was assessed using Cochran's Q-test, which tests for the presence of heterogeneity across studies [66], and Higgins' I², which describes the percent of variation in prevalence across studies due to heterogeneity rather than chance [67]. Heterogeneity was defined as low if Q was below the critical chi-square value and I² < 25%; moderate if Q exceeded the critical chi-square value and I² was around 50%; and high if Q exceeded the critical chi-square value and I² > 50%. Statistical significance was defined as P < 0.05 [67].
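As an illustration of this setup (not the authors' actual code), a GLMM-based proportional meta-analysis of treatment completion could be run with the meta package roughly as follows; the data frame and its column names are hypothetical.

```r
library(meta)

# Hypothetical input: one row per treatment group
dat <- data.frame(
  study     = c("Study A", "Study B", "Study C", "Study D"),
  completed = c(40, 25, 70, 18),   # youth completing treatment
  started   = c(50, 45, 80, 30)    # youth initiating treatment
)

# Random-effects logit GLMM, maximum-likelihood tau^2, Clopper-Pearson CIs per study
m <- metaprop(event = completed, n = started, studlab = study, data = dat,
              sm = "PLOGIT", method = "GLMM", method.tau = "ML", method.ci = "CP")

summary(m)  # pooled prevalence, Cochran's Q, and I^2
forest(m)   # forest plot of individual and pooled estimates
```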
Moderator Analyses
If significant heterogeneity of individual prevalence estimates was found via the two criteria, mixed-effect meta regression was used to attempt to explain the between-study heterogeneity based on study-level fixed-effect covariates (i.e., subgroups defined by categorical covariates or continuous covariates). Specifically, candidate variables were tested to identify significant moderators (i.e., candidate variables that account for a significant proportion of variability in individual prevalence across studies) in nonlinear mixed-effects models, such that random-effect terms were used to combine studies within each subgroup, and fixed-effect terms were used to combine subgroups and yield the overall effect [68]. Study-to-study variance (T 2 ) was not assumed to be the same for all subgroups; the value was computed within subgroups [69]. The Q between statistic (analogous to analysis of variance) tested categorical variables to report between-study variance explained by moderators. We calculated mean effect sizes within all variable levels [70]. Variables were considered moderators if the mixed (random)-effects model indicated statistical significance (P < 0.05) on the Q between statistic [71]. Interactions among moderator variables were not tested due to insufficient power.
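One way such a subgroup (moderator) comparison might be coded is sketched below with metafor; the `type` column is hypothetical, and fitting a single model with a moderator term assumes a common between-study variance, whereas the analysis described above computes the study-to-study variance within each subgroup.

```r
library(metafor)

# Hypothetical data: completion counts with a treatment-type moderator
dat <- data.frame(
  completed = c(40, 25, 70, 18, 55, 33),
  started   = c(50, 45, 80, 30, 60, 40),
  type      = c("family", "individual", "group", "individual", "family", "group")
)

# Mixed-effects logit GLMM: does treatment type explain between-study heterogeneity?
mod <- rma.glmm(measure = "PLO", xi = completed, ni = started,
                mods = ~ factor(type), data = dat)
mod  # the omnibus moderator (QM) test plays the role of the Q-between statistic
```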
Publication Bias
Publication bias was assessed with funnel plot symmetry both visually and statistically, using Egger's linear regression method to assess any relationship between sample size and prevalence [72]. If significant funnel plot asymmetry was present, the trim-and-fill method was used to determine the number of missing studies that would be needed to correct the asymmetry [73]. An additional quantitative assessment of bias used Begg's rank method [74] to identify relationships between effect sizes and sample sizes. Low publication bias was deemed present if funnel plots were visually symmetrical and the asymmetry tests were not statistically significant. Finally, given the gaps in the ability to assess publication bias in proportional meta-analyses using established statistical methods, we offer qualitative assessments of the role of publication bias in these analyses.
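These diagnostics are available in the meta package; the snippet below is purely illustrative (hypothetical counts, and default inverse-variance pooling rather than the GLMM, since trim-and-fill and the regression tests operate on study-level logit estimates), and with only a handful of studies such tests are underpowered.

```r
library(meta)

dat <- data.frame(completed = c(40, 25, 70, 18, 55),
                  started   = c(50, 45, 80, 30, 60))

m <- metaprop(event = completed, n = started, data = dat,
              sm = "PLOGIT", method.ci = "CP")

funnel(m)                                        # visual check of funnel-plot symmetry
metabias(m, method.bias = "linreg", k.min = 5)   # Egger's linear regression test
metabias(m, method.bias = "rank",   k.min = 5)   # Begg's rank correlation test
trimfill(m)                                      # impute putatively missing studies
```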
Description of Included Studies
Altogether, 13 studies [9-11,26,28,75-82] representing 20 treatments (e.g., services as usual, Multisystemic Therapy) met inclusion criteria (see Table 3). Three of the 13 studies included services as usual treatment conditions in which they did not specify treatment completion criteria (see Table 4 for details); adolescents in these groups were not included in the tables or in the descriptions below. All studies were peer-reviewed published articles and were located in the United States. Complete descriptions of all 13 studies are included in Tables 3 and 4.
Database search strings: ClinicalTrials.gov: ("juvenile justice" OR "juvenile delinquent") AND ("treatment" OR "intervention" OR "therapy") | ("Mental health disorder" OR "substance use disorder"); SCOPUS: (("juvenile justice" OR "juvenile delinquent") AND ("mental health treatment" OR "substance use treatment" OR "behavioral health treatment" OR "mental health intervention" OR "substance use intervention" OR "behavioral health intervention")); Web of Science: (ALL = (adolescent OR youth OR juvenile) AND ALL = ("juvenile justice" OR "juvenile delinquent" OR probation OR diversion OR diverted) AND ALL = ("mental health treatment" OR "substance use treatment" OR "behavioral health treatment" OR "mental health intervention" OR "substance use intervention" OR "behavioral health intervention" OR therapy)) AND LANGUAGE: (English).
The 20 eligible treatment conditions included a total sample size of 1,269 adolescents, with a mean sample size of 74.7 (SD = 70.7, range = 24-320). Samples averaged 15.2 years of age, were predominantly male (80.2%), and were predominantly from minority ethnic or racial populations (68.1%). Studies were primarily focused on substance use disorders (n = 10, 76.9%). Individual treatment conditions utilized different types of treatment, categorized as family (n = 9, 52.9%), individual (n = 4, 23.5%), or group (n = 4, 23.5%). When treatment length and completion criteria were reported, treatments were designed to last an average of 18.4 weeks (SD = 8.6).
Studies employed a wide range of strategies to increase treatment completion; of 20 individual treatment groups (including services as usual and experimental treatment conditions, which often used different treatment engagement strategies between groups), 11 (65%) reported employing interventions to increase treatment engagement; see Table 4. Six treatment groups provided services in locations convenient to the youth or family (e.g., home, school, community spaces). Four treatment groups offered financial assistance with transportation to treatment, three made on-call therapists available to families at all times, and four included family in treatment planning (e.g., contacting family each week to describe the group session topic, encouraging regular contact between therapists and families). Two treatment groups modified services to be culturally adapted, e.g., by recruiting providers from the local community. Finally, only one treatment group utilized a contingency management intervention to increase youth attendance in treatment. Nine treatment groups did not specify any treatment engagement strategies. See Table 4 for descriptions of engagement strategies used in each study.
Supervision or JIY court involvement varied widely both between and within studies; youth were on probation, arrested and entering treatment pre-adjudication, in formal diversion programs, or enrolled in drug court. In three studies (23.1%), participants were recruited entirely from juvenile drug courts. Whether or not youth had been mandated to participate in treatment also varied widely both within and between studies; 7 studies (53.8%) did not report information on treatment mandates; see Table 4.
Treatment Completion
A total of 13 studies yielded 20 individual prevalence estimates; see Fig. 2 for the forest plot of effect sizes and Table 5 for associated model statistics. Although there were two significant outliers from the mean prevalence estimate [9,77], neither of these individual estimates had a significant effect on the overall estimate when removed from analyses, so individual estimates from all studies were retained. The main effect size for treatment completion was pr = 82.6 (see Table 5).
Moderator Analyses
Subgroup analyses revealed that treatment completion was higher in studies that provided family-or group-based treatment (pr = 87.8, CI = 78.9, 93.2, k = 14) compared to studies providing individual treatment (pr = 61.1, CI = 30.4, 84.9, k = 6); see Fig. 2 for a forest plot of effect sizes by type of treatment and Table 5 for full results from moderator analyses. No other significant moderators were identified.
Primary Study Quality
Studies included a range of designs: 9 studies (52.9%) were RCTs (some of these employed a cluster-randomized design), one (7.7%) was a quasi-experimental trial with a control group, and three (23.1%) were noncontrolled longitudinal trials of treatment effectiveness (see Supplemental Table). All but one study conducted intent-to-treat analyses, in which all participants who entered treatment were included in outcome analyses. Only three studies (23.1%) provided a detailed description of youth who did not complete treatment, including reasons for attrition and demographic characteristics of those who did not complete treatment.
Publication bias
Visual assessment of the funnel plot for treatment completion suggests moderate publication bias (see Fig. 3 for the funnel plot with unpublished studies imputed). Quantitative assessments using Begg's rank correlation analysis trended toward significance (p = 0.08), while assessment of Egger's test of the intercept was statistically significant (p < 0.01), suggesting some publication bias. We used the trim-and-fill method to determine the number of missing studies that would be needed to correct the asymmetry (see Fig. 3); this method suggests 5 imputed studies; however, current literature indicates that substantial heterogeneity in effect sizes (as is present in this analysis) seriously impairs the power of the trim-and-fill method, since the plot's asymmetry may be confounded by heterogeneity [83][84][85]. Therefore, these results should be interpreted with caution. Although established statistical methods for assessing publication bias in meta-analysis are traditionally used to examine bias in treatment effects, these results may also have implications for examining bias in study completion rates.
In particular, studies with low completion rates may be likely to have fewer positive treatment effects, increasing the probability that results will not be published.
Discussion
This systematic review and meta-analysis represent the first attempt to quantitatively review and synthesize behavioral health treatment completion data among JIY enrolled in treatment studies. This meta-analysis aimed to derive a more accurate estimate of treatment completion among JIY and better understand issues related to treatment engagement strategies and barriers to treatment completion. Findings indicated relatively high rates of treatment completion, with 82.6% of youth who initiated treatment completing treatment according to the treatment's specified completion criteria. Our findings suggest that when some systemic barriers to treatment (i.e., low treatment availability, difficulty identifying appropriate treatment, treatment cost) are removed by providing treatment within a study, JIY may not be any more difficult to engage in treatment than the general adolescent population.

One aim of this investigation was to begin to disentangle the influence of barriers to behavioral health treatment from the challenge of helping JIY complete treatment. That is, when behavioral health treatment is available and accessible (i.e., where treatment is provided through the study and JIY have been determined to be in need of treatment), is it still difficult for JIY to complete treatment? This does not seem to be true in these studies, as the majority did complete treatment.

Another aim of this investigation was to identify moderators of treatment completion. Type of treatment (i.e., family, group, individual) was a significant moderator, with treatments that provided family- or group-based treatment having higher rates of treatment completion compared to treatment groups that provided individual treatment. This is consistent with prior research findings that family-based treatments are associated with greater engagement in care [17,86]. Some have argued that the greater engagement associated with family and group-based interventions for adolescents may relate to the social components inherent to these types of interventions [87]. Adolescence, marked by many important socio-developmental milestones, typically includes dynamic changes to the role and identification of supportive others in their lives [16,88,89], and this may be particularly true for JIY [90]. For adolescents who may not have a strong, existing support network, group- and family-based interventions may be one desirable method of facilitating or improving these social connections, making them more engaging and desirable. These types of interventions may align more with helping adolescents meet important developmental milestones than individual modalities might. Even further, qualitative literature examining perspectives from youth, caregivers, treatment providers, and juvenile justice personnel consistently suggests that caregiver involvement is essential to achieve youth uptake in treatment and maintain engagement [91,92]. Thus, family-focused treatments may increase rates of treatment completion by increasing caregiver involvement and support.
Influence of Publication Bias and Study Quality
Analyses indicated some likelihood that publication bias has influenced the results; funnel plots show imputed studies with lower effect sizes than those reported by published studies, which suggests rates of treatment completion may be lower. While funnel plots should be interpreted with caution given the high level of heterogeneity present in the data [93,94], 6 studies that otherwise met eligibility criteria for inclusion in these analyses did not report full data on treatment completion and therefore could not be included in these analyses. Further, the majority of studies did not sufficiently describe withdrawals and dropouts, making it difficult to identify reasons or demographic characteristics predictive of treatment noncompletion.
Moderators not Supported by the Current Investigation
Hypotheses regarding the intervention focus (i.e., substance use disorders, conduct disorders, etc.) and racial or ethnic minority participants were not supported as moderators. In the case of intervention focus, the majority of studies (n = 15, 71.4%) were focused on addressing substance use disorders or problematic substance use, so it is possible that our analyses were underpowered to detect differences between studies with different intervention foci. It is less likely that moderator analyses of the prevalence of racial or ethnic minority participants were underpowered, given that there was a wide range of study participants who identified as racial or ethnic minorities. Previous research suggests that minority youth face significant barriers to accessing behavioral health treatment but fewer barriers when that treatment is available and accessible [20]. For example, a systematic review of literature on referrals to behavioral health services from the juvenile justice system [95] finds that a majority of the 26 articles reviewed reveal at least some evidence of racial disparities in decisions to refer youth. Thus, disparities in "utilization" may be more appropriately named disparities in access.
The inclusion of interventions to increase treatment engagement was not a significant moderator of treatment completion. Given the broad heterogeneity in treatment engagement interventions provided by studies included in this analysis, it may be important to conduct additional research assessing the success of such interventions.
Another potentially important moderator that should be examined in future studies is mandated treatment (i.e., when youth are required by the court to seek outpatient behavioral health treatment). We did not examine this as a moderator in the current study since only 6 studies reported information about whether or not youth were court-mandated to engage in behavioral health treatment. Of the 6 that reported on treatment mandates, results were highly heterogeneous (i.e., 42-100%). Literature on the effectiveness of court mandates for outpatient treatment is mixed and may depend in particular on the variability of court mandates by jurisdiction [24,96]. However, recent research [97] found that youth who attended treatment at court-direction (compared to voluntarily) demonstrated higher rates of SUD treatment completion; this variable should be further examined as a possible moderator in future research. One limitation of this analysis is that, due to the small number of control groups providing services as usual that reported data on treatment completion (k = 2), we were unable to consider experimental treatment group as a moderator of treatment completion. It is important for studies examining treatment completion in JIY to report this data for all treatment groups. Future systematic reviews should consider this factor in their analyses.
Implications and Recommendations for the Future
Overall, our findings suggest that existing high-quality studies of behavioral health treatment among JIY have generally achieved high rates of treatment completion. While included studies were not limited by presenting problem, all the studies included in this meta-analysis examine treatment of either substance use or problematic sexual behavior. Notably, the majority of treatments provided in these studies (Multisystemic Therapy, MET-CBT, Multidimensional Family Therapy) are not specific to inappropriate sexual behavior or substance use and are frequently provided to youth and families with a broad range of diagnoses. Thus, we expect that these results are generalizable to many JIY and families receiving behavioral healthcare. However, it is likely helpful for the field to consider what the effectiveness of other evidence-based practices (e.g., behavioral activation) may look like for JIY. Expanding access to evidence-based treatment and helping youth and their families remain engaged in services will both be critical challenges for researchers, policymakers, juvenile justice professionals, and community mental health administrators. Based on the results of our systematic review and meta-analysis, we make three recommendations for future research, implementation, and practice of behavioral health interventions for JIY: 1. Researchers should place a greater focus on measuring and reporting treatment completion. This includes thoroughly describing dropouts and withdrawals and reporting treatment completion or "dose" criteria, to ensure that estimates of treatment efficacy are not confounded by youth engaging in different amounts of treatment.
2. Given the potentially important role that treatment mandates play in the referral of JIY to behavioral health treatment, researchers should attempt to document the nature of youth's justice involvement and whether they have been mandated to participate in outpatient treatment (even if study participation is not mandatory).
3. Researchers should consider utilizing interventions that include family- or group-based services to improve rates of treatment completion.
"year": 2022,
"sha1": "233344af60fa258bb44af0d7f9d4dc35e8f3f8bf",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5593E7E76871E490B04BF28F22115C5C/S2059866122004186a.pdf/div-class-title-treatment-completion-among-justice-involved-youth-engaged-in-behavioral-health-treatment-studies-in-the-united-states-a-systematic-review-and-meta-analysis-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "1fa87b9d24649fc3457852b8f9490d8b5d1dff5f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57192885 | pes2o/s2orc | v3-fos-license | Integrating proteomic and phosphoproteomic data for pathway analysis in breast cancer
Background As protein is the basic unit of cell function and biological pathway, shotgun proteomics, the large-scale analysis of proteins, is contributing greatly to our understanding of disease mechanisms. Proteomics studies can detect changes in both protein expression and modification. With the release of large-scale cancer proteome studies, integrating the acquired proteomic and phosphoproteomic data into more comprehensive pathway analysis has become feasible, but remains challenging. Integrative pathway analysis at the proteome level provides a systematic insight into the signaling network adaptations in the development of cancer. Results Here we integrated proteomic and phosphoproteomic data to perform pathway prioritization in breast cancer. We manually collected and curated well-known breast cancer related pathways from the literature as target pathways (TPs), or positive controls, for method evaluation. Three different strategies, including Hypergeometric test based over-representation analysis, Kolmogorov-Smirnov (K-S) test based gene set analysis, and topology-based pathway analysis, were applied and evaluated for integrating protein expression and phosphorylation. In comparison, we also assessed the ranking performance of each strategy using information on protein expression or protein phosphorylation individually. Target pathways ranked closer to the top with data integration than with proteomic or phosphoproteomic data alone. In the comparison of pathway analysis strategies, the topology-based method outperformed the others. The subtypes of breast cancer, which consist of Luminal A, Luminal B, Basal and HER2-enriched, vary greatly in prognosis and require distinct treatment. Therefore, we applied topology-based pathway analysis integrating protein expression and phosphorylation profiles to the four subtypes of breast cancer. The results showed that TPs were enriched in all subtypes but their ranks differed significantly among the subtypes. For instance, the p53 pathway ranked at the top in the Basal-like breast cancer subtype, but not in the HER2-enriched type. The Focal adhesion pathway ranked closer to the top in HER2− subtypes than in HER2+ subtypes. These results were consistent with previous research. Conclusions The results demonstrate that the network topology-based method is more powerful for integrating proteomic and phosphoproteomic data in pathway analysis of proteomics studies. This integrative strategy can also be used to rank the specific pathways of disease subtypes. Electronic supplementary material The online version of this article (10.1186/s12918-018-0646-y) contains supplementary material, which is available to authorized users.
Background
Following the rapid accumulation of large-scale genome, transcriptome and other omics data, a number of studies and approaches integrating multiple omics data into pathway analysis have been reported [1-4]. Mass-spectrometry-based proteomics provides insights into cell-type protein expression patterns, post-translational modifications (PTMs) and protein-protein interactions [5-7]. As the most common PTM, phosphorylation by kinase activity may modify up to 30% of all human proteins, and kinases are known to regulate the majority of cellular signaling pathways. To date, how to integrate the information on protein expression, PTMs and protein interactions in pathway analysis remains a major challenge.
Signaling pathways describe a group of molecules in a cell that work together to control one or more cell functions, such as cell division or cell death. Pathway analysis gives an insight into the underlying mechanism in a given condition and is more explanatory than studies at the individual gene or protein level. Pathway analysis methods include gene set analysis and topology-based analysis. Gene set methods only consider the set of genes/proteins in the pathways, while topology-based methods use both the genes/proteins and the interactions among them. Gene set methods consist of Over-Representation Analysis (ORA), based on the Hypergeometric test or Fisher exact test [8,9], and Functional Class Scoring (FCS), based on a ranked gene list and the Kolmogorov-Smirnov (K-S) test [10]. ORA considers only the differentially-expressed (DE) genes, and representative ORA tools include DAVID [11], Onto-Expression [9], GenMAPP [12], GOMiner [13], GOstat [14] and so on. FCS methods, such as Gene Set Enrichment Analysis (GSEA) [15] and Gene Set Analysis (GSA) [16], consider the position of all genes in a ranked list produced by a selected statistical test for differential expression. Topology-based pathway analysis integrates both changes in expression level and the topology of the protein/gene interaction network; examples include Signaling Pathway Impact Analysis (SPIA) [17] and Bayesian Pathway Analysis (BPA) [18]. In SPIA, the score of a pathway is based on an impact analysis consisting of two types of evidence: one is the over-representation of DE genes in the given pathway, and the other is the abnormal perturbation of that pathway, which is measured by propagating expression changes across the pathway topology.
In this work, we integrated proteomic and phosphoproteomic data for pathway analysis in breast cancer and its subtypes. The results showed that integrating protein and phosphorylation differential expression with the network-topology-based method can identify the target pathways more accurately. Moreover, we also identified the top-ranked pathways specific to the four subtypes of breast cancer.
Proteomics data and preprocessing
The proteomic and phosphoproteomic data of breast cancer in this study included 77 tumor samples and 3 normal breast tissue samples, which were downloaded from the Clinical Proteomic Tumor Analysis Consortium (CPTAC). The process of quality control and normalization for both the proteomic and phosphoproteomic data is presented in Mertins et al.'s work [5]. As a result, 12,553 proteins (10,062 genes) and 33,239 phosphosites with their relative abundances quantified across tumors were used in this work. Missing values in the data matrix were filled with the minimum value.
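A minimal sketch of this imputation step, assuming the quantifications are held in a numeric matrix (proteins or phosphosites in rows, samples in columns); the object name and toy values are hypothetical.

```r
# Toy abundance matrix with missing quantifications
expr <- matrix(c(1.2, NA, 0.8, 2.1, 0.5, NA), nrow = 3)

# Replace missing values with the global minimum observed abundance
expr[is.na(expr)] <- min(expr, na.rm = TRUE)
```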
Integrating proteomic and phosphoproteomic data
Since ORA, GSEA and SPIA are representative of the three kinds of pathway analysis, namely over-representation analysis, functional class scoring and topology-based pathway analysis, we used these three strategies for pathway analysis. We used the R package 'HTSanalyzeR' [19] for the ORA and GSEA pathway analyses and the R package 'SPIA' [17] for the SPIA pathway analysis. P-values for the pathway analyses resulting from permutation (n = 2000) are provided in Additional file 1: Table S1.
Different methods of pathway analysis require different input data. For ORA, the input is the list of DE proteins/modifications or, as the integration, the intersection of the DE proteins and phosphoproteins (Student's t-test, BH-adjusted p < 0.05). The input for the GSEA method in our study was the list of all proteins/phosphoproteins with their fold changes between case and control; we summed and sorted the fold changes of the proteins overlapping between the protein expression and phosphorylation profiles as the integrated information for GSEA. As for SPIA, the input consisted of the topology of the pathways downloaded from the KEGG database and the DE proteins with their fold changes; the topology changes of the pathways were calculated by the 'SPIA' R package. The integrated input for SPIA was the intersection of the DE proteins and DE phosphoproteins with the sum of their fold changes.
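A sketch of how the integrated SPIA input could be assembled and run with the SPIA package; the data frames `prot` and `phos` (with columns gene, log2fc, adj.p) are hypothetical, and the Entrez-ID conversion required by SPIA is omitted for brevity.

```r
library(SPIA)

# Hypothetical differential-expression tables (gene, log2 fold change, BH-adjusted p)
prot <- data.frame(gene = c("TP53", "EGFR", "AKT1"),  log2fc = c(1.2, -0.8, 0.9), adj.p = c(0.01, 0.03, 0.20))
phos <- data.frame(gene = c("TP53", "EGFR", "MAPK1"), log2fc = c(0.7, -1.1, 1.5), adj.p = c(0.02, 0.04, 0.01))

# Intersection of DE proteins and DE phosphoproteins (adj.p < 0.05 in both)
common <- merge(subset(prot, adj.p < 0.05), subset(phos, adj.p < 0.05), by = "gene")
de <- setNames(common$log2fc.x + common$log2fc.y, common$gene)  # summed fold changes

# SPIA expects named fold changes for DE genes and the vector of all assayed genes
# (in practice both must be Entrez IDs matching the KEGG pathway data)
all_genes <- union(prot$gene, phos$gene)
res <- spia(de = de, all = all_genes, organism = "hsa", nB = 2000, plots = FALSE)
head(res[order(res$pG), ])  # pathways ranked by global perturbation p-value
```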
Performance evaluation
For the performance evaluation of pathway analysis, a widely used validation approach is to examine the ranks of target pathways that have been validated or curated in the literature for the disease: the closer to the top a target pathway ranks, the better. This approach was proposed in PADOG [20] and has been used in other comparisons of pathway analysis methods [21,22].
We manually selected twelve breast cancer related TPs from the literature. Most of the TPs are mentioned in the work on comprehensive molecular portraits of human breast tumors [23,24], and the others are also widely accepted. The TPs and their references are listed in Table 1.
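A small sketch of this rank-based evaluation, assuming a pathway-analysis result table with a pathway identifier and a p-value column (all names and values hypothetical):

```r
# Rank pathways by significance and normalize ranks to the range 1-100
res <- data.frame(pathway = c("hsa04110", "hsa04115", "hsa04151", "hsa05200"),
                  p.value = c(0.0010, 0.0400, 0.0005, 0.0200))
res$rank  <- rank(res$p.value)
res$nrank <- 100 * res$rank / nrow(res)

# Normalized ranks of the target pathways (lower is better)
tps <- c("hsa04110", "hsa04115")
res$nrank[match(tps, res$pathway)]
```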
Results
The workflow of integrating proteomic and phosphoproteomic data to perform pathway analysis is shown in Fig. 1. First, the integrated information from the proteomic and phosphoproteomic data was used as the input for pathway analysis. Then, we performed ORA, GSEA and SPIA pathway analysis on the integrated information. Finally, the methods were evaluated by the ranks of the TPs. In our study, we identified 2337 DE proteins and 3973 DE phosphoproteins, respectively; the intersection of the two lists contained 641 proteins.
Performance evaluation of pathway analysis with protein expression and/or phosphorylation profiles
To assess the integration strategies in pathway ranking with proteomics data, we compared the ranks of the TPs obtained by the three kinds of pathway analysis methods using the integrated information and using the protein expression and phosphorylation datasets separately. Fig. 2 shows box plots of the normalized ranks in the range of 1 to 100 (the lower, the better). It can be concluded from the figure that all the pathway analysis methods performed better using the integrated data than using a single type of information. In particular, the topology-based strategy implemented in SPIA performed best, as its median rank over all the TPs was lower than that of any other method.
Besides the TPs, we found nineteen pathways appearing in the overlap of the top 50 pathway ranking lists of the three kinds of pathway analysis methods with integrated information, such as the Fanconi anemia pathway and GABAergic synapse (as shown in Table 2). Although these pathways are not validated as thoroughly as the TPs with respect to breast cancer, there is still research indicating their correlation with breast cancer. For example, the Fanconi anemia pathway is closely linked to the breast and ovarian cancer susceptibility gene BRCA1 [25,26]. Abnormal GABA expression or GABAergic participation has been described in primary colon, gastric, ovarian, pancreatic, and breast cancers [27], and GABA and GABAergic participation are involved in the GABAergic synapse [28]. Moreover, it has been reported that morphine can stimulate angiogenesis by activating proangiogenic and survival-promoting signaling and promote breast tumor growth [29].
Pathway rankings in subtypes of breast cancer
The subtypes of breast cancer, which consist of Luminal A, Luminal B, Basal and HER2-enriched [23,30], vary in prognosis and require distinct treatment [24,31]. Genomic, transcriptomic, and proteomic analyses of breast cancer also reveal that the subtypes differ in pathway activity [32]. If the specific pathways and the underlying mechanism of each subtype are identified, more precise treatments can be applied. Based on the performance evaluation of the different pathway analyses, we analyzed and ranked the perturbed pathways for each subtype by integrating protein expression and modification profiles using the network-topology-based approach. The results show the ranking of the perturbed TPs in the four subtypes (Additional file 2: Figure S1). Some pathways, such as cell cycle and pathways in cancer, were among the top 10 rankings in all subtypes. The ranks of the other TPs differed among the subtypes, although they all play important roles in the four subtypes. We selected representative top-ranked pathways in each subtype and display them in Fig. 3.
As shown in Fig. 3a, p53 pathway ranked lowest in the Basal-like breast cancer type and ranked lower in Luminal A than in Luminal B. It is reported that TP53 are the most recurrently mutated genes in breast cancer, with frequency of 84% in Basal-like tumors [23] and p53 pathway remains largely intact in Luminal A cancers but is often inactivated in the more aggressive Luminal B cancers [33].
In accordance with previous research, expression levels of Focal adhesion kinase (FAK/PTK2) correlate strongly with poor tumor differentiation and are significantly associated with HER2 overexpression in breast cancer [34]. The highest level of FAK (Y861) and the lowest level of epidermal growth factor receptor 2 (HER2) activity can be observed in MDA-361 cells (an ER+/HER2+ cell line) [35]. As FAK plays an important role in the Focal adhesion pathway, we can infer that the activation of the Focal adhesion pathway was negatively correlated with the expression of HER2. The rank of the Focal adhesion pathway was lower in the HER2− subtypes (Luminal A and HER2) than in the HER2+ subtypes (Luminal B and Basal), as shown in Fig. 3b.
Table 1 target pathways and their references: hsa04010 MAPK signaling pathway [23]; hsa04150 mTOR signaling pathway [45,47,49]; hsa04310 Wnt signaling pathway [23]; hsa04115 p53 signaling pathway [23,33]; hsa01521 EGFR tyrosine kinase inhibitor resistance [23]; hsa04012 ErbB signaling pathway [23]; hsa04510 Focal adhesion [34,35]; hsa04350 TGF-beta signaling pathway [60]; hsa04110 Cell cycle [24]; hsa05200 Pathways in cancer [23].
The PI3K/AKT/mTOR pathway is a key intracellular signaling system that drives cellular growth and survival.
Hyperactivation of this pathway is implicated in the tumorigenesis of ER+ breast cancer [36-45]. Besides, the pathway is also important in triple-negative breast cancer [46] and HER2-overexpressing breast cancer [47]. Preclinical studies indicate that inhibitors of the pathway can act synergistically with trastuzumab in resistant cells [48].
Fig. 1 A workflow of integrating proteomic and phosphoproteomic data in pathway analysis. Firstly, the fold changes of the two protein lists from the proteomic and phosphoproteomic data were summed up, and the intersection of the DE proteins and the DE phosphoproteins was recorded. Secondly, we performed ORA, GSEA and SPIA pathway analysis using the integrated information and obtained the ranks of the target pathways. Finally, the methods were evaluated by the ranks of the target pathways.
Fig. 2 Box plot of the ranks for the target pathways in breast cancer. The ranks were normalized in the range of 1 to 100; the lower the rank, the better the performance of the method. The orchid, blue and pink represent the methods based on the integrated information and on the information from the proteomic and phosphoproteomic data, respectively.
Many studies have established that the mTOR pathway interacts tightly with the PI3K-AKT and MAPK signaling pathways. Inhibition of mTORC1, an important part of the mTOR pathway, leads to MAPK pathway activation through a PI3K-dependent feedback in human cancer [49]. This can be verified by the ranks of these pathways in the four breast cancer subtypes: the low rank of the mTOR pathway corresponded to the high rank of the PI3K-Akt signaling pathway (Fig. 3c and d). Luminal-type cells might use the MEK-ERK pathway to a lesser extent and seem to be more dependent on the PI3K pathway, shown by the preferential occurrence of PI3K mutations in this subtype [10]. As shown in Fig. 3d, the PI3K-Akt signaling pathway ranked higher in the Luminal subtype than in the other two subtypes.
We also took a look at the top 20 ranked pathways for each subtype of breast cancer. There were 7 common pathways among the four subtypes. Besides two TPs (cell cycle and pathways in cancer), the other common pathways, which consist of the Fanconi anemia pathway [50], progesterone-mediated oocyte maturation [51,52], axon guidance [53], basal cell carcinoma [54] and thyroid cancer [55,56], have also been reported to be related to breast cancer. As shown in Fig. 4, some pathways were specifically ranked in the top 20 for Basal, HER2, Luminal A and Luminal B, respectively. This result indicated that the subtypes share some common molecular mechanisms during carcinogenesis and development, but differences between them also exist. For example, as mentioned above, the p53 pathway is significantly perturbed in the Basal-like subtype, but it also plays a key role in the other three subtypes [23,57]. The Notch pathway is activated more in Luminal breast cancer than in the Basal and HER2 subtypes [58,59].
Discussion
Expression and modification describe the in vivo changes of proteins in the cancer proteome from different views. Pathway analysis based on information at a single level, such as protein expression or protein phosphorylation alone, often carries a high risk of both false positives and false negatives because of technological limitations. To the best of our knowledge, the integration of proteomic and phosphoproteomic data in pathway analysis in cancer has not previously been evaluated and reported. In this study, pathway analysis was performed and compared using the integration of proteomic and phosphoproteomic data from CPTAC's breast cancer dataset. Moreover, we tried to find the different patterns in pathway ranking among the subtypes.
Our results suggested that both differential protein expression and differential phosphorylation were useful for identifying the important pathways in cancer or cancer subtypes. Furthermore, the integration of protein expression and modification profiles could provide more comprehensive information and rank TPs more accurately. Although the ranking lists of the three kinds of pathway analysis were different, some consistent results were observed, since the expression changes of proteins and phosphoproteins are used in all of the strategies. Because GSEA requires the fold changes of all proteins, it has more complete information reflecting the expression profile. SPIA additionally needs the topology information of the pathways, which can capture the detailed influence between the nodes of a pathway.
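As a concrete illustration of the integration step, the following Python sketch combines proteomic and phosphoproteomic fold changes by summation and scores a single pathway by over-representation with a hypergeometric test. It is not the code used in this study; the gene names, cutoff and background size are purely illustrative, and GSEA and SPIA would additionally require the full ranked profile and the pathway topology, respectively.

```python
from scipy.stats import hypergeom

# Illustrative log2 fold changes per gene from each data type (hypothetical values).
proteomic_fc = {"FAK": 1.2, "HER2": -0.8, "TP53": 0.4, "AKT1": 0.9}
phospho_fc = {"FAK": 0.7, "HER2": -0.3, "AKT1": 1.1, "MTOR": 0.6}

# Step 1: integrate by summing the fold changes of the two lists.
genes = set(proteomic_fc) | set(phospho_fc)
integrated_fc = {g: proteomic_fc.get(g, 0.0) + phospho_fc.get(g, 0.0) for g in genes}

# Step 2: call a gene differentially expressed if the integrated |log2 FC| exceeds a cutoff.
de_genes = {g for g, fc in integrated_fc.items() if abs(fc) >= 1.0}

# Step 3: over-representation analysis (ORA) for one pathway via the hypergeometric test.
pathway_genes = {"FAK", "AKT1", "MTOR", "PTEN"}   # hypothetical pathway membership
background = 10000                                # assumed number of measured proteins
overlap = len(de_genes & pathway_genes)
p_value = hypergeom.sf(overlap - 1, background, len(pathway_genes), len(de_genes))
print(f"overlap = {overlap}, ORA p-value = {p_value:.3g}")
# Repeating this for every KEGG pathway and sorting by p-value yields a pathway ranking.
```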
We also tested the performance of the union of the DE proteins and DE phosphoproteins in pathway ranking, but poor accuracy was obtained, possibly because of too much noise in the individual omics data. To control the risk of false positives, the intersection of the DE proteins and DE phosphoproteins was used as input in this study, which might be too conservative. Because only one dataset was tested here, some new pathways in the top of the ranking list need to be validated on more independent cancer proteomics datasets in the future.
Conclusions
Integrative pathway analysis that combines information from protein expression, protein modification and the topology of the protein interaction network is a more efficient way to identify key pathways in breast cancer. Pathway ranking in a certain subgroup of patients can provide insight into subtype-specific mechanisms and be helpful for precision medicine for each subtype.
Additional files
Additional file 1: Table S1. The results of different pathway ranking methods. (XLSX 89 kb) Additional file 2: Figure S1.
Acknowledgements
We thank the High Performance Computing Center (HPCC) at Shanghai Jiao Tong University for the computation.
Funding
The work and the publication of this article were sponsored by grants from the National Natural Science Foundation of China (31271416).
Availability of data and materials
The datasets used in our study were downloaded from the publicly available databases mentioned in the text. The source code is available from the corresponding author on reasonable request.
About this supplement
This article has been published as part of BMC Systems Biology Volume 12 Supplement 8, 2018: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.c om/articles/supplements/volume-12-supplement-8.
Authors' contributions
JR worked on the method, experiment and analyses. BW designed the Figures. JL contributed to the experiment and writing of the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable. | 2018-12-22T14:40:50.771Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "13d8950b8a8f81d38c6e02efeb18224aa6b54f1a",
"oa_license": "CCBY",
"oa_url": "https://bmcsystbiol.biomedcentral.com/track/pdf/10.1186/s12918-018-0646-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13d8950b8a8f81d38c6e02efeb18224aa6b54f1a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
235266058 | pes2o/s2orc | v3-fos-license | Generalized AdaGrad (G-AdaGrad) and Adam: A State-Space Perspective
Accelerated gradient-based methods are being extensively used for solving non-convex machine learning problems, especially when the data points are abundant or the available data is distributed across several agents. Two of the prominent accelerated gradient algorithms are AdaGrad and Adam. AdaGrad is the simplest accelerated gradient method, which is particularly effective for sparse data. Adam has been shown to perform favorably in deep learning problems compared to other methods. In this paper, we propose a new fast optimizer, Generalized AdaGrad (G-AdaGrad), for accelerating the solution of potentially non-convex machine learning problems. Specifically, we adopt a state-space perspective for analyzing the convergence of gradient acceleration algorithms, namely G-AdaGrad and Adam, in machine learning. Our proposed state-space models are governed by ordinary differential equations. We present simple convergence proofs of these two algorithms in the deterministic settings with minimal assumptions. Our analysis also provides intuition behind improving upon AdaGrad's convergence rate. We provide empirical results on MNIST dataset to reinforce our claims on the convergence and performance of G-AdaGrad and Adam.
Introduction
In this paper, we consider the minimization problem

min_{x ∈ ℝ^d} f(x),   (1)

where the objective function f : ℝ^d → ℝ_+ is smooth and possibly non-convex. In machine learning, f is typically approximated by the average of a large number of loss functions, each loss function being associated with individual training examples or mini-batches, and x typically represents the unknown weights of a model. The goal is to find a critical point of the aggregate loss function f. Several first-order iterative methods exist for solving the optimization problem (1). First-order methods are preferred when the available data size is large [1]. Besides expensive computations, another drawback of second-order methods, such as Newton's method [2], is that for linear models these methods cannot be implemented over a distributed network, as the agents do not share their data points with the server [3].
The classical gradient-descent method is the basic prototype of first-order optimization methods [4]. Its stochastic version, known as stochastic gradient-descent (SGD), has become a popular method for solving machine learning problems, especially large-scale problems [1]. Several accelerated variants of SGD have been proposed over the past decade [5][6][7][8][9]. Two notable such methods are the adaptive gradient-descent method (AdaGrad) [5] and the adaptive moment estimation method (Adam) [6]. Both AdaGrad and Adam maintain an estimate of a local minimum in (1) and update it iteratively using the gradient of the objective function multiplied by an adaptive learning rate.
AdaGrad is a prominent optimization method that achieves significant performance gains compared to SGD. As the name suggests, AdaGrad adaptively updates the learning rate based on the information of all the previous gradients. Specifically [5], for each iteration k ∈ {0, 1, . . .}, let x_k = [x_{k,1}, . . . , x_{k,d}]^T denote the estimate of a local minimum in (1) maintained by AdaGrad. In addition, AdaGrad maintains a set of real-valued scalar parameters denoted by {b_{k,i} : i = 1, . . . , d}. The algorithm is initialized with an arbitrarily chosen initial estimate x_0 ∈ ℝ^d and {b_{0,i} > 0 : i = 1, . . . , d}. Let the gradient of the objective function evaluated at x ∈ ℝ^d be denoted by ∇f(x) ∈ ℝ^d, and its i-th element by ∇_i f(x). At each iteration k, the parameters and the estimate are updated, for each i ∈ {1, . . . , d}, as

b_{k+1,i} = b_{k,i} + (∇_i f(x_k))²,   x_{k+1,i} = x_{k,i} − η ∇_i f(x_k) / √(b_{k+1,i}).

The real-valued scalar parameter η > 0 is called the step-size. Thus, the learning rate in AdaGrad is adaptively weighted along each dimension by the sum of squares of the past gradients. AdaGrad has been shown to be particularly effective for sparse gradients [10], but has under-performed for some applications [11].
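The following Python sketch illustrates the per-coordinate AdaGrad update described above. It is only a minimal illustration; the quadratic test function, the step-size and the initialization of b are chosen here for demonstration and are not taken from the paper.

```python
import numpy as np

def adagrad_step(x, b, grad, eta=0.1):
    """One deterministic AdaGrad step: accumulate the squared gradient per
    coordinate and scale the update by 1/sqrt of the accumulator."""
    b = b + grad**2                    # b_{k+1,i} = b_{k,i} + (grad_i)^2
    x = x - eta * grad / np.sqrt(b)    # x_{k+1,i} = x_{k,i} - eta * grad_i / sqrt(b_{k+1,i})
    return x, b

# Illustrative run on f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x, b = np.array([1.0, -2.0]), np.full(2, 1e-8)   # b initialized to small positive values
for _ in range(2000):
    x, b = adagrad_step(x, b, grad=x)
print(x)   # the iterates approach the critical point [0, 0]
```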
The Adam algorithm has been observed to compare favorably with other optimization methods for a wide range of optimization problems, including deep learning [12][13][14]. Like AdaGrad, Adam also updates the learning rate based on the information of past gradients. However, unlike AdaGrad, Adam effectively updates the learning rate based on only a moving window of the past gradients. Specifically [6], Adam maintains two sets of d-dimensional vectors, respectively denoted by µ_k = [µ_{k,1}, . . . , µ_{k,d}]^T and v_k = [v_{k,1}, . . . , v_{k,d}]^T. µ_k and v_k are respectively known as the biased first moment estimate and the biased second raw moment estimate. These vectors are initialized with µ_0 = 0_d and {v_{0,i} > 0 : i = 1, . . . , d}. Three parameters η > 0, β_1 ∈ [0, 1) and β_2 ∈ [0, 1) are chosen before the iterations begin. At each iteration k ∈ {0, 1, . . .}, the vectors µ_k and v_k are updated, for each i ∈ {1, . . . , d}, according to

µ_{k+1,i} = β_1 µ_{k,i} + (1 − β_1) ∇_i f(x_k),   v_{k+1,i} = β_2 v_{k,i} + (1 − β_2) (∇_i f(x_k))²,

and the estimate is updated as

x_{k+1,i} = x_{k,i} − η (√(1 − β_2^{k+1}) / (1 − β_1^{k+1})) µ_{k+1,i} / √(v_{k+1,i}),

where the factor √(1 − β_2^{k+1}) / (1 − β_1^{k+1}) is responsible for the initial bias correction, as proposed in the original Adam algorithm [6]. Thus, the learning rate in Adam is weighted by the exponentially moving averages of the past gradients.
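A minimal Python sketch of the standard Adam update with bias correction, as given in the original paper [6], is shown below. The small constant eps guards the division as in the original algorithm and is an implementation detail rather than part of the analysis in this paper; the test function and hyper-parameter values are illustrative only.

```python
import numpy as np

def adam_step(x, m, v, grad, k, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step with initial bias correction; k is the 1-based iteration count."""
    m = beta1 * m + (1 - beta1) * grad        # biased first moment estimate
    v = beta2 * v + (1 - beta2) * grad**2     # biased second raw moment estimate
    m_hat = m / (1 - beta1**k)                # bias-corrected first moment
    v_hat = v / (1 - beta2**k)                # bias-corrected second raw moment
    x = x - eta * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

# Illustrative run on f(x) = 0.5 * ||x||^2 (gradient = x).
x, m, v = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
for k in range(1, 2001):
    x, m, v = adam_step(x, m, v, grad=x, k=k, eta=0.01)
print(x)   # the iterates approach the critical point [0, 0]
```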
Several algorithms have been proposed to improve upon the convergence of the Adam method, such as AdaShift [15], Nadam [9] and AdaMax [6]. Although these algorithms have demonstrated good performance in practice, they do not have theoretical convergence guarantees. While AMSGrad has been shown to perform better than Adam on the CIFAR-10 dataset [8], other experiments suggest that AMSGrad is similar to or worse than Adam. The recently proposed AdaBelief [16] is another variation of Adam with a theoretical convergence guarantee. Note that the RMSprop method is a special case of Adam with the parameter β_1 = 0 [17].
We aim to present simplified proofs of convergence of the AdaGrad and Adam algorithms to a critical point for non-convex objective functions in the deterministic setting. The first convergence guarantee of a generalized AdaGrad method for non-convex functions was proved recently in [18], where an additional parameter ε ≥ 0 generalizes the AdaGrad method. However, the parameter ε in [18] has been assumed to be strictly positive for the convergence guarantee, which excludes the case of the original AdaGrad method [5] where ε = 0. We first propose a more general AdaGrad model, coined G-AdaGrad, that subsumes the work in [18]. Our model and the corresponding convergence proof allow this parameter to be negative, as well as covering the case of the original AdaGrad. Besides, our proof provides intuition behind how this generalization of AdaGrad impacts its convergence. The analysis for AdaGrad in [19] assumes the gradients to be uniformly bounded; we do not make such an assumption. Other works also analyze the convergence of AdaGrad-like algorithms for non-convex objective functions, notable among them being WNGrad [20] and AdaGrad-Norm [21]. Note that all of the aforementioned analyses of AdaGrad and AdaGrad-like algorithms are in discrete-time. We analyze AdaGrad in the continuous-time domain.
Previous works that demonstrate convergence of the Adam algorithm for non-convex objective functions include [17,[22][23][24][25][26]. In [17], the proof for Adam is provided when the algorithm parameter β 1 = 0. We consider the general parameter settings where β 1 ≥ 0. An Adam-like algorithm has been proposed and analyzed in [19]. The proofs in [17,[22][23][24][25] do not consider the initial bias correction steps in the original Adam [6]. Our analysis of Adam considers the bias correction steps. The analyses in [17,19,22,23,25] assume uniformly bounded gradients. We do not make such an assumption. The aforementioned analyses of Adam are in discrete-time. A continuous-time version of Adam has been proposed in [26], which includes the bias correction steps. However, compared to the convergence proof in [26], our proof for Adam is simpler. In addition, [26] assumes that the parameters β 1 and β 2 in the Adam algorithm are functions of the step-size η such that β 1 and β 2 tends to one as the step-size η → 0. We do not make such an assumption in our analysis.
Summary of Our Contributions
• In this paper, we first propose a more general AdaGrad algorithm, which we refer to as Generalized AdaGrad (G-AdaGrad). The proposed optimizer improves upon the convergence rate of the original AdaGrad algorithm. The original AdaGrad, discussed in Section 1, is a special case of the proposed G-AdaGrad algorithm.
• We propose two state-space models, each for the G-AdaGrad algorithm and the original Adam algorithm, in continuous time-domain. The proposed state-space models are an autonomous and non-autonomous system of ordinary differential equations, respectively, for G-AdaGrad and Adam. The non-autonomy of the model for Adam is due to initial bias correction steps.
• Using a simple analysis of the proposed state-space models, we prove the convergence of the G-AdaGrad and the Adam algorithm to a critical point of the possibly nonconvex optimization problem (1) in the deterministic settings. Our analysis requires minimal assumptions about the optimization problem (1).
Compared to the existing works that analyze the convergence of the AdaGrad or the Adam algorithm for non-convex objective functions, the major contributions of our presented analysis are as follows.
Includes the original AdaGrad algorithm and a more generalized version, with intuition behind the generalization, compared to [18].
7. Simple proof of convergence, compared to the continuous-time version in [26].
Continuous-Time Generalized AdaGrad
In this section, we propose a set of autonomous ordinary differential equations. Using first-order Euler discretization, we show that the proposed set of differential equations coincides with a general version of the AdaGrad algorithm, which we refer to as the Generalized AdaGrad (G-AdaGrad). The proposed differential equations include the original AdaGrad as a special case.
We make the following assumptions in order to present our algorithms and their convergence results.
Assumption 1. Assume that the minimum of the function f exists and is finite. In other words, min_{x ∈ ℝ^d} f(x) > −∞.

Assumption 2. Assume that f is twice differentiable over its domain ℝ^d and the entries in the Hessian matrix ∇²f(x) are bounded above for all x ∈ ℝ^d.
The above assumptions about the objective function f are mild and standard in the literature of gradient-based optimization. Assumption 2 is equivalent to the gradient ∇f being Lipschitz continuous, which is often referred as the function f being smooth [17,18,22].
Description of Generalized AdaGrad
We propose the Generalized AdaGrad (G-AdaGrad) method, which is parameterized by a positive real scalar α. For each dimension i ∈ {1, . . . , d} and t ≥ 0, consider the following pair of differential equations:

ẋ_{ci}(t) = (∇_i f(x(t)))²,   (2)
ẋ_i(t) = − ∇_i f(x(t)) / (x_{ci}(t))^α,   (3)

with initial conditions x_c(0) ∈ ℝ^d and x(0) ∈ ℝ^d. We assume that the initial conditions satisfy {x_{ci}(0) > 0 : i = 1, . . . , d}. The variable x_{ci}, ∀i, can be abstracted as a dynamic controller state.
The above pair of differential equations (2)-(3) can be seen as a continuous-time variation of the following algorithm, when (2)-(3) are discretized with a fixed sampling time δ > 0.
For each i ∈ {1, . . . , d} and k ∈ {0, 1, . . .},

x_{ci}((k + 1)δ) = x_{ci}(kδ) + δ (∇_i f(x(kδ)))²,   (4)
x_i((k + 1)δ) = x_i(kδ) − δ ∇_i f(x(kδ)) / (x_{ci}((k + 1)δ))^α.   (5)

This fact can be seen from the following argument. From the first-order Taylor series expansions of x_{ci}((k + 1)δ) and x_i((k + 1)δ) about t = kδ, (4)-(5) can be rewritten in terms of the derivatives ẋ_{ci}(kδ) and ẋ_i(kδ). Defining t = kδ, in the limit δ → 0, the above equations coincide with (2)-(3). Note that (4)-(5) represent a generalization of the AdaGrad algorithm discussed in Section 1, with step-size η = δ and an additional parameter α. The controller state x_c(t) in continuous time corresponds to the variable b_k in the discrete-time AdaGrad algorithm. When we set α = 0.5, (4)-(5) correspond to the original AdaGrad algorithm. Introducing the parameter α can further improve its convergence. This is discussed in the following subsection.
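A minimal Python sketch of an Euler-discretized update in the spirit of (4)-(5) is given below. It is an illustration rather than the authors' implementation; the sampling time, the initialization and the test function are chosen arbitrarily, and alpha = 0.5 corresponds to the original AdaGrad.

```python
import numpy as np

def g_adagrad_step(x, xc, grad, delta=0.01, alpha=0.5):
    """One Euler-discretized G-AdaGrad step; alpha = 0.5 corresponds to AdaGrad."""
    xc = xc + delta * grad**2            # controller state: accumulated squared gradients
    x = x - delta * grad / xc**alpha     # the exponent alpha generalizes the square root
    return x, xc

# Compare two values of alpha on f(x) = 0.5 * ||x||^2 (gradient = x).
for alpha in (0.5, 0.2):
    x, xc = np.array([1.0, -2.0]), np.full(2, 0.01)   # xc(0) > 0 as required
    for _ in range(2000):
        x, xc = g_adagrad_step(x, xc, grad=x, alpha=alpha)
    print(alpha, np.linalg.norm(x))
```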
Convergence of Generalized AdaGrad
Define the set of critical points of the objective function f as

X* = {x ∈ ℝ^d : ∇f(x) = 0_d}.   (6)

Theorem 1 below presents a key result on the convergence of the G-AdaGrad algorithm (2)-(3).

Theorem 1. Consider the pair of differential equations (2)-(3) with initial conditions x(0) ∈ ℝ^d and {x_{ci}(0) > 0 : i = 1, . . . , d}, and let the parameter α ∈ (0, 1). If Assumptions 1-2 hold, then

lim_{t→∞} ∇f(x(t)) = 0_d.   (7)

Moreover, for all t ≥ 0, we have

f(x(t)) = f(x(0)) − Σ_{i=1}^{d} [(x_{ci}(0) + ∫_0^t (∇_i f(x(s)))² ds)^{1−α} − (x_{ci}(0))^{1−α}] / (1 − α).   (8)

Proof. The time-derivative of f along the trajectories x(t) of (3) is given by

ḟ(x(t)) = Σ_{i=1}^{d} ∇_i f(x(t)) ẋ_i(t) = − Σ_{i=1}^{d} (∇_i f(x(t)))² / (x_{ci}(t))^α.   (9)

Further utilizing (2) we get

ḟ(x(t)) = − Σ_{i=1}^{d} ẋ_{ci}(t) / (x_{ci}(t))^α.   (10)

Integrating both sides above with respect to (w.r.t.) t from 0 to t, we get

f(x(t)) − f(x(0)) = − Σ_{i=1}^{d} ∫_0^t ẋ_{ci}(s) / (x_{ci}(s))^α ds.

Since α < 1, upon evaluating the integral we have

f(x(t)) = f(x(0)) − Σ_{i=1}^{d} [(x_{ci}(t))^{1−α} − (x_{ci}(0))^{1−α}] / (1 − α).   (11)

Integrating both sides of (2) w.r.t. t from 0 to t, we have

x_{ci}(t) = x_{ci}(0) + ∫_0^t (∇_i f(x(s)))² ds.

Using the above equation in (11) proves (8). Since x_{ci}(0) > 0, we have x_{ci}(t) > 0. The above equation implies that x_{ci}(t) is non-decreasing w.r.t. t, which combined with (8) and α ∈ (0, 1) implies that f(x(t)) is non-increasing w.r.t. t. From Assumption 1, f is bounded below. Thus, lim_{t→∞} f(x(t)) is finite. From (11) it then follows that lim_{t→∞} x_c(t) is finite. Thus, the above equation implies that ∇f(x(t)) is square-integrable w.r.t. t. Hence, ∇f(x(t)) is bounded above.
Since ∇f(x(t)) is bounded and x_{ci}(t) > 0, from (3) we have that ẋ(t) is bounded above. Now, the time-derivative of ‖∇f(x(t))‖² along the trajectories x(t) is given by

d/dt ‖∇f(x(t))‖² = 2 (∇f(x(t)))^T ∇²f(x(t)) ẋ(t).

We have shown that ∇f(x(t)) and ẋ(t) are bounded above. From Assumption 2, all the entries in ∇²f(x(t)) are bounded above. Then, from the above equation, d/dt ‖∇f(x(t))‖² is bounded above. Thus, ‖∇f(x(t))‖² is uniformly continuous.
Theorem 1 implies that the G-AdaGrad algorithm, proposed in (2)-(3), converges to a critical point in X * of the non-convex optimization problem (1). Furthermore, (8) implies that the convergence of G-AdaGrad is affected by the algorithm parameter α. As we will show through simulations in Section 4, α = 0.5, which corresponds to the original AdaGrad method [5], is not the optimal value of α.
Another significance of the above proof is that it explains why the exponent α of x_c(t) (equivalently, of b_k in discrete time) in the update equation of the estimate x(t) is limited to α < 1. If α > 1, (8) implies that f(x(t)) will be increasing in t. If α = 1, evaluating the integral in (10) we have

f(x(t)) = f(x(0)) − Σ_{i=1}^{d} log(x_{ci}(t) / x_{ci}(0)).

Thus, f decreases at a slower rate for α = 1 compared to α < 1, because of logarithmic decrements in the case of α = 1 compared to exponential decrements. Note that the parameter ε in [18] plays the same role as α. However, the convergence results in [18] hold only for ε ∈ (0, 0.5], which corresponds to α ∈ (0.5, 1]. Thus, our analysis is more general compared to [18]. In addition, our analysis in Theorem 1 explains the significance of the parameter α, as discussed in the previous paragraph.
Continuous-Time Adam
In this section, we propose a set of non-autonomous ordinary differential equations. Using first-order Euler discretization, the proposed set of differential equations coincides with the original Adam algorithm.
In the next subsection, we present the convergence of our proposed state-space model in (13)-(15).
Convergence of Adam
Recall the definition of the set of critical points X* from (6). Theorem 2 below presents the corresponding convergence result for the continuous-time Adam model: if Assumptions 1-2 hold, then lim_{t→∞} ∇f(x(t)) = 0_d.
Proof. The time-derivative of f along the trajectories x(t) of (15) is given by the chain rule, as in the proof of Theorem 1. Multiplying with α(t) on both sides above, integrating both sides w.r.t. t from 1 to t while substituting from (13), then integrating by parts the first term on the R.H.S., substituting from (14), and using that µ(1) = 0_d, we obtain (20); substituting above in (20) then gives (21). We define γ_1 = 1 − λ_1 and γ_2 = 1 − λ_2, and differentiate both sides of (12) w.r.t. t. From the condition λ_1 > λ_2 in (19), we have 1 > γ_2 > γ_1 > 0, so the corresponding terms are, respectively, increasing and decreasing functions of t, and their ratio is an increasing function of t. Then, there exists T < ∞ such that (22) holds for all t ≥ T. Integrating by parts, we rewrite the L.H.S. in (21).
Upon substituting from above in (21), for t ≥ T, we obtain (23). Due to (19) and (22), the R.H.S. in (23) is decreasing in t ≥ T. Then, the L.H.S. in (23) is also decreasing in t ≥ T. From (14), and since the L.H.S. in (23) is decreasing in t ≥ T, we have the L.H.S. in (23) bounded above by M_T for all t ≥ T. From (23) we then have that µ_i(t)² v_i(t)^{−1.5} ∇_i f(x(t))² and µ_i(t)² v_i(t)^{−0.5} are integrable w.r.t. t and bounded above. This implies that ∇_i f(x(t)) is bounded unless µ_i(t) = 0 or v_i(t) = ∞. From (15), either of the conditions µ_i(t) = 0 and v_i(t) = ∞ implies that ẋ_i(t) = 0 and, hence, (d/dt) ∇_i f(x(t)) = 0. Due to the continuity of ∇_i f and ‖∇f(x(1))‖ < ∞, we then have that ∇_i f(x(t)) is bounded above for all t. Integrating both sides of (13) and (14) w.r.t. t from 1 to t, for each i ∈ {1, . . . , d}, and since ∇f(x(t)) is bounded above and λ_1, λ_2 > 0, the resulting equations imply that µ_i(t) and v_i(t) are bounded above. Moreover, v_i(t) > 0 as v_i(1) > 0. From (15) we then have that ẋ(t) is bounded above. From (13) and (15), µ_i(t) = 0 implies that µ̇_i(t) = λ_1 ∇_i f(x(t)) and ẋ_i(t) = 0. Thus, µ_i(t) can be zero only at isolated points t. Otherwise, for some h > 0 there exists an interval (t − h, t + h) such that µ_i(s) = 0 for all s ∈ (t − h, t + h). In that case, µ̇_i(s) = 0 for all s ∈ (t − h, t + h). Since µ̇_i(s) = λ_1 ∇_i f(x(s)) for all s ∈ (t − h, t + h), we then have ∇_i f(x(s)) = 0 for all s ∈ (t − h, t + h), which proves the theorem.
We have shown above that µ_i(t) = 0 only at isolated points and that v_i(t) is bounded above, which makes µ_i(t)² v_i(t)^{−1.5} ∇_i f(x(t))² integrable. Now, we apply the Cauchy-Schwarz inequality on these functions; since the resulting product is bounded and integrable, it is also square-integrable. Thus, ‖∇f(x(t))‖² is integrable.

So we have shown that ‖∇f(x(t))‖² is integrable and ẋ(t) is bounded above. Following the same argument as in the last two paragraphs of the proof of Theorem 1, under Assumption 2, we conclude that lim_{t→∞} ∇f(x(t)) = 0_d.
Experimental Results
In this section, we present our experimental results validating the convergence guarantees from Section 2.2 and Section 3.2. We consider the problem of recognizing the handwritten digits one and five. Although it is a binary classification problem between the digits one and five, we solve it as a regression problem first. The obtained linear regression model can be a good initial decision boundary (ref. Fig. 2) for classification algorithms. We conduct experiments for minimizing the objective function f(x) = (1/2)‖Ax − B‖². The training data points (A, B) are obtained from the "MNIST" [28] dataset as follows. We select 5000 arbitrary training instances labeled as either the digit one or the digit five. For each instance, we calculate two quantities, namely the average intensity of an image and the average symmetry of an image [29]. Let the column vectors a_1 and a_2 respectively denote the average intensity and the average symmetry of those 5000 instances. We perform a quadratic feature transform of the data (a_1, a_2). Then, our input matrix before pre-processing is Ã = [a_1  a_2  a_1.²  a_1.*a_2  a_2.²]. Here, (.*) represents element-wise multiplication and (.²) represents element-wise squares. This raw input matrix Ã is then pre-processed as follows. Each column of Ã is shifted by the mean value of the corresponding column and then divided by the standard deviation of that column. Finally, a 5000-dimensional column vector of unity is appended to this pre-processed matrix. This is our final input matrix A of dimension (5000 × 6). Next, we consider the logistic regression model and conduct experiments for minimizing the cross-entropy error on the raw training data.
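The following Python sketch illustrates the described feature construction. It is not the authors' code: data access via scikit-learn's fetch_openml is an assumption, and the exact definition of "average symmetry" used here (negative mean absolute difference between an image and its left-right flip, in the spirit of [29]) may differ in detail from the one used in the paper.

```python
import numpy as np
from sklearn.datasets import fetch_openml

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
mask = np.isin(y, ["1", "5"])                        # keep digits one and five
imgs = X[mask][:5000].reshape(-1, 28, 28) / 255.0    # first 5000 such instances

a1 = imgs.mean(axis=(1, 2))                                  # average intensity
a2 = -np.abs(imgs - imgs[:, :, ::-1]).mean(axis=(1, 2))      # average symmetry (assumed definition)

# Quadratic feature transform: [a1, a2, a1.^2, a1.*a2, a2.^2]
A_raw = np.column_stack([a1, a2, a1**2, a1 * a2, a2**2])
A = (A_raw - A_raw.mean(axis=0)) / A_raw.std(axis=0)         # per-column standardization
A = np.column_stack([A, np.ones(len(A))])                    # append the intercept column
print(A.shape)                                               # (5000, 6)
```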
We train both of these models with the G-AdaGrad algorithm (2)-(3) and the Adam algorithm (13)- (15). We initialize the algorithms according to the conditions in Theorem 1 and Theorem 2. Specifically, we initialize the G-AdaGrad algorithm with x c (0) = x(0) = [0.01, . . . , 0.01] T , and the Adam algorithm with µ(1) = [0, . . . , 0] T , v(1) = x(1) = [0.01, . . . , 0.01] T . Moreover, we set λ 2 = 0.0067 for Adam. G-AdaGrad converges for different values of α (ref. Fig. 1a and Fig. 3a). We observe that the convergence is faster when α is smaller. Thus, the coefficient α = 0.5, which corresponds to the original AdaGrad method, is not the optimal choice. In addition, α = 1 leads to poor convergence, as we have theoretically explained in Section 2.2. Fig. 1b and Fig. 3b show the effect of the relative values of λ 1 and λ 2 on the convergence of Adam algorithm. The standard choices for β 1 and β 2 in discrete-time Adam are respectively 0.9 and 0.999 [6]. With a sampling time δ = 0.15, from the relation between discrete-time and continuous-time Adam we have λ 1 = 0.67 and λ 2 = 0.0067 (ref. Section 3.1). Thus, our result in Fig. 1b and Fig. 3b agrees with the standard choices of these two parameters. A smaller or larger λ 1 λ 2 leads to oscillations or slows down the convergence. Note that, the condition (19) in Theorem 2 is satisfied with these standard parameter values. | 2021-06-02T01:16:01.729Z | 2021-05-31T00:00:00.000 | {
"year": 2021,
"sha1": "7ae0ab0913b28d40faef7ef322a2aac5bab1e8c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7ae0ab0913b28d40faef7ef322a2aac5bab1e8c7",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
220922728 | pes2o/s2orc | v3-fos-license | Skeletal Muscle Mass by Bioelectrical Impedance Analysis and Calf Circumference for Sarcopenia Diagnosis
Abstract Skeletal muscle mass (SMM) plays an important role in health and physical performance. Its estimation is critical for the early detection of sarcopenia, a disease with high prevalence and high health costs. While multiple methods exist for estimating this body component, anthropometry and bioelectrical impedance analysis (BIA) are the most widely available in low- to middle-income countries. This study aimed to determine the correlation between muscle mass, estimated by anthropometry through measurement of calf circumference (CC) and skeletal mass index (SMI) by BIA. This was a cross-sectional and observational study that included 213 functional adults over 65 years of age living in the community. Measurements of height, weight, CC, and SMM estimated by BIA were made after the informed consent was signed. 124 women mean age 69.6 ± 3.1 years and 86 men mean age 69.5 ± 2.9 years had the complete data and were included in the analysis. A significant positive moderate correlation among CC and SMI measured by BIA was found (Pearson r= 0.57 and 0.60 for women and men respectively (p=0.0001)). A moderate significant correlation was found between the estimation of SMM by CC and by BIA. This suggests that CC could be used as a marker of sarcopenia for older adults in settings in lower-middle-income countries where no other methods of diagnosing muscle mass are available. Although the CC is not the unique parameter to the diagnosis of sarcopenia, it could be a useful procedure in the clinic to identify patients at risk of sarcopenia.
Introduction
In 2010, a European Consensus defined sarcopenia ("sarx" for muscle, "penia" for loss) as low SMM plus low muscle function either as reduced strength or performance [1]. Recently, this geriatric syndrome has been recognized as a disease entity with the awarding of an ICD-10-CM (M62.84) code in September 2016. This designation enables the syndrome to be considered as a primary or secondary condition [2].
Sarcopenia has become a factor common to many of the chronic diseases of the elderly (heart failure, diabetes, type 2, obesity, COPD, CVD, and dementia, among others) [3][4][5]. The overall estimates of prevalence are in the range of 10-58% depending on the methods used and the proposed cutoff points. This prevalence is considered very high and highlights the need for early diagnosis. Sarcopenia is one of the most relevant public health problems in the elderly and is associated with a high rate of adverse outcomes and high healthcare costs [6]. In the United States, the mere costs of hospitalization, nursing home income, and home health care expenses amounted to USD 18.5 billion in 2000, representing approximately 1.5% of total health spending [7,8].
However, although sarcopenia was primarily described in older subjects, it is not exclusively a disease of the elderly. It is also a condition that can occur in young people and in different pathological conditions, such as malnutrition associated with malignancy [9], rheumatoid arthritis [10], COPD [11] and other chronic inflammatory diseases. Moreover, a new type of sarcopenia has emerged in recent years, named "sarcopenic obesity" (SO), which occurs when sarcopenia and obesity are simultaneously present in the same individual [12]; in this case, the risks of presenting other comorbidities are greater than in people who have only one of the two conditions [13]. This pathology is also seen in young people and is associated with nutrition and lifestyle factors. A national survey of 3937 middle-aged and older Korean individuals found that the SO group had a lower overall dietary quality, was more sedentary and had a greater number of adverse psychological conditions than the non-sarcopenic obesity group [14]. On the other hand, undernutrition can also be associated with loss of muscle mass. Beaudart et al. (2019) [15] studied the association between these two conditions in 336 Belgian men and women aged 72.5 ± 5.8 years and found that undernutrition was a strong predictor of sarcopenia and that these subjects had a fourfold increased risk of developing severe sarcopenia during a four-year follow-up.
According to the definition, the estimation of SMM is a critical component of sarcopenia. There are a variety of skeletal mass assessment tools; however, the choice for clinical practice depends largely on availability. Technologies such as Magnetic Resonance Imaging (MRI), Dual Energy Xray Absorptiometry (DXA), Ultrasonography, and Computerized tomography, are not available in all clinical locations [16].
CC has been considered a sensitive anthropometric parameter of muscle mass in the elderly [17]. On the other hand, BIA is considered an intermediate technique between these more accurate but more expensive methods and anthropometry that is cheaper but less reliable [18]. Moreover, it is necessary to define user-friendly tools in clinical practice. This should facilitate early detection of the disease and its inclusion in public health programs. Even more, it was recently shown that the limits for definitions must be ethnically sensitive, and different countries may need their separate cut off points [19].
Thus, the objective of this study was to evaluate the association between the estimation of SMM by SMI through BIA and CC by anthropometry. There are very few articles doing this comparison since most studies evaluate SMI by DXA to relate it to CC.
Materials and methods
Participants
The sample was estimated using registries of the National Administrative Department of Statistics; 1085 randomly selected older individuals were eligible and 213 agreed to participate. Three patients did not have complete data and were excluded; thus, 210 provided written consent and were included in the study, which was conducted between March 2013 and February 2014.
The inclusion criteria were being between 55 and 75 years old and living in the community. The exclusion criteria were living at nursing homes, having a decompensated chronic disease, pacemakers, chronic kidney disease in hemodialysis, presence of edemas, metallic nonremovable pieces or prosthesis, diuretic consumption, limb amputation, hemiparesis or hemiplegia.
Anthropometric parameters
Weight (ICOB®) and height (Seca®) were measured using standardized protocols [20]. CC was measured to the nearest 0.1 cm in the standing position using a non-elastic, flexible plastic tape (Lord®). The tape was moved along the length of the right calf to find the maximal circumference, according to the recommendations of the International Society for the Advancement of Kinanthropometry (ISAK) [21]. Low muscle mass assessed by CC was determined using the cut-off points from [22], which are <33 cm for women and <34 cm for men. The Japanese population was chosen on the basis that the anthropometry of Colombians is more similar to that of Asians, possibly because Native Americans derive from at least three different Asian genetic influences [15].
SMI
Whole-body BIA measurements were performed according to a protocol previously published by [23] using a Hydra 4.200, Xitron Technologies®, San Diego (USA), device. For these measurements, verification was made of the previous fulfillment of the necessary conditions to carry out the measurements.
SMM was calculated using the Janssen formula:

SMM (kg) = (Ht² / R50) × 0.401 + (gender × 3.825) + (age × −0.071) + 5.102,

where Ht is height in centimeters, R50 is BIA resistance in ohms, gender is coded as men = 1 and women = 0, and age is in years.
Afterward, the SMI was calculated by the equation SMI (kg/m²) = SMM / height². The cut-off points for low muscle mass by this technique were defined as an SMI lower than −2 standard deviations (SD) from the mean value for Colombian young adults, as defined previously [25]: 6.42 kg/m² for women and 8.39 kg/m² for men, respectively.
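The calculation can be illustrated with the short Python sketch below. The coefficients are those of the Janssen et al. BIA equation as commonly published, and the input values are purely illustrative rather than data from this study; the cut-offs applied are the study's values of 6.42 kg/m² (women) and 8.39 kg/m² (men).

```python
def janssen_smm_kg(height_cm, resistance_ohm, male, age_years):
    """Skeletal muscle mass (kg) from the Janssen BIA equation (men = 1, women = 0)."""
    return (height_cm**2 / resistance_ohm) * 0.401 + 3.825 * male - 0.071 * age_years + 5.102

# Illustrative example: a 70-year-old woman, height 155 cm, BIA resistance 520 ohm.
height_cm, height_m = 155.0, 1.55
smm = janssen_smm_kg(height_cm, resistance_ohm=520.0, male=0, age_years=70)
smi = smm / height_m**2                     # SMI (kg/m^2) = SMM / height^2
low_muscle_mass = smi < 6.42                # women's cut-off; use 8.39 for men
print(f"SMM = {smm:.1f} kg, SMI = {smi:.2f} kg/m^2, low muscle mass: {low_muscle_mass}")
```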
Statistical analyses
Before analyzing the data, a Kolmogorov-Smirnov test was performed, which showed a normal distribution of the data. Qualitative variables were analyzed using absolute and relative frequencies, and quantitative variables using means and SDs. Pearson's correlation coefficient was used to evaluate the association between SMI and CC, and a t-test was applied to the correlation coefficients. P-values of less than 0.05 were considered to indicate statistical significance. The data were analyzed using SPSS 21.0 (SPSS, Chicago, IL).
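A minimal Python sketch of this analysis pipeline is shown below; the data are synthetic values generated only to illustrate the procedure, not study data, and scipy's pearsonr already reports the significance test on the correlation coefficient.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
smi = rng.normal(7.2, 1.0, 124)                            # illustrative SMI values (women)
cc = 33.3 + 1.7 * (smi - 7.2) + rng.normal(0, 2.0, 124)    # CC values correlated with SMI

print(stats.kstest(cc, "norm", args=(cc.mean(), cc.std(ddof=1))))   # normality check
r, p = stats.pearsonr(cc, smi)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```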
Informed consent
Informed consent has been obtained from all individuals included in this study.
Ethical approval
The research related to human use has complied with all relevant national regulations, institutional policies, and in accordance with the tenets of the Helsinki Declaration. The study protocol was reviewed and approved by the Ethics Committee of the Universidad de Caldas.
Results
124 women mean age 69.6 ± 3.1 years and 86 men mean age 69.5 ± 2.9 years were evaluated. The characteristics of the subjects are shown in Table 1. Figures 1 and 2 depict a direct positive correlation between SMI and CC for women and men respectively, although slightly higher in men. For women, the Pearson's correlation coefficient between CC and SMI was 0.57 (p-value <0.0001). For men, Pearson's correlation coefficient was 0.6 (p-value <0.0001).
Discussion
This study showed a significant, positive, moderate correlation between CC and SMI measured by BIA (r = 0.57 and 0.60 for women and men, respectively; p = 0.0001). Quinonez-Olivas et al., 2016 [26], evaluated 105 Mexican patients with a mean age of 76 years (±7.3) and showed a lower positive correlation between SMI and CC than in the present study (r = 0.31; p = 0.000). However, they considered that CC could be a reliable measure to assess muscle mass in older adults in geriatric ambulatory clinics.
On the other hand, Handayani et al., 2018 [27], examined 96 elderly healthy women aged 60 years or more, independent in their daily activities. They found a Spearman correlation of 0.43 (p< 0.05).
More recently, Santos et al. (2019) [28], evaluated DEXA and calf circumference data from 15,293 adults surveyed in the 1999-2006 NHANES. They found a higher correlation (r= 0.79 for males and 0.74 for females) between calf circumference and appendicular skeletal mass as measured by DXA. This finding was not only in older adults like those in the present study but also in adults of early and middle age.
Average CC for women was 33.3 (±2.9) cm and 35.2 (±2.5) cm for men. The difference between men and women is interesting in this study. Women had a higher body mass index, however, their CC was lower than that of men, which would suggest that they may have more visceral fat and less skeletal muscle mass, placing them at higher risk of developing sarcopenic obesity, as suggested by [29].
Moreover, other Asian countries such as Malaysia, show slightly different cut off points from their Japanese neighbors (32.0 ±4.2 cm in men and 30.5 ±4.6 cm in women) [30]. The European Consensus established a lower limit (31 cm) for CC, which would also lead to different results from those reported in this study [31].
Skeletal mass for our participants was 16.7 kg for women and 27.5 kg for men and SMI was 7.2 and 9.9 kg/m 2 , respectively. The above study from Handayani et al., 2018 [32], found a women´s mean muscle mass of 14.2 kg and SMI of 6.6 kg/m 2 . However, these women were living in a nursing home for at least 2 years and probably were more sedentary.
The first European Consensus on the definition and diagnosis of sarcopenia did not recommend anthropometry for the routine diagnosis of this condition [1]; however, in the new consensus [27], the researchers tried to facilitate the early detection of sarcopenia and consequently its timely treatment. On this occasion, they admit that CC may be an alternative for clinical environments where facilities such as BIA to estimate muscle mass are not available. Accordingly, the findings of this study, with a moderate correlation between SMI and calf circumference, lead us to recommend this alternative in low- to middle-income countries such as Colombia, where in many occasions and clinical contexts only a measuring tape is available. A European survey aimed at assessing the use of tools for the assessment of muscle mass, muscle strength and physical performance for the diagnosis of sarcopenia was completed by 255 clinicians from 55 countries across 5 continents. The authors found that only 53.3% of the responders assess muscle mass in their daily practice, with different tools, among which the most used was CC in 57.5% of the cases, followed by DXA (45.9%), skinfold thickness (30.8%), BIA (22.6%), ultrasonography (18.5%), MRI (16.4%), CT scan (14.4%) and others not specified (8.9%) [32]. The authors call for the need to standardize the tools and the cut-off values.
The cross-sectional design of the study constitutes a limitation, since it was only possible to establish an association between the muscle mass estimated by BIA and CC. Thus, subsequent studies with a larger number of subjects and more elaborate designs are required to validate the usefulness of CC in sarcopenia diagnosis, and to obtain cut-off points for this parameter that are representative of the national population, in order to establish the true usefulness of CC as a tool for the early detection of sarcopenia.
As the general population tends to age worldwide, an increase in health care costs is expected. This situation highlights the need to prevent sarcopenia in a timely manner, starting from earlier ages in life. Concomitant obesity and malnutrition, which can lead to muscle weakness, must also be addressed by public health authorities. A prevention strategy could be carried out by improving the quality of the diet and the physical activity of young people [14], as well as by earlier diagnosis using simple and inexpensive tools, such as CC measurement.
Conclusion
Although CC is not the only parameter to aid in the diagnosis of sarcopenia, it does seem to have a significant correlation with SMM and, given the moderate association found between SMI and CC, it could be a useful procedure in the clinic to identify patients at risk of sarcopenia. This leads us to suggest CC as a surrogate marker for muscle mass evaluation in older adults in settings in low- to middle-income countries where no other muscle mass diagnostic methods are available. However, the diagnosis can vary depending on the reference cut-off point used, and it would be important to validate the existing cut-off points in the literature and to establish local values for each region or country. | 2020-07-30T02:10:42.531Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "1a59e2806b7c7706ab44840e0fee52a4a2b20bc4",
"oa_license": "CCBYNCND",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/joeb/11/1/article-p57.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e96f1297dabd6c9a4e0c8de7463dede09b86b84a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52159727 | pes2o/s2orc | v3-fos-license | COSIMO – patients with active cancer changing to rivaroxaban for the treatment and prevention of recurrent venous thromboembolism: a non-interventional study
Background Around 20% of venous thromboembolism (VTE) cases occur in patients with cancer. Current guidelines recommend low molecular weight heparin (LMWH) as the preferred anticoagulant for VTE treatment. However, some guidelines state that vitamin K antagonists (VKAs) and direct oral anticoagulants (DOACs) are acceptable alternatives for long-term therapy in some patients if LMWHs are not available. LMWHs and VKAs have a number of drawbacks that can increase the burden on patients. DOACs, such as rivaroxaban, can ameliorate some burdens and may offer an opportunity to increase patient satisfaction and health-related quality of life (HRQoL). The Cancer-associated thrOmboSIs – patient-reported outcoMes with rivarOxaban (COSIMO) study is designed to provide real-world information on treatment satisfaction in patients with active cancer who switch from LMWH or VKA to rivaroxaban for the treatment of acute VTE or to prevent recurrent VTE. Methods COSIMO is a prospective, non-interventional, single-arm cohort study that aims to recruit 500 patients in Europe, Canada and Australia. Adults with active cancer who are switching to rivaroxaban having received LMWH/VKA for the treatment and secondary prevention of recurrent VTE for at least the previous 4 weeks are eligible. Patients will be followed for 6 months. The primary outcome is treatment satisfaction assessed as change in the Anti-Clot Treatment Scale (ACTS) Burdens score at week 4 after enrolment compared with baseline. Secondary outcomes include treatment preferences, measured using a discrete choice experiment, change in ACTS Burdens score at months 3 and 6, and change in HRQoL (assessed using the Functional Assessment of Chronic Illness Therapy – Fatigue questionnaire). COSIMO will collect data on patients’ medical history, patterns of anticoagulant use and incidence of bleeding and thromboembolic events. Study recruitment started in autumn 2016. Conclusions COSIMO will provide information on outcomes associated with switching from LMWH or VKA therapy to rivaroxaban for the treatment or secondary prevention of cancer-associated thrombosis in a real-life setting. The key goal is to assess whether there is a change in patient-reported treatment satisfaction. In addition, COSIMO will facilitate the evaluation of the safety and effectiveness of rivaroxaban in preventing recurrent VTE in this patient population. Trial registration NCT02742623. Registered 19 April 2016. Electronic supplementary material The online version of this article (10.1186/s12959-018-0176-2) contains supplementary material, which is available to authorized users.
Background
Cancer and its treatments (e.g. chemotherapeutic or anti-angiogenic agents) are well-established risk factors for venous thromboembolism (VTE) [1], and up to 20% of patients with active cancer will develop VTE, depending on the cancer type, stage and treatment [2,3]. Cancer-associated thrombosis (CAT) has a significant impact on prognosis and patients' quality of life (QoL). CAT is a leading cause of death among patients with cancer [4]; survivors of an initial event are at higher risk of recurrent events and bleeding during anticoagulation therapy compared with patients with VTE without malignancy [3,5]. In the CLOT, CATCH and DALTECAN studies evaluating low molecular weight heparin (LMWH) therapy for the treatment of CAT, the residual risk of a recurrent event with 6 months' LMWH therapy was~7-9%, and that for major bleeding was~2-6% [6][7][8]. CAT not only adds to the symptomatic burden of cancer but also to the treatment burden and emotional trauma caused by cancer and its treatment [9]. The risk of CAT is at its highest in the first few months after cancer diagnosis [10], and patients may already require multiple concurrent anti-neoplastic and supportive therapies during this time. Furthermore, the occurrence of CAT may delay critical treatments for cancer, including chemotherapy and surgery [11].
Current guidelines for the treatment of CAT recommend LMWH for initial and long-term (at least 3-6 months) therapy [12][13][14][15]. The American Society of Clinical Oncology (ASCO) guidelines also consider vitamin K antagonists (VKAs) as an acceptable alternative for long-term therapy if LMWHs are not available [16]. Although efficacious, both LMWH and VKAs have drawbacks that impose significant challenges in the care of patients with CAT: daily injections and a risk of heparin-induced thrombocytopenia with LMWHs, and frequent international normalised ratio monitoring and numerous food and drug interactions with VKAs [17].
Direct oral anticoagulants (DOACs; apixaban, dabigatran, edoxaban and rivaroxaban) were developed to overcome some of the limitations associated with traditional anticoagulants, and are now recommended over [15] or as an alternative [18] to LMWH/VKA therapy for long-term VTE treatment in patients without cancer. They have the potential benefits of fixed-dosing, no requirement for routine coagulation tests and few drug or food interactions, in addition to oral administration [17]. The phase III Hokusai-VTE-Cancer and select-d pilot trials provided the first randomised comparisons of edoxaban and rivaroxaban, respectively, versus dalteparin for the treatment of CAT, supporting their use in some patients [19,20]. Recently published international guidance suggests that DOACs can be considered for treatment of CAT in patients with stable cancer not receiving systemic anti-cancer therapy, and in cases where a VKA is an acceptable treatment choice [21]; this is also reflected in the most recent American College of Chest Physicians guidelines update [15].
The burden of care associated with traditional anticoagulation therapies for CAT may explain the high levels of non-adherence to current guidelines and frequent switching between anticoagulation therapies in clinical practice. In Europe, over 90% of patients receiving treatment for active cancer and first VTE are initially prescribed LMWH for the prevention of VTE recurrence; approximately 30% are subsequently switched to VKAs for long-term therapy ( Fig. 1) [22]. In a retrospective analysis of 52,911 patients with CAT from the US MarketScan Treatment Pathways database, 50% of patients were initially prescribed warfarin [23] despite guidelines recommending LMWH [16]. Furthermore, of the 40% of patients initially prescribed LMWH, 44% switched to another anticoagulant within 1 month [23]. Patient involvement and treatment satisfaction are increasingly emphasised as key to improving adherence with long-term therapy [24,25]. Unfortunately, there is only limited real-world information on patient satisfaction with or preferences for different anticoagulants for CAT treatment [25].
This paper presents the study design and rationale for the Cancer-associated thrOmboSIs – patient-reported outcoMes with rivarOxaban (COSIMO) study, which aims to collect prospective real-world data on patient satisfaction with anticoagulation treatment after a switch from LMWH or VKA to rivaroxaban in patients with cancer. In addition, COSIMO will facilitate the evaluation of adverse events (AEs) and the recurrence of VTE with rivaroxaban in this patient population. COSIMO is part of the Cancer Associated thrombosis – expLoring soLutions for patients through Treatment and Prevention with RivarOxaban (CALLISTO) programme (Table 1) [20].
Study design and patient population
COSIMO is a prospective, non-interventional, single-arm cohort study that is recruiting patients at approximately 70 sites across Australia, Belgium, Canada, Denmark, France, Germany, Italy, Netherlands, Spain and the UK.
Adults with active cancer and acute deep vein thrombosis (DVT) and/or pulmonary embolism (PE), or with recurrent VTE, who are scheduled to be switched to rivaroxaban after having received standard of care (SOC) anticoagulation therapy (either LMWH or a VKA) for CAT for ≥4 weeks are eligible. Patients with an Eastern Cooperative Oncology Group (ECOG) performance status score of 0, 1 or 2 will be included. ' Active cancer' includes cancer (other than fully treated basal cell or squamous cell carcinoma of the skin) that has been diagnosed or treated within the previous 6 months, or recurrent or metastatic cancer. Inclusion and exclusion criteria are shown in Table 2.
Patients enrolled into the study will be observed for 6 months. Treatment duration is at the physician's discretion and is not dependent on the initially scheduled treatment duration. In addition to contact at enrolment and the end of the 6-month observational period, patients should undergo two follow-up visits (at approximately week 4 and month 3; timepoints of interest for data collection) (Fig. 2). Owing to the observational nature of the study, the protocol does not define the exact dates for the two follow-up visits, and investigators are advised to schedule these to coincide with regular physician appointments.
Study outcomes
The primary outcome of the COSIMO study is treatment satisfaction, assessed as change in the Anti-Clot Treatment Scale (ACTS) Burdens score [26] from enrolment to week 4. Secondary outcomes include patient preferences with regard to convenience attributes; the change in ACTS Burdens score at month 3 and month 6; change in health-related QoL (HRQoL); patterns of anticoagulant use and incidence of bleeding, thromboembolic events and other AEs and serious AEs.
Data collection and management
Data collection is illustrated in Fig. 2. Treatment-related data will be collected at baseline and during visits that take place in routine clinical practice. Data will be recorded in an electronic case report form. The information collected at enrolment will include prior medical history and current co-morbidities, current and previous medication, a description of the index venous thromboembolic event and its treatment, and the results of routine laboratory tests. The reasons for switching to rivaroxaban, planned treatment duration and dose will also be recorded.
Treatment satisfaction will be measured using the self-administered ACTS questionnaire. Patients will be asked to complete the questionnaire at enrolment and, after the initiation of rivaroxaban therapy, at approximately week 4, month 3 and month 6 (end of the observation period). During this time, the investigator may decide to change anticoagulation therapy; in these circumstances, the patient would remain in the study until the end of the 6-month follow-up period but would not need to complete any further ACTS questionnaires. The Initial and long-term anticoagulant therapy in patients with cancer b and a first episode of VTEdata from the RIETE registry [22]. a Includes unfractionated heparin and thrombolytic agents. b Defined as newly diagnosed cancer, metastatic cancer or cancer undergoing treatment. LMWH, low molecular weight heparin; PE, pulmonary embolism; VKA, vitamin K antagonist; VTE, venous thromboembolism Functional Assessment of Chronic Illness Therapy (FACIT) Fatigue questionnaire will be used to assess HRQoL and will be completed alongside the ACTS questionnaire at enrolment, during the two follow-up visits and at the end of the observation period. Information on convenience-related patient preferences in anticoagulation treatment will be collected by means of a discrete choice experiment (DCE) in a semi-structured telephone interview. Patients will be asked to volunteer to take part in the DCE, which will be conducted by telephone at 4-12 weeks after enrolment.
AEs and serious AEs will be documented up to the completion of the 6-month observation period or up to 30 days after rivaroxaban discontinuation, whichever occurs earlier. Bleeding events (collected as serious AEs or non-serious AEs) will be adjudicated and categorised as major or non-major bleeding. Thromboembolic events, including incidental thromboembolic events documented in routine imaging (e.g. incidental PE from staging computed tomography; collected as serious AEs or non-serious AEs) will be adjudicated and categorised (symptomatic or incidental). An independent Central Adjudication Committee of four expert physicians will adjudicate major bleeding and thromboembolic events (recurrent VTE, other thromboembolic events, major adverse cardiovascular events). All events resulting in death (as reported by the investigator) will be adjudicated. Causes of death will be categorised as being related to cancer, thrombosis, bleeding, infectious diseases or other.
In cases of rivaroxaban discontinuation, the reason for permanent cessation and potential switch to another anticoagulant will be documented.
Study questionnaires
The ACTS questionnaire uses a five-point Likert scale, ranging from '5 = not at all' to '1 = extremely' , to assess patient response [26] (Additional file 1). It is a self-administered instrument that includes 13 items about the burden of anticoagulant therapy (bleeding, bruising, limitation of activities, food and drink limitations, need to avoid other medications, daily inconveniences, occasional inconveniences, adherence issues, time spent on regimen, anxiety, frustration and overall burden), and four items about the benefits of anticoagulant therapy (confidence, reassurance, satisfaction and overall benefit) [26]. The use of separate subscales for ACTS Burdens and Benefits means that it will be possible to focus specifically on the burdens of anticoagulant therapy as the primary outcome [26]. Because the ACTS questionnaire has a recall period of 4 weeks, data should be collected between -2 to +4 weeks around each visit. Further information is given in Additional file 1.
During the DCE, participants will be asked to make a choice between options ' A' and 'B' across nine treatment scenarios (plus a control scenario) on pictorial charts, considering differing combinations of utility-increasing and utility-decreasing attribute levels (trade-off type of choice) [27]. The aim of the DCE is to define the ideal anticoagulant treatment from the perspective of patients with CAT (Additional file 2).
FACIT Fatigue is a 13-item questionnaire that assesses feelings of tiredness, weakness, listlessness, frustration, energy levels, ability to perform daily tasks (including eating) and need for help to complete tasks [28]. Patients will score each item on a five-point (0 to 4) scale; a higher score indicates better HRQoL (Additional file 3).
Sample size calculation
The sample size calculation was based on the primary outcome, a change in ACTS Burdens score at week 4 after enrolment compared with baseline. Based on data from the XALIA cancer subgroup analysis (data on file) [29], the mean difference in ACTS Burdens score between enrolment and week 4 was assumed to be 1.3, with a standard deviation of 8.0 considered reasonable. Based on these assumptions, 300 patients will be needed to reach a power of 80% for the primary analyses. Considering high drop-out rates in this patient population [6], 375 patients should be included to ensure sufficient numbers for the primary analyses. Given the heterogeneous nature of the cancer population and expected high drop-out rates after week 4, the COSIMO study aims to enrol 500 patients overall to have sufficient numbers for the secondary analyses.
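The arithmetic behind this can be reproduced with a standard power calculation for a paired (one-sample) t-test, as sketched below; the assumed mean difference of 1.3 and SD of 8.0 are the values stated above, and the code is illustrative rather than the study's statistical programming.

```python
from statsmodels.stats.power import TTestPower

effect_size = 1.3 / 8.0      # standardized mean change in ACTS Burdens score
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(round(n))              # roughly 300 evaluable patients
```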
Statistical analyses
Analyses will generally include all patients who received at least one dose of rivaroxaban, and who completed the ACTS questionnaire at the particular time point being assessed (e.g. week 4, month 3 or month 6). The questionnaire responses are multiple measurements on patient-reported treatment satisfaction over time; therefore, a mixed model repeated measures analysis will be used to analyse the data. The null hypothesis is no change in ACTS Burdens score between enrolment and week 4; hypothesis testing will be at a 5% significance level. The change in the ACTS Burdens score is assumed to be normally distributed and will be analysed using a paired t-test. The assumption of normality will be tested using the Shapiro-Wilk test at the 0.10 level of significance. If the test shows significance, the Wilcoxon signed-rank test will be used. For missing items, imputation to the mean will be used where there are >50% of the questions (>6 items for ACTS Burdens) completed. Otherwise, the item will be regarded as a missing value. Subgroup analyses, by type and duration of SOC therapy and by reason for switching from SOC, will be provided. Sensitivity analyses will be conducted to investigate the potential impact of patients discontinuing the study earlier than week 4 on the outcome.
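A simplified Python sketch of the planned primary analysis and the item-imputation rule is given below. It is illustrative only (synthetic data, and the mixed-model repeated-measures analysis is not shown); it follows the stated plan of a paired t-test on the change score with a Wilcoxon signed-rank fallback when the Shapiro-Wilk test rejects normality at the 0.10 level.

```python
import numpy as np
from scipy import stats

def acts_burdens_change_test(baseline, week4, normality_alpha=0.10):
    """Paired analysis of the change in ACTS Burdens score from baseline to week 4."""
    change = np.asarray(week4, float) - np.asarray(baseline, float)
    if stats.shapiro(change).pvalue < normality_alpha:     # normality rejected
        return stats.wilcoxon(change)                      # Wilcoxon signed-rank test
    return stats.ttest_1samp(change, popmean=0.0)          # paired t-test on the change

def impute_acts_burdens(item_scores, n_items=13):
    """Mean imputation when more than half of the 13 Burdens items are answered;
    otherwise the scale score is treated as missing (returns None)."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) <= n_items // 2:
        return None
    mean_item = sum(answered) / len(answered)
    return sum(answered) + mean_item * (n_items - len(answered))

# Illustrative run with synthetic scores (not study data).
rng = np.random.default_rng(1)
baseline = rng.normal(45, 8, 300)
week4 = baseline + rng.normal(1.3, 8, 300)
print(acts_burdens_change_test(baseline, week4))
```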
Discussion
For some patients undergoing treatment for cancer, the necessity for anticoagulant therapy may be regarded as an added burden [30]. LMWHs are recommended as first-line therapy for acute and long-term treatment of CAT in clinical guidelines; nevertheless, many patients with CAT are switched to, or even initiated on, a VKA [22,23], possibly because of a preference for oral over injectable therapies [25] or for cost reasons. DOACs such as rivaroxaban are considered more convenient than VKAs because of their simple dosing regimens and lack of the need for routine coagulation monitoring [31,32]. A subgroup analysis of pooled results from EINSTEIN DVT and EINSTEIN PE demonstrated that the rate of recurrent VTE was similarly reduced in patients treated with rivaroxaban or enoxaparin/VKA therapy, and that the number of major bleeding events was reduced with rivaroxaban therapy in patients with or without active cancer [33]. More recently, the efficacy and safety of edoxaban and rivaroxaban for the treatment of CAT have been demonstrated in the first randomised head-to-head comparisons with an LMWH (dalteparin) in the phase III Hokusai-VTE-Cancer study [34] and the phase III pilot study select-d [20], respectively. Several studies on the use of rivaroxaban for the treatment of CAT in clinical practice have also been published; these results provide some reassurance that rivaroxaban is safe and effective in this clinical setting [29,[35][36][37]. Furthermore, in the EINSTEIN studies, patients treated with rivaroxaban reported greater treatment satisfaction than patients treated with enoxaparin/VKA, as measured by the ACTS questionnaire [32,38]. The role of DOACs in the treatment and secondary prevention of CAT is being investigated in ongoing studies [39][40][41][42][43].
The COSIMO study aims to collect real-world data in consecutive patients with cancer switching from SOC therapy (LMWH or VKA) to rivaroxaban in circumstances where SOC therapy cannot be continued [44]. The study began recruiting patients in October 2016. The primary outcome is a change in patient-reported treatment satisfaction (specifically, the ACTS Burdens score) between baseline (the point of switching) and week 4. Treatment satisfaction will also be measured at month 3 and at the end of the 6-month observation period, so that changes in ACTS Burdens and Benefits scores can be compared over time. Effectiveness and safety data will be gathered through AE reporting by study investigators. To improve the current understanding of treatment needs, comprehensive data on cancer type and stage, treatment patterns and clinical management will be collected. In this regard, COSIMO will provide prospective real-world data on the effectiveness and safety of rivaroxaban according to cancer type and stage, as well as on overlapping toxicities and interactions between rivaroxaban and anti-cancer therapies. COSIMO will also contribute important information on the management of challenging patient populations with CAT, such as patients in whom AEs occur because of chemotherapeutic agents (e.g. thrombocytopenia) or patients who require surgery or other interventions (biopsies, etc.).
COSIMO is a non-interventional study; therefore, the inclusion and exclusion criteria are deliberately minimal, to mirror real-world practice. However, there are some restrictions to enrolment to ensure the following: the welfare of the population under study (e.g. inclusion is restricted to patients with an ECOG performance status score of ≤2) and alignment with current guideline recommendations (e.g. exclusion of patients pre-treated with anticoagulants other than SOC). These criteria will also ensure a level of study homogeneity for the facilitation of data analyses.
The choice of instruments for evaluating HRQoL is critical for accurate interpretation of patient self-reporting. The COSIMO study will use the ACTS and a DCE to record patient treatment satisfaction and preferences, respectively, and FACIT Fatigue instruments to measure changes in HRQoL relating to the cancer itself.
The ACTS questionnaire is specific for anticoagulation and, therefore, the score should not be affected by the patient's cancer stage and/or cancer treatment. It is a modified form of the Duke Anticoagulation Satisfaction Scale, a 25-item, single scale, which is used to assess limitations, inconveniences and/or discomforts, as well as positive impacts, related to anticoagulant treatment [45]. ACTS was validated using data from the EINSTEIN DVT study [31], which included patients with acute symptomatic DVT treated with rivaroxaban or enoxaparin/VKAs [46]. A rigorous development process was used to ensure that it was appropriate for patients with atrial fibrillation and VTE globally [31]. The DCE is a validated tool for assessing patient preference for anticoagulation therapy [47][48][49].
Fatigue is one of the most common side effects in patients with cancer who are receiving cancer therapy [28,50], and it may have a pervasive effect on treatment satisfaction. The FACIT Measurement System offers several benefits for measuring HRQoL in people with cancer and other chronic diseases and has proven utility for measuring change in HRQoL in observational studies [51,52]. The content was developed jointly by experts and patients, and the scales have been validated in patients with different forms of cancer [52].
One of the limitations of this study is that it was not designed to examine the impact of cancer subtypes, or other potential confounding factors that vary over time, on outcomes. An additional limitation, which applies to all studies enrolling patients with cancer, is the high discontinuation rates over time. Nonetheless, this should have minimal impact on the primary outcome, which is measured at week 4 after initiation of treatment with rivaroxaban; there would likely be minimum impact on other outcomes. There might also be the potential to overestimate treatment satisfaction with rivaroxaban due to selection bias, because the patients eligible for COSIMO (or their physicians) had chosen not to continue with SOC treatment. Finally, the lack of a control patient cohort might make it difficult to put the results into perspective, but finding a matched comparator group of patients with CAT would have been a major challenge.
Conclusions
The ongoing COSIMO study is designed to evaluate satisfaction with anticoagulation treatment in patients with active cancer at risk of recurrent VTE who have switched from LMWH/VKA to rivaroxaban. It will also evaluate the safety and effectiveness of rivaroxaban in preventing recurrent VTE in this important patient population. The evaluation tools in this study (the ACTS and FACIT Fatigue questionnaires and a DCE focused on treatment preferences) have been specifically chosen to provide information that might help guide the future management and treatment of patients coping with serious concurrent illnesses.
"year": 2018,
"sha1": "1d10d8cdcb018aa42fd6a645d2be7592c1aedac7",
"oa_license": "CCBY",
"oa_url": "https://thrombosisjournal.biomedcentral.com/track/pdf/10.1186/s12959-018-0176-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d10d8cdcb018aa42fd6a645d2be7592c1aedac7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A New Broadband and Strong Absorption Performance FeCO3/RGO Microwave Absorption Nanocomposites
A novel composite of FeCO3 nanoparticles, which are wrapped with reduced graphene oxide (RGO), is fabricated using a facile one-pot solvothermal method. The composite consists of a substrate of RGO and FeCO3 nanoparticles that are embedded in the RGO layers. The experimental results for the FeCO3/RGO composite reveal a minimum reflection loss (−44.5 dB) at 11.9 GHz when the thickness reaches 2.4 mm. The effective bandwidth, over which the reflection loss is below −10 dB, is 7.9 GHz (between 10.1 and 18 GHz). Compared to GO and RGO, this type of composite shows better microwave absorption thanks to improved impedance matching. Overall, this thin and lightweight FeCO3/RGO composite is a promising candidate for absorbers that require both strong and broad absorption.
Introduction
Because of the ubiquity of electronic devices, electromagnetic radiation and, in particular, signal interference have become a global problem [1][2][3]. As a result, great efforts have been made to reduce electromagnetic pollution and other related problems. One promising approach is the use of high-performance microwave absorbing materials (MAMs). Graphene, a relatively new carbon-based material, has excellent properties such as high electron mobility, high permittivity and a high specific surface area, which can dampen electromagnetic waves effectively through polarization relaxation [4][5][6][7]. However, pure graphene reflects most of the incident electromagnetic waves, making it unsuitable as a MAM because of its poor impedance matching.
Fortunately, it is possible to overcome this problem by combining graphene with magnetic materials [8,9]. Most of the related studies have focused on soft magnetic materials, which exhibit high magnetic loss due to natural resonance and can produce better results when compounded with graphene. For example, Cui et al. [10] prepared a hollow Fe3O4@RGO composite by a facile route. The minimum reflection loss is −41.89 dB at 6.7 GHz, and for thicknesses in the range of 1-4 mm the reflection loss of the nanocomposite is below −10 dB from 3.4 GHz to 13.6 GHz. Wang et al. [11] loaded MnFe2O4 nanoparticles on RGO sheets by a one-step hydrothermal method. The minimum reflection loss of MnFe2O4/RGO is −32.8 dB at 8.2 GHz with a thickness of 3.5 mm, and the absorption bandwidth with the reflection loss below −10 dB is up to 4.8 GHz (from 7.2 to 12 GHz). Feng et al. [12] synthesized ZnFe2O4@SiO2@RGO core-shell microspheres by a "coating-coating" method. The minimum reflection loss of the sample with a thickness of 2.8 mm can reach −43.9 dB at 13.9 GHz. In recent years, composites of paramagnetic FeCO3 and RGO have shown great promise in the field of batteries due to their excellent electrochemical properties [13][14][15]. However, as far as we know, the microwave absorption properties of FeCO3/RGO, and especially its magnetic loss characteristics, have not been investigated. Therefore, FeCO3/RGO composites were synthesized using a one-pot solvothermal method. To reveal the microwave absorption mechanism of the FeCO3/RGO composite, the frequency dependence of both the complex permittivity and the reflection loss was studied and compared with GO and RGO. The outcome of this study can aid the development of lightweight and broadband electromagnetic-wave absorbers.
Experimental
10.8 g FeCl3·6H2O, 7.2 g urea, 5 g PVP and 1.2 g nano-iron powder were added into 400 mL of a graphene-oxide slurry (purchased from Qitaihe Baotailong New Materials Co., Ltd. (Qitaihe, China), containing 3.3 mg/mL GO). The mixture was dispersed, aided by ultrasonic treatment for 30 min, to form a homogeneous solution, and subsequently put into a 500 mL Teflon-lined stainless-steel autoclave, where it was kept at 200 °C for 12 h. After cooling to room temperature, the reaction products were washed with deionized water and alcohol three times. Finally, the reaction products were dried in a vacuum furnace at 60 °C for 24 h.
The morphology, structure, surface elements, and electromagnetic parameters were analyzed using field emission scanning electron microscopy (FE-SEM, Nova Nano FE-SEM450/650, Eindhoven, Netherlands), transmission electron microscopy (TEM, LIBRA200, Oberkochen, Germany), X-ray diffraction (XRD, D/MAX-2500PC, Rigaku, Tokyo, Japan), X-ray photoelectron spectroscopy (XPS, PHI5300, Ulvac-Phi, Tokyo, Japan), and a vector network analyzer (VNA, PNA-N5244A, Agilent, Santa Clara, CA, USA), respectively. The samples for the electromagnetic parameter measurements were prepared by mixing the products (60%) with molten paraffin wax (40%) and placing them into a toroidal mold (Φin = 3 mm, Φout = 7 mm) with a thickness of 2.0-3.0 mm. The test software (Agilent, Santa Clara, CA, USA) is 85071 and the calibration kit is 85050D. Before the test, the permittivity of air was measured as an evaluation of the calibration effect.
Results and Discussion
Figure 1a shows the XRD patterns of GO, RGO, and the FeCO3/RGO composite. There is a broad peak at 13.4° in GO (pattern a), which corresponds to the (001) reflection of GO [16]. The broad peak at 25.2° and the disappearance of the peak at 13.4° (pattern b) indicate that GO was reduced to RGO. The XRD patterns of the FeCO3/RGO composite (pattern c) show that all the diffraction peaks match JCPDS No. 29-0696, which confirms that the FeCO3/RGO composite was indeed obtained. The RGO peaks are absent in the FeCO3/RGO pattern [17] because the uniform distribution of FeCO3 particles between the graphene layers (Figure 2) prevents the interlayer aggregation of the RGO sheets, so the diffraction intensity of RGO is much smaller than that of FeCO3.
The XPS survey spectrum of FeCO3/RGO (Figure 1b) shows that the composite consists of Fe, O, C, and N. Four peaks were detected (284.4 eV, 285.8 eV, 287.7 eV, 289.2 eV) in the C1s spectrum (Figure 1c), which correspond to C=C/C-C, C-O, C=O, and FeCO3 [13], respectively. As shown in Figure 1d (Fe2p), two peaks appear at 710.1 eV and 723.4 eV, which correspond to Fe2p3/2 and Fe2p1/2. Furthermore, a satellite peak of Fe2p3/2 appears at 714.1 eV [18]. These characteristic peaks confirm the presence of FeCO3/RGO. The formation of FeCO3 can be attributed to the hydrolysis of urea at the reaction temperature, which releases NH3 and CO2, the reduction of Fe3+ to Fe2+ by the added nano-iron powder, and the precipitation of Fe2+ with the carbonate ions generated in solution.

Figure 3a,b illustrate the real (ε′) and imaginary (ε″) parts of the complex permittivity. The values of ε′ show the same trend for all samples: ε′ decreases with increasing frequency. Furthermore, the ε′ of FeCO3/RGO is higher than for both GO and RGO due to its enhanced polarization characteristics. Also, the ε″ of FeCO3/RGO is larger than for both GO and RGO due to its higher conductivity [19]. Figure 3c,d depict the real (µ′) and imaginary (µ″) parts of the complex permeability of the composites. The complex permeability of GO and RGO varies similarly with frequency, indicating that the reduction reaction has little effect on the magnetic properties of GO. However, the observed trend for FeCO3/RGO is different from GO and RGO: the complex permeability of FeCO3/RGO varies greatly between 10 GHz and 16 GHz because the magnetic FeCO3 nanoparticles can produce natural resonance loss and exchange resonance loss (due to the size effect, surface effect, and spin wave excitation) [20][21][22][23].

As is well known, the reflection loss (RL) can be used to assess and characterize microwave absorption performance. According to the transmission line model, the RL of a metal-backed microwave absorption layer can be calculated by the following formulas [24]:

$$Z_{in} = Z_0 \sqrt{\mu_r/\varepsilon_r}\,\tanh\!\left(j\,\frac{2\pi f d}{c}\sqrt{\mu_r \varepsilon_r}\right)$$

$$R_L\,(\mathrm{dB}) = 20\log_{10}\left|\frac{Z_{in}-Z_0}{Z_{in}+Z_0}\right|$$

Here, Z_in is the input impedance of the absorber, Z_0 is the impedance of free space (Z_0 is generally taken as 1 in normalized form), ε_r and µ_r are the complex permittivity and permeability, c is the speed of light in vacuum, d is the thickness of the absorber, and f is the microwave frequency. 3D theoretical RL plots of the composites are shown in Figure 4a-c. It can be observed that, with the reduction of GO and the introduction of FeCO3, the microwave absorption properties improve significantly. FeCO3/RGO nanocomposites show excellent microwave absorption properties for thicknesses between 2 mm and 3 mm. RL curves of the composites versus frequency are shown in Figure 4d. RL(min) appears in the X and Ku bands for thicknesses in the range of 2-3 mm and shifts to lower frequency with increasing thickness.
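Purely as an illustration of how the reflection loss follows from the complex ε_r and µ_r via the transmission-line expressions above, a minimal numerical sketch is given below. The material parameters used in the example call are arbitrary placeholders, not the measured values for FeCO3/RGO.

```python
import numpy as np

def reflection_loss_db(eps_r, mu_r, f_hz, d_m, c=2.998e8):
    """Reflection loss (dB) of a single metal-backed absorber layer.

    eps_r, mu_r : complex relative permittivity and permeability at f_hz
    f_hz        : frequency in Hz
    d_m         : absorber thickness in m
    """
    # Normalised input impedance of the absorber layer (Z0 = 1 for free space)
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * (2 * np.pi * f_hz * d_m / c) * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

# Placeholder material parameters (illustrative only, not measured data)
eps_r = 9.0 - 3.5j
mu_r = 1.1 - 0.15j
print(reflection_loss_db(eps_r, mu_r, f_hz=11.9e9, d_m=2.4e-3))
```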
When the thickness is 2.4 mm, FeCO3/RGO shows optimal microwave absorption and reaches an RL(min) of −44.5 dB, while the corresponding bandwidth with RL below −10 dB is 7.9 GHz (10.1~18 GHz). It is noteworthy that the effective bandwidth of FeCO3/RGO remains at 6 GHz or more and stays stable when the thickness is 2~3 mm. Table 1 lists some reported microwave absorption composites based on soft magnetic materials and on graphene, together with the FeCO3/RGO composite prepared in this work. Notably, the FeCO3/RGO composite not only displays a promising negative RL value, but also has a wide effective absorption bandwidth due to its good impedance matching.
Conclusions
The FeCO3/RGO composite produced and investigated in this study is a novel material with excellent microwave absorption. The composite can not only effectively facilitate electromagnetic loss but also improve impedance matching. Specifically, the reflection loss at 11.9 GHz, when the composite thickness is 2.4 mm, reaches a minimum of −44.5 dB, and the effective bandwidth is 7.9 GHz (from 10.1 to 18 GHz). In addition, we observed very stable broadband absorption characteristics for a thickness range of 2-3 mm. Because of the good properties mentioned above, this composite can be regarded as an excellent microwave absorber with the potential for many commercial applications.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2019,
"sha1": "d7ffb59228e38be0fda7386b92f6fc10758e93a9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/12/13/2206/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18f42e927e81ca9754f8b121dd47a2950b3cb970",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Increased Forest Soil CO2 and N2O Emissions During Insect Infestation
Forest soils are major sinks of terrestrial carbon, but this function may be threatened by mass outbreak events of forest pests. Here, we measured soil CO2-C and N2O-N fluxes over one year in a Scots pine (Pinus sylvestris L.) forest that was heavily infested by the nun moth (Lymantria monacha L.) and at an adjacent noninfested (control) forest site. In the infested forest, net emissions of CO2-C were higher during the main defoliation, summer and autumn, while indications of increased N2O-N emissions were found at one sampling date. On the basis of this, a microcosm incubation experiment with different organic matter treatments was conducted. Soil treatments with needle litter, insect feces plus needle litter, and insect feces showed 3.7-, 10.6-, and 13.5-fold higher CO2-C emissions, while the N2O-N emissions of the insect feces plus needle litter and insect feces treatments were 8.9- and 10.4-fold higher, compared with soil treatments without added organic matter (control). Hence, defoliation in combination with high inputs of organic matter during insect outbreaks distinctly accelerates decomposition processes in pine forest soils, which in turn alters forest nutrient cycling and the functioning of forests as carbon sinks.
Introduction
The worldwide forest area affected annually by insect outbreaks amounts to about 36.5 million hectares [1], thereby representing a threat to the function of forests as a carbon sink [2][3][4][5]. Insect defoliation not only decreases canopy biomass and tree growth and inhibits the production of new foliage, but can also be accompanied by changes in the N nutritional status of infested trees (e.g., decreased net N uptake, N accumulation in fine roots and needles) [6,7]. Defoliation of forest areas during forest pest outbreaks distinctly increases organic input into the soil in the form of insect feces, cadavers, litter, and other plant material [7][8][9]. Such changes in litter quality and quantity affect the soil organic matter composition [10] and alter nutrient cycling [11][12][13]. Depending on the outbreak intensity, insect feces can account for up to 46% of the total litterfall amount in Scots pine forests [14]. The easily soluble structure of insect feces, with high amounts of labile C and extractable N, can facilitate nutrient release in soils [15][16][17]. Therefore, decomposition processes in the course of defoliations may be enhanced [10,18], thereby triggering CO2 emissions from forest soils [15,19,20]. In contrast, findings from xylophagous insects (compared to phytophagous insects) showed contradictory results. For example, Ponderosa pine (Pinus ponderosa Laws.) stands infested by different bark beetle species (Scolytinae) [21] as well as temperate mixed forests infested by the emerald ash borer (Agrilus planipennis F.) [22] showed no effect on soil CO2 emissions. Bark beetle-induced tree diebacks in lodgepole pine (Pinus contorta Dougl. ex Loud.) forests even caused decreased CO2 emissions [23,24].
N2O flux changes during insect outbreaks are less well studied. No effects on N2O emission and nitrate leaching were detected in a manipulation experiment with intensive defoliation of hybrid poplar (Populus x euroamericana cv. Eugeneii) stands by the gypsy moth (Lymantria dispar L.) [25]. A rather decelerating effect on nutrient cycling in the course of insect outbreaks, with reduced litter decomposition and accumulation of the soil nitrogen storage, was assumed by Madritch and Lindroth (2015) [26], Verkaik et al. (2006) [27], and Ritchie et al. (1998) [18]; this may be related to the high amounts of tannins in feces, which can build recalcitrant protein-tannin complexes and impair soil microbial activity. Therefore, nitrogen deriving from insect outbreaks is hypothesized to be redistributed in the ecosystem rather than lost [28].
Carbon and nitrogen cycling is a function of complex effects and interdependencies between climate, topography, soil properties and microbial soil community structures [25]. Moreover, forest soils seem to have a certain resistance to biotic disturbances. This may explain why effects of organic input from insect outbreaks on microbial respiration or nitrogen immobilization in soils are often only detectable at relatively high levels of defoliation (>70%) [19,29]. Nevertheless, the effects of forest pests on greenhouse gas emissions are not sufficiently understood to predict environmental effects. The lack of knowledge about C and N balances at the soil-atmosphere interface during biotic disturbances contributes to this unpredictability.
In this study, we analyzed CO2-C and N2O-N fluxes in the course of a nun moth (Lymantria monacha L.) outbreak in Scots pine (Pinus sylvestris L.) forests in Germany. Moth larvae regularly hatch in April/May, feed on pine needles until July, and can completely defoliate trees during this period [30]. We quantified net CO2-C and N2O-N fluxes from a forest soil in an infested compared with a noninfested forest stand. In a consecutive microcosm incubation experiment, we analyzed net CO2-C and N2O-N fluxes under controlled conditions with treatment additions of pine needles, insect feces, and pine needles plus insect feces. We expected that (i) decomposition processes would be accelerated by organic inputs in the course of the insect outbreak and (ii) CO2-C and N2O-N emissions from soils would respond to these biogeochemical changes.
Field Measurement
The measurements were conducted in a 65-year-old Scots pine forest (52°8′38″ N, 13°45′14″ E, 42 m above sea level) heavily infested by the nun moth (~80% crown defoliation) compared with a noninfested 65-year-old Scots pine forest growing on a site with similar site conditions (52°9′29″ N, 13°36′47″ E; 35 m above sea level). Both forests are characterized as "white moss pine forests" (Leucobryo-Pinetum W. Mat.) grown on podzol (Food and Agriculture Organization of the United Nations (FAO) classification) on aeolian sand (0.2-0.63 mm), with little gravel and a pH (1:10 in H2O) ranging from 3.2 to 3.9 in the mineral soil (Ah) horizon. The soil C/N ratio of the infested site was 29.4 and that of the noninfested site 30.3. The sites did not differ significantly in their soil microbial community composition in early May, prior to the nun moth population peak [31]. Average annual temperature was 9.2 and 10.8 °C and average annual precipitation was 611 and 474 mm for the study sites in 2013 and 2014, respectively (German Federal Meteorological Service (DWD) and Climate Data Center (CDC), Weather Station Lindenberg (Station ID: 3015), 13.07.2018; see also Supplementary Figure S1; for a description of site properties, see [6,31]). We measured the CO2-C and N2O-N fluxes across one year (August 2013-July 2014), i.e., on three dates in 2013 (August-October, with n = 18 for each date) and five dates in 2014 (March-July, with n = 30 for each date). The study sites were arranged as a paired sample comparison of noninfested versus infested forests located in spatial proximity to each other, with n = 9 in 2013 and n = 15 in 2014 for each plot. To quantify soil CO2-C and N2O-N emissions, a polyvinyl chloride (PVC) lid (25 cm diameter, 13 cm high) was applied to a cylindrical PVC-U frame (25 cm diameter, 10 cm high) which was permanently inserted (7-8 cm deep) into the organic layer and Ah horizon of the sampling sites. Litter was not removed within the frames. CO2-C fluxes were measured with a four-point sampling method, in which 20 mL air samples were taken with a syringe via a septum from the closed chamber 0, 20, 40, and 60 min after its sealing and stored in evacuated glass exetainers. CO2-C and N2O-N concentrations were determined by gas chromatography (ECD, Shimadzu, Duisburg, Germany). Fluxes were calculated from the linear change of the gas concentrations during chamber closure, the volume of the chamber and the enclosed surface area, according to Lessard et al. (1993) [32]. Values were corrected for air temperature and air pressure using the conversion factor = (air pressure [Pa] × molar weight [g mol−1]) / (gas constant [J mol−1 K−1] × air temperature [K]), and projected to one square meter and one hour. Simultaneously, the temperature of the top 10 cm of soil, the gravimetric soil water content of the Ah horizon, and the sampling time were recorded for each plot.
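A minimal sketch of the flux computation described above (a linear fit of the four headspace concentrations, the ideal-gas pressure and temperature correction, and scaling by chamber geometry) is shown below. The numbers in the example call are placeholders, and the helper function is an illustration rather than the original calculation script.

```python
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

def chamber_flux(ppm, minutes, headspace_m3, area_m2, temp_c, pressure_pa,
                 molar_mass_g):
    """Soil gas flux (mg element m^-2 h^-1) from closed-chamber samples.

    ppm          : headspace concentrations at each sampling time
    minutes      : sampling times after chamber closure (e.g. 0, 20, 40, 60)
    molar_mass_g : mass of the reported element per mole of gas
                   (12.011 for CO2-C, 28.014 for N2O-N)
    """
    # Linear change of concentration during chamber closure (ppm per hour)
    slope_ppm_per_h = np.polyfit(np.asarray(minutes) / 60.0,
                                 np.asarray(ppm, dtype=float), 1)[0]
    # Ideal-gas conversion: mg of element per m^3 of air per ppm,
    # corrected for air pressure and air temperature
    mg_per_m3_per_ppm = pressure_pa * molar_mass_g / (R * (temp_c + 273.15)) * 1e-3
    # Project onto one square metre of enclosed soil surface and one hour
    return slope_ppm_per_h * mg_per_m3_per_ppm * headspace_m3 / area_m2

# Placeholder example: 25 cm diameter chamber with a 13 cm high lid
area = np.pi * 0.125 ** 2
print(chamber_flux(ppm=[410, 460, 505, 555], minutes=[0, 20, 40, 60],
                   headspace_m3=area * 0.13, area_m2=area,
                   temp_c=15.0, pressure_pa=101325.0, molar_mass_g=12.011))
```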
Incubation Experiment
For the microcosm incubation experiment, randomly collected upper mineral soil (Ah) of the control site was used, homogenized and sieved at 2 mm, and 200 g fresh weight were transferred to each of 20 glass incubators (1000 cm3). The incubators were attached to an automated gas chromatographic system (GC) with a 63Ni electron capture detector (ECD) for the measurement of CO2 and N2O concentrations (Shimadzu, Duisburg, Germany) (for a description, see also [33]). The air space of the incubators was flushed with synthetic air and the flux was calculated by determining the differences between inlet air and exhausted air (12 measurements per incubator and day). The experimental runtime was 31 days, including six days of soil pre-incubation. Treatments were added on day seven in the form of feces, Scots pine needle litter and a mixture of both, with five biological replicates for each treatment. The feces was produced under laboratory conditions from Dendrolimus pini L.; it was mixed, dried at 20 °C for 72 h, and needle and bark residues were removed. Needle litter was collected from the noninfested control site in 2014 and dried at 20 °C for 72 h. The total amount of needle and feces input was 5 g dry-weight-equivalent of total C, resulting in the addition of 49.2 mg feces (feces treatment), 48.9 mg needles (needle treatment) and 24 mg of both (feces plus needle litter treatment), respectively, per gram soil. The temperature was kept constant at 20 °C and there was no light exposure during the experiment. On day 4 and day 18, 60 mL dH2O were added to the incubators. The amount of added water was adjusted to reach 70-80% of the soil's maximum water holding capacity. The element contents of aluminum (Al), calcium (Ca), iron (Fe), potassium (K), magnesium (Mg), manganese (Mn), sodium (Na), phosphorus (P) and sulphur (S) of the soil, feces, and needle litter samples used are given in Supplementary Table S1. Values are based on HNO3 extraction (described in [34]) and subsequent measurement using an ICP-OES (iCAP 6300 Duo VIEW ICP Spectrometer, Thermo Fisher Scientific GmbH, Dreieich, Germany). For the measurement of total carbon (Ctot) and nitrogen (Ntot) contents, soil was dried at 105 °C for 24 h, finely ground, and analyzed with a multi C/N total organic carbon analyzer (Analytik Jena, Jena, Germany).
Statistical Analyses
Statistical analyses were conducted in R 3.3.3 [35]. All data sets were tested for normality of distribution and homogeneity of variances by applying the Shapiro-Wilk test and Levene's test, respectively. CO2-C and N2O-N fluxes were analyzed separately for each sampling date by paired Wilcoxon signed-rank tests, with n = 18 (for each of the three dates in 2013) and n = 30 (for each of the five dates in 2014), respectively. The Kruskal-Wallis test was used to detect differences between the accumulated CO2-C and N2O-N fluxes from the four treatments of the incubation experiment. In addition, Spearman's rank correlations (rS) were used to assess the relationships between soil greenhouse gas fluxes (CO2-C and N2O-N) and soil temperature as well as soil water content in the field study. CO2-C fluxes correlated positively with soil temperature (rS = 0.738, p = 0.046, Supplementary Figure S2), but not with soil water content (rS = −0.700, p = 0.233). N2O-N emissions were correlated neither with soil temperature (rS = −0.095, p = 0.840) nor with soil water content (rS = −0.300, p = 0.683).
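The published analyses were run in R; purely to illustrate the paired design and the correlation analysis described above, an equivalent sketch using Python/scipy with invented example values is shown below.

```python
import numpy as np
from scipy import stats

# Invented example values for one 2013 sampling date (n = 9 paired plots),
# CO2-C flux in mg m^-2 h^-1
infested = np.array([62.0, 55.3, 71.8, 48.9, 66.4, 59.1, 70.2, 64.5, 52.7])
control = np.array([48.2, 50.1, 60.3, 45.0, 58.7, 49.9, 61.4, 55.0, 47.3])

# Paired Wilcoxon signed-rank test, infested versus noninfested plots
print(stats.wilcoxon(infested, control))

# Spearman rank correlation between date-mean CO2-C flux and soil temperature
mean_flux = np.array([21.0, 35.2, 18.4, 35.0, 41.1, 52.3, 60.8, 30.1])
soil_temp = np.array([14.2, 16.0, 9.8, 12.5, 13.9, 17.5, 18.3, 11.0])
print(stats.spearmanr(mean_flux, soil_temp))
```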
Field Measurement
Net CO2-C emissions were higher at the infested site than at the noninfested control site on several sampling dates, and indications of increased N2O-N emissions were found at one sampling date (Figures 1 and 2).
Incubation Experiment
The accumulated carbon and nitrogen fluxes across the 31-day study period were 54.04 ± 1.27 mg CO2-C h−1 in soil treatments without addition of organic matter (control), 202.11 ± 4.39 mg CO2-C h−1 in soil treatments with addition of needles, 574.83 ± 37.85 mg CO2-C h−1 in soil treatments with addition of needles plus insect feces, and 731.46 ± 7.30 mg CO2-C h−1 in soil treatments with addition of insect feces (Figure 3). N2O-N fluxes amounted to 0.59 ± 0.34 µg N2O-N h−1 in soil treatments without addition of organic matter (control), 0.91 ± 0.13 µg N2O-N h−1 in soil treatments with addition of pine needles, 5.25 ± 0.45 µg N2O-N h−1 in soil treatments with addition of pine needles plus insect feces, and 6.14 ± 0.27 µg N2O-N h−1 in soil treatments with addition of insect feces (Figure 4).
Maximum emissions of 10.98 mg CO2-C h−1 were reached by the insect feces treatment on the fourth day after treatment addition (with 48-fold higher emissions compared with the control). Similarly, maximum N2O-N fluxes of 0.07 µg N2O-N h−1 were reached by the feces treatment on the fourth day after treatment addition (with 13-fold higher emissions compared with the control). From that day onwards, the fluxes of both gases decreased slowly with time.
Despite the similar element contents of needles and feces (see Supplementary Table S1), inputs of feces significantly accelerated soil CO2-C and N2O-N fluxes. When compared with the soil treatment without addition of organic matter (control), the experimental inputs of C and N via needles and feces accelerated CO2-C emissions 3.7-fold in treatments with addition of needles, 10.6-fold in treatments with addition of needles plus feces, and 13.5-fold in treatments with addition of feces (all p-values < 0.009). N2O-N emissions were accelerated on average 8.9-fold in treatments with addition of needles plus feces (p < 0.010) and 10.4-fold in treatments with addition of feces (p < 0.010), while they were not significantly increased in treatments with addition of needles only (p = 0.117). The soil C/N ratio before treatment addition was 32.06 in all incubators. At the end of the experiment, the C/N ratio of the control stayed almost the same, at 32.08, while in the feces treatment the C/N ratio increased to 32.23. In contrast, the C/N ratios of the needle treatment and the needle plus feces treatment decreased to 31.43 and 31.46, respectively, which was significantly lower compared with the feces treatment (p = 0.018 and p = 0.024).
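The fold increases quoted above follow directly from the accumulated fluxes reported in this section; a small check of that arithmetic is shown below.

```python
# Accumulated 31-day fluxes reported above (mean values)
co2_mg_h = {"control": 54.04, "needles": 202.11,
            "needles+feces": 574.83, "feces": 731.46}
n2o_ug_h = {"control": 0.59, "needles": 0.91,
            "needles+feces": 5.25, "feces": 6.14}

for name, value in co2_mg_h.items():
    print(f"CO2-C, {name}: {value / co2_mg_h['control']:.1f}-fold of control")
# needles 3.7-fold, needles+feces 10.6-fold, feces 13.5-fold

for name, value in n2o_ug_h.items():
    print(f"N2O-N, {name}: {value / n2o_ug_h['control']:.1f}-fold of control")
# needles 1.5-fold, needles+feces 8.9-fold, feces 10.4-fold
```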
Discussion
Scots pine forests infested with the nun moth showed increased soil emissions of CO2-C on several sampling dates and indications of increased N2O-N emissions at one date, both of which may be related to the altered quality and quantity of organic inputs during the pest outbreak. In the incubation experiment, feces input rapidly accelerated CO2-C and N2O-N emissions from soil, with up to 14- and 25-fold higher fluxes, respectively, compared with those from needle litter. The increased deposition of organic matter during the defoliation of a pine stand provides large amounts of labile organic C and N, which in turn may positively influence microbial decomposition processes [36]. An experiment with feces from Melanoplus borealis F. and Chorthippus curtipennis F. feeding on different diets demonstrated that up to 46% of the CO2-C emitted from soil can originate from the added feces [37]. The much higher organic input during nun moth outbreaks compared with natural (noninfested) conditions (e.g., 300% higher feces and needle litter N input at our study site in 2014 [6]), together with the easily biodegradable structure of feces, can explain the higher CO2-C emissions from the infested forest site (even following the outbreak, when the actual defoliation activity has already ceased). For example, fir forests (Abies spec.) defoliated by the Siberian moth (Dendrolimus superans sibiricus Tschtvrk.) showed increased rates of soil respiration even three years after the pest outbreak [38]. Therefore, the increased deposition of organic matter (especially feces) during our nun moth defoliation may have contributed to the enhanced greenhouse gas emissions from forest soils during the outbreak years.
The biogeochemical pathways by which carbon is transformed and moves through forest ecosystems are strongly coupled with those of nitrogen [39]. High inputs of labile carbon enhance microbial growth and nitrogen immobilization, while low C inputs rather promote N leaching [19]. The increased soil C/N ratios of the feces treatment in our incubation experiment may therefore be an indicator of microbial immobilization, and this is supported by the relatively slow decrease of the gas emission rates following peak emissions. In contrast, C/N ratios under field conditions are often observed to decrease during insect outbreaks [20,23,24,40], even on our sampling sites [41].
The availability of organic inputs, microbial activity and the resulting greenhouse gas emissions are influenced by soil aeration, fluctuation of the water table, nutrient availability, temperature, and favorable microclimatic conditions (e.g., temperature and precipitation) [42][43][44][45]. Our field study was conducted in a continental climate, with (temporarily) semi-arid conditions during summer and autumn, which can hamper fast microbial decomposition [46,47]. This might have a negative impact on the microbial decomposition of organic matter during pest outbreaks and explain the relative differences in CO2-C and N2O-N emissions between noninfested and infested real forests compared with those from our microcosm experiment under optimized conditions (see also [15]). Further, N2O emissions from forest soils are spatially and temporally highly variable ("hot spots" and "hot moments" of N2O emissions [48]), which makes measurement and comparability across sites difficult [45]. However, on our infested study site, the abundance of NO2−-reducers (nirK genes) in the soil was also found to be increased [41], indicating a genetic potential for accelerated N2O emissions.
To our knowledge, we show for the first time that both CO2 and N2O emissions can be triggered simultaneously by organic inputs deriving from pest insects. Nitrification and NO3− losses as well as denitrification and N2O losses are expected to increase as the fraction of mineralized ammonium increases [49]. These processes can take place simultaneously in the same soil, e.g., in large, air-ducting pores and inside large soil aggregates, respectively [44,45,50]. Further, accelerated tree growth and increased carbon storage in biomass as well as increased autotrophic respiration from the rhizosphere and decreased heterotrophic respiration from soil microorganisms are consequences of N inputs, thereby contributing to the carbon-sink potential of a forest [39]. However, infested forest trees are often impaired in their N nutrition and N uptake and show reduced biomass growth rates as a consequence of the defoliation [6,51,52]. Additionally, our results suggest increased microbial decomposition and CO2 emissions during insect outbreaks. All this has the potential to reduce the forest's carbon sequestration capacity or even switch the forest to a carbon source [2][3][4][5].
Figure 1. CO2-C emissions (mg m−2 h−1) from the mineral soil during an outbreak of the nun moth (Lymantria monacha L.) in 2013 and 2014, and from the adjacent noninfested control Scots pine (Pinus sylvestris L.) forest site. Infested = red, noninfested = green; Aug = August, Sep = September, Oct = October, Mar = March, Jun = June, Jul = July. Box plots show means (dotted lines) and medians (solid lines) (n = 9 in 2013 and 15 in 2014 for each plot). Whisker extension equals 1.5 times the interquartile range. Asterisks indicate significant differences between infested and control plots within one sampling time (paired Wilcoxon signed-rank tests, p ≤ 0.050).
Figure 2. N2O-N emissions (µg m−2 h−1) from the mineral soil during an outbreak of the nun moth (Lymantria monacha L.) in 2013 and 2014, and from the adjacent noninfested control Scots pine (Pinus sylvestris L.) forest site. Infested = red, noninfested = green; Aug = August, Sep = September, Oct = October, Mar = March, Jun = June, Jul = July. Box plots show means (dotted lines) and medians (solid lines) (n = 9 in 2013 and 15 in 2014 for each plot). Whisker extension equals 1.5 times the interquartile range. Asterisks indicate significant differences between infested and control plots within one sampling time (paired Wilcoxon signed-rank tests, p ≤ 0.050).
Figure 3. Accumulated CO2-C flux (mg h−1) of the incubators with treatments of feces from the pine-tree lappet (Dendrolimus pini L.), feces plus Scots pine (Pinus sylvestris L.) needle litter, needle litter, and a control with soil only during the 31 days of the incubation experiment. Treatments were added on day 7, with n = 5. A total of 12 measurements per day were conducted.
Figure 4. Accumulated N2O-N flux (µg h−1) of the incubators with treatments of feces from the pine-tree lappet (Dendrolimus pini L.), feces plus Scots pine (Pinus sylvestris L.) needle litter, needle litter, and a control with soil only during the 31 days of the incubation experiment. Treatments were added on day 7, with n = 5. A total of 12 measurements per day were conducted.
"year": 2018,
"sha1": "c3e5e81df28292c511592b0795b46a0e511cae6c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4907/9/10/612/pdf?version=1538729505",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a002de0ffc76c04caa68dc38a867c500206f839a",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Social connectedness and dementia prevention: Pilot of the APPLE-Tree video-call intervention during the Covid-19 pandemic
Background and Objectives The Covid-19 pandemic reduced access to social activities and routine health care that are central to dementia prevention. We developed a group-based, video-call, cognitive well-being intervention; and investigated its acceptability and feasibility; exploring through participants’ accounts how the intervention was experienced and used in the pandemic context. Research Design and Method We recruited adults aged 60+ years with memory concerns (without dementia). Participants completed baseline assessments and qualitative interviews/focus groups before and after the 10-week intervention. Qualitative interview data and facilitator notes were integrated in a thematic analysis. Results 12/17 participants approached completed baseline assessments, attended 100/120 (83.3%) intervention sessions and met 140/170 (82.4%) of goals set. Most had not used video calling before. In the thematic analysis, our overarching theme was social connectedness. Three sub-themes were as follows: Retaining independence and social connectedness: social connectedness could not be at the expense of independence; Adapting social connectedness in the pandemic: participants strived to compensate for previous social connectedness as the pandemic reduced support networks; Managing social connections within and through the intervention: although there were tensions, for example, between sharing of achievements feeling supportive and competitive, participants engaged with various lifestyle changes; social connections supported group attendance and implementation of lifestyle changes. Discussion and Implications Our intervention was acceptable and feasible to deliver by group video-call. We argue that dementia prevention is both an individual and societal concern. For more vulnerable populations, messages that lifestyle change can help memory should be communicated alongside supportive, relational approaches to enabling lifestyle changes.
Introduction
Dementia and its prevention constitute one of the greatest health and social challenges of our time (Prince et al., 2013). The global Covid-19 pandemic has exacerbated most modifiable dementia risk factors, including cardio-metabolic disease, physical inactivity, social isolation, mental illness and alcohol consumption (Livingston et al., 2020). Covid-related social distancing measures reduce opportunities for activities, socialising and exercise (Heid et al., 2020), and non-Covid health and social care availability has also been affected by the pandemic (Giebel & Cooper, 2020).
The pandemic has, at least to some extent, shifted responsibility for lifestyle choices, such as social encounters, from individuals to society. This may influence already controversial debates around how responsibility for dementia prevention is shared across individuals and society. Half of over 65s in the United Kingdom fear dementia more than any other condition (Monitor, 2019), so it is unsurprising that interventions discussing dementia risk are anxiety-provoking. We have previously described how living with memory problems without dementia may be conceptualised as liminal, between dementia and wellness, and that individuals may experience the burden of responsibility for managing dementia risk, without access to the help that may follow a definitive diagnosis. Libert et al. (2019) explore individualistic attitudes around dementia prevention. They suggest that adopting lifestyle change for dementia prevention can be viewed as an emotional, as well as practical, response to fear of dementia: as emotional distancing from dementia, a condition associated with 'ageing without agency'.
Resilience is defined as the process of 'bouncing back' from difficult experiences (MacLeod et al., 2016). In this study, we seek to support older people experiencing memory concerns to adopt lifestyle changes that reduce dementia risks; put another way, we seek to enable a resilient response to the often anxiety-provoking experience of developing memory concerns. The older population have exhibited high resilience levels in studies that interviewed relatively healthy older populations, including cohorts recruited early in the pandemic, about their reactions to stressful events (Knepple Carney et al., 2020). Yet resilience is an interaction between individuals and the social environment and should not be construed as an individual achievement (Kok et al., 2018). Previous work critiques the positioning of all older people as consumers of lifestyle choices enabling the 'third age', defined by Laslett as 'a period of agentic self-fulfilment' (Gilleard & Higgs, 1998). Not all older people are equally able to exhibit resilience, leading to new social divisions. An emphasis on agency has the effect of making individuals responsible for their own health whether or not this is possible; dementia prevention must also be viewed as a societal concern (Higgs & Gilleard, 2015).
In reality, while there is evidence that risk factor modification reduces dementia risk (e.g. Ngandu et al., 2015), dementia prevention efforts, whether targeted at individuals or society, are in their infancy. Certainly, no currently available interventions, with proven efficacy, are scalable to whole populations (Brug, 2008). Rapid expansion in eHealth interventions due to social distancing will influence future dementia prevention, and eHealth dementia prevention interventions targeted at the general, older population are under evaluation (Heffernan et al., 2018).
We coproduced the APPLE-Tree (Active Prevention in People at risk of dementia through Lifestyle, bEhaviour change and Technology to build REsiliEnce) intervention, specifically for people with memory concerns without dementia, who are at increased dementia risk (Mitchell & Shiri-Feshki, 2008). In response to the pandemic, we adapted our face-to-face group programme, which is based on current evidence (Whitty et al., 2020), to remote delivery. While remote interventions can have excellent reach and cost-effectiveness, they may be challenging for people with memory concerns to access and can compound socio-economic inequalities (Jaffe et al., 2020). They could also engender shifts towards individualistic approaches to dementia prevention.
To our knowledge, this is the first study to explore how older people with memory concerns experienced and used a video-call, group-based cognitive well-being intervention, which also included individual phone calls to participants to support goal-setting. Our research objective was to investigate how acceptable and feasible the intervention was to deliver in practice, in the context of the pandemic. We were interested in exploring through participants' accounts how the intervention was experienced and used in the pandemic context. Our research questions were thus: 1. How acceptable and feasible was the intervention to deliver in practice? 2. How was the intervention experienced and used in the pandemic context?
Design
We conducted a pre-/post-test single group, pilot study of a remote (group-based video-call) cognitive well-being intervention, APPLE-Tree; with a multiple-method exploratory design.
Ethical approval and trial registration
London-Camden and Kings Cross National Research Ethics Committee approved the study (20/LO/0034), and we registered the protocol (ISRCTN17325135).
Intervention development
We coproduced APPLE-Tree with older people with memory concerns, their family members, health practitioners and researchers, informed by the behaviour change framework (Michie et al., 2011). Six coproduction workshops involved academic professionals, healthcare practitioners, third sector workers and experts by experience in the intervention target domains: nutrition, physical exercise, physical health, social engagement, cognitive stimulation, sleep and mental well-being. We used the groups' expertise, informed by current evidence and existing interventions (Hassan et al., 2018;Livingston et al., 2019) to produce participant workbooks and facilitator manuals to guide the planned, structured sessions.
We originally designed 10 face-to-face groups of 1.5-2 h for 10-12 participants, led by two facilitators, with a refreshment break during which facilitators would support participants to set goals. In April 2020, our coproduction group held remote workshops, to consider how the intervention might be adapted to remote delivery and to account for pandemic-related social changes. We developed a remote version that was similar in content and intended mechanisms of action to the planned face-to-face format, for delivery on Zoom™. We added facilitator prompts acknowledging that lifestyle change may be more challenging and need adapting in the pandemic context.
Intervention structure
Before the first session, participants received a non-perishable food delivery (e.g. olive oil and frozen vegetables) costing approximately £18, to support home cooking; a step-counting watch; the session workbook and a structured booklet for recording goals and progress.
Each week, participants were invited to:

· A one-hour group video-call (run as two smaller groups a couple of hours apart, each with ≤6 participants, 2 facilitators and 1 helper): discussing ways to promote cognitive well-being (related to intervention targets; Figure 1), including short video cookery demonstrations, which participants were encouraged to try and bring to 'tea break'. Sessions were fully manualised. Participants were encouraged to share photos and short videos of lifestyle changes and activities tried.

· A half-hour 'tea break' with all participants together on one video-call (i.e. ≤12 participants). These sessions were unstructured; facilitators encouraged discussion of how participants were implementing the well-being-promoting lifestyle changes. Whereas the structured groups were kept small to enable focussed discussions, the tea break was a larger group intended as a less formal space.

· A phone call (up to 30 min) with one facilitator. Participants were encouraged to set new and revise existing goals, recording progress in their goal-setting booklet. Possible goal areas were: nutrition (participants set bronze, silver and gold goals, to increase their Mediterranean Diet Score (MDS) by 1, 2 and then 3 points from baseline; see the sketch after this list); physical activity (to increase activity, which could be measured by recording highest daily step count, using the provided step-counting watches); engaging with life (planning activities to move nearer to the life they want to live); connecting with others; and health (e.g. planning blood pressure or hearing checks, staying hydrated, reducing alcohol intake and smoking and increasing the use of mindfulness and sleep hygiene).
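As an illustration of the nutrition goal arithmetic described above, the following minimal sketch (not the study's actual tooling; the function name and the capping of targets at the maximum MDS of 16 are assumptions made here for illustration) derives a participant's bronze, silver and gold targets from their baseline score.

# Illustrative sketch: bronze/silver/gold nutrition targets from a baseline
# Mediterranean Diet Score (MDS, 0-16), capped at the maximum possible score.
def nutrition_goal_targets(baseline_mds: int, max_score: int = 16) -> dict:
    if not 0 <= baseline_mds <= max_score:
        raise ValueError("baseline MDS must lie within the 0-16 score range")
    return {
        "bronze": min(baseline_mds + 1, max_score),
        "silver": min(baseline_mds + 2, max_score),
        "gold": min(baseline_mds + 3, max_score),
    }

# Example: a participant starting at MDS 9 would aim for 10 (bronze), 11 (silver), 12 (gold).
print(nutrition_goal_targets(9))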
Training and supervision
We trained two facilitators with experience of working with people with dementia: a UCL-employed psychology graduate (HM) and a worker from the voluntary organisation from which we recruited participants. They role-played sessions with the research team, which PR/CCo formally assessed for adherence to the manual and skill prior to delivery. They received weekly group supervision with a clinical psychologist (PR) and/or psychiatrist (CCo), troubleshooting barriers to delivery and exploring engagement strategies. PR/CCo was available for support between supervision meetings. We trained facilitators on adaptations to remote delivery, for example, how to introduce video calling to new users and use of the mute facility to ensure smooth running of the groups. In addition to the two facilitators, CCa joined groups as a helper to support participants with their internet connectivity if required and ensure group continuity if there were technical problems.
Sampling and participants
We recruited older adults with mild cognitive impairment (MCI) or subjective cognitive decline (SCD) from one third-sector partner organisation and one London NHS Trust. The partner organisation advertised the sessions in their newsletter and at events, and staff sought the agreement of interested members to be approached by researchers. We also advertised groups on social media. NHS staff approached patients at the NHS Trust. We included adults aged 60+ years who self-determined that they were sufficiently proficient in English to participate in groups, without a known dementia diagnosis and with capacity to consent to participation, as judged by the research team after appropriate training. Having internet access or computer proficiency was not an inclusion criterion. We excluded people with a terminal condition, considered to be in the last 6 months of life.
As part of screening, participants completed:

· The Quick Mild Cognitive Impairment (Quick MCI) screen, which has good psychometric properties for distinguishing normal cognition from MCI/dementia; we excluded people scoring under accepted age- and education-adjusted cut-points that indicated dementia (O'Caoimh et al., 2017). We included participants scoring in the range of subjective cognitive decline (SCD) (>62; total possible score range 0-100) (O'Caoimh et al., 2012) where respondents gave an affirmative response to the question 'Has your memory deteriorated in the last 5 years?' and to either 'Are you concerned about this?' or 'Is your memory persistently bad?' This approach is adapted from published measures of SCD (Jessen et al., 2020).

· The Functional Assessment Questionnaire (Pfeffer et al., 1982), measuring dependency for activities of daily living. We excluded participants scoring 9+ (indicating possible functional impairment; score range 0-30, with 30 indicating greatest dependency) unless impairment related to physical rather than cognitive symptoms.

· The Alcohol Use Disorders Identification Test (AUDIT)-C. We excluded participants scoring 5+, the cut-point indicative of increasing-risk drinking, to exclude people in whom memory concerns were directly related to alcohol consumption (Ng Fat et al., 2020). These rules are summarised in the sketch below.
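The eligibility rules above can be read as a simple decision procedure. The sketch below is illustrative only: the field and function names are hypothetical, and the age- and education-adjusted dementia cut-point is passed in as a parameter because the specific values are not reproduced here.

# Illustrative sketch of the screening logic described above (hypothetical names).
def eligible(quick_mci, dementia_cutoff, memory_deteriorated, concerned_or_persistently_bad,
             faq_score, impairment_is_physical, audit_c):
    if quick_mci < dementia_cutoff:            # below the dementia cut-point: exclude
        return False
    if quick_mci > 62 and not (memory_deteriorated and concerned_or_persistently_bad):
        return False                           # SCD range requires affirmative memory-concern answers
    if faq_score >= 9 and not impairment_is_physical:
        return False                           # possible functional impairment of cognitive origin
    if audit_c >= 5:                           # increasing-risk drinking
        return False
    return True

print(eligible(quick_mci=70, dementia_cutoff=50, memory_deteriorated=True,
               concerned_or_persistently_bad=True, faq_score=3,
               impairment_is_physical=False, audit_c=2))  # -> True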
Participants were invited to be accompanied by a relative/friend (described henceforth as a study partner) in the groups if it facilitated their participation; study partners gave informed consent to participate.
Interviews and measures
After screening and obtaining written or recorded verbal informed consent, HM conducted baseline assessments by phone, video-call or, prior to lockdown, face-to-face. We recorded sociodemographic characteristics (Table 1), physical disabilities that might restrict participation and screening questionnaire scores (above). An interviewer-administered, semi-structured questionnaire asked participants how the pandemic had influenced: who they spoke to each week; what they ate; their activities; how they accessed help and who provided emotional support or practical help; their mental and physical well-being; and who they cared for. Responses were recorded in detail. We noted the devices on which they could access groups. We recorded sociodemographic details of study partners.
Intervention sessions were video-recorded. During goal-setting phone calls (see below), facilitators wrote contemporaneous notes about aids and barriers to achieving goals and recorded participants' scores on the MDS during sessions 1, 6 and 10. This validated questionnaire is scored from 0 to 16, with higher scores denoting greater Mediterranean-style diet adherence (Valls-Pedret et al., 2015).
Post-intervention, MPo, JBu, CCa and MB conducted semi-structured, virtual qualitative focus groups with intervention participants exploring their experiences; and individual interviews with participants unwilling or unable to attend focus groups, facilitators and study partner(s) (Supplementary Appendix 1: Topic Guides, developed by the study team).
Analysis
Quantitative. We described participants' sociodemographic characteristics using summary statistics and reported adherence (intervention sessions attended, whether in a planned group, catch-up group or an individual catch-up session) and MDS scores.
Fidelity of intervention delivery. Two researchers independently applied checklists to one of the two recorded groups for each of the 10 sessions (after removal of any sessions that failed to record), selected using random number generation (random.org) by the trial manager. We calculated the proportion of expected intervention components (Figure 1) delivered. We adopted established thresholds to rate fidelity (Noell et al., 2002): 81-100% constituted high fidelity, 51-80% moderate and <50% low fidelity. We noted where individual participants did not receive intervention components, and the reason (e.g. connectivity issues or a bathroom break). The researchers discussed any discrepancies in ratings, to attain agreement. We reported the mean proportion of intervention components delivered and received by participants, across assessed sessions. For each intervention component, we rated on a 5-point scale (1 = not at all to 5 = very much) whether the facilitator kept the group focused on the manual and kept participant(s) engaged; for each session, we rated whether the facilitators kept to time.

Table 1 (fragment). Sociodemographic and screening characteristics (two participant columns as reported in Table 1).
Lives with others (with relatives or employer): 6 (50); 6 (60)
Lives alone: 6 (50); 4 (40)
Accommodation type - Owner occupied: 5 (41.7); 4 (40)
Accommodation type - Lives with employer: 1 (8.3); 1 (10)
Accommodation type - Council rented: 6 (50); 5 (50)
Quick MCI score (mean, SD): 60.2 (7.4); 60.7 (7.3)
Functional assessment score (mean, SD): 3 (3.4); 3.4 (3.6)
AUDIT score (mean, SD):
Data presented represent number (percent) unless otherwise specified. n = total number of participants with data available; SD = standard deviation. MCI: mild cognitive impairment.
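Returning to the fidelity calculation described above, the sketch below computes the proportion of expected components delivered and assigns the cited bands. The function name and the handling of values falling exactly between the published bands (e.g. at 50% or 80%) are assumptions made here for illustration.

# Minimal sketch of the fidelity banding described above:
# 81-100% high, 51-80% moderate, below that low.
def fidelity_band(components_delivered: int, components_expected: int) -> tuple:
    pct = 100.0 * components_delivered / components_expected
    if pct >= 81:
        band = "high"
    elif pct >= 51:
        band = "moderate"
    else:
        band = "low"
    return round(pct, 1), band

# Example consistent with the overall figure reported in the Fidelity section below
# (86% overall, with 23/165 components fully or partially missed): 142 of 165
# components delivered falls in the "high" band.
print(fidelity_band(142, 165))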
Qualitative. We analysed data collected (1) before the intervention (to provide context), (2) during goal-setting phone calls and (3) in post-intervention focus groups and interviews.
Content analyses. We carried out content analyses in which two authors (CCo, MPa or JBu) independently evaluated: (1) the extent to which responses to pre-intervention semi-structured questionnaires about how lifestyle and routines had been affected by the Covid-19 pandemic predominantly indicated a negative, positive or neutral/equivocal impact; (2) the types of goals set during goal-setting phone calls, and the aids and barriers participants noted to attaining them.
Thematic analysis. We used NVivo 12 software to organise data, taking an inductive, adapted thematic analytic approach (Braun & Clarke, 2006). Co-authors (JBu, MPa, CCo, PR, MPo, MB, JBr and NS) systematically and independently double-coded the three sources of qualitative data, initially analysing each source separately. Researchers read the texts to check accuracy and anonymity and to familiarise themselves with the data, then labelled meaningful fragments of text with initial codes. Discrepancies were discussed by researchers until a consensus was reached.
We met as a group to discuss preliminary codes emerging from the data sources and to begin to organise them into preliminary themes addressing our research objectives, including how acceptable and feasible the intervention was to deliver in practice, in the context of the pandemic. We drew on the 'following a thread' methodology to iteratively integrate findings from the three data sources, exploring how codes from one dataset followed into another, and vice versa, developing one interwoven framework (Moran-Ellis et al., 2006). We did not privilege findings from one data source over another, as they provided different insights into the intervention process that we considered equally valid, although most material analysed and reported stemmed from post-intervention interviews (Figure 2).
Recruitment and retention
Twelve of 17 participants approached were eligible, agreed to participate and completed baseline assessments (Figure 1: flow diagram); four completed baseline assessments in March. One participant withdrew before, and one after, being informed in April of plans to shift to remote delivery; the latter withdrawal related to a preference for face-to-face groups. Eight further baseline assessments were conducted in June. The semi-structured interview about the impact of Covid was added as an amendment to the design in June and completed by the 10 participants who remained in the study. 10/12 participants completing baseline assessments participated in the intervention and attended post-intervention interviews (n = 1) or focus groups (n = 5, n = 2), or declined to participate in either but sent email feedback (n = 2). Table 1 shows sociodemographic characteristics. Three participants scored >62 on the Quick MCI and met criteria for SCD; seven met criteria for MCI. Three participants reported hearing loss, and two reported visual impairment, that may have interfered with participation.
Intervention adherence
Groups occurred over 10 weeks in July-September 2020. Two cohabiting participants took part using the same computer. Only 3/10 participants had used Zoom™ before. HM held 10-minute practice sessions with all but one participant (who did not need this) before the first group, to explain how to enter the room and use the mute/video buttons. Two participants also required telephone support at the beginning of groups to help them log in. Three participants required technological help throughout the sessions, for example, returning to the correct screen format after viewing videos. One participant involved a study partner, a non-resident daughter, who set up the call and joined the groups. Table 2 describes attendances and reasons for non-attendance. 92/120 (76.7%) of all possible main group sessions (i.e. for 12 participants completing baseline assessments) were attended, or 100/120 (83.3%) including individual catch-up sessions. In addition to the planned sessions, we held one additional catch-up group (for four people) and a total of eight individual catch-up sessions. 77/120 (64.2%) of possible refreshment breaks were attended: five participants attended all 10; four attended 5-9 refreshment breaks and one participant joined only the final break. Individual goal phone calls took place at each of the 10 time points for all 10 participants attending the intervention. Participants achieved 140/170 (82.4%) of the lifestyle goals set (further details in Supplementary Appendix 2).
Fidelity
Overall fidelity (86%) was in the range specified a priori as high. Mean fidelity scores across the intervention components we intended to deliver were: 4.5 (range 3-5) for 'keeping the group focussed on the manual/task'; 4.7 (range 3-5) for 'keeping participants engaged' and 4.1 (range 3-5) for 'keeping the session to time'. 23/165 (14%) of components were fully or partially missed by attendees, primarily owing to problems with connectivity (assessed for recordings of sessions 2-10, as session 1 recordings were audio-only, from which continuous presence could not be discerned).
Thematic analysis: Social connectedness
We identified social connectedness as an overarching theme, across the three qualitative data sources: pre-intervention interviews (PRE), goal-setting facilitator notes (FN, also listed in Figure 3) and post-intervention focus groups and interviews (POST). We present these findings, noting the source and relevant quantitative data regarding adherence and participant characteristics, which are listed by participant in Table 2.
We describe our theme of social connectedness with reference to three sub-themes below: (I) Retaining independence and social connectedness (social connectedness could not be at the expense of independence); (II) Adapting social connectedness in the face of the pandemic (participants strived to compensate for the loss of previous social connectedness as the pandemic reduced support networks) and (III) Managing social connections within and through the group intervention (although there were tensions for some participants, they enjoyed the social aspects of the groups, which for most were an introduction to the video-call modality; social connections supported both group attendance and the implementation of lifestyle changes by helping participants to overcome barriers to change, including memory concerns).
Subtheme I: Retaining independence and social connectedness
It was clear from participants' accounts that social connectedness was important to them but could not be at the expense of independence. There was a sense that demonstrating independent and resilient behaviours, including providing support to others and adoption of healthier lifestyles, could be reassuring, and a means of distinguishing the memory concerns experienced from any intimations of dementia. While most participants had objective cognitive impairments, and all experienced memory loss, there was a strong sense of independence and resilience in their accounts. Participants described (PRE) providing support to others, including family and friends, paying clients and the wider community. For one participant, community work was a major focus; this included 'taking a blind person out for guided walks and is involved with local activities at the church' (P9, PRE). This next quote illustrates the sometimes complex interplay between supporting and being supported: a participant described being supported by her friend, while making adjustments to her life to accommodate her friend's worries: [P3, PRE] "is living with her friend who is able to go and get shopping for her. They have also been using online deliveries to get food. Friend was more worried about Covid so participant was unable to go out as much as she would have liked in order to respect her friend's wishes." Wishes to retain independence and avoid burdening family and friends were predominant sentiments around negotiating support. One participant declined help from neighbours because 'she tries to remain independent and do things on her own' (P4, PRE), while another felt her daughter was 'already busy enough to check in on her regularly' (P9, PRE).

Table 2. Description of participants and APPLE-Tree intervention attendance at each of the 10 sessions (and reasons for non-attendance at group sessions) and post-intervention focus group.
There was a sense that activity and social contact reassured participants that independence and resilience could be retained. One benefit of attending the APPLE-Tree groups seemed to be the opportunity to demonstrate independence to oneself and the group. This was seen in the context of photo sharing (facilitators showing slides with pictures of crafts or food the group sent to them); these seemed to represent tangible evidence of continued capability, as described by one participant: 'just projecting those pictures was … kind of positive reassurance'. (P9, POST).
This sense of reassurance was not universal. One participant, who had SCD and attended all groups but only the final tea break, experienced the photo-sharing as 'a bit competitive, you know, pictures of people's beautiful pies and stuff…' (P8, POST).
For the helper who attended groups (CCa), the immediacy with which photos of the achievements discussed could be shared 'potentially add[ed] to both the positive and negative effects' described here.
Various health- or social-related issues were projected as barriers to lifestyle change (Figure 3), which appeared difficult for individuals to circumvent alone. For example, P2 needed the help of his family to renegotiate his care package if he was to achieve his goal of going for a morning walk more regularly: 'normally goes for a walk in the morning but this is difficult because he does not get dressed until the carers come round in the morning' (FN). Despite this, change itself was positioned as an individual choice and responsibility, with participant P9, who had SCD, describing the groups (POST) as: "Being kicked up the backside, in a way, to look at oneself all over again and to re-evaluate what we are doing at our age, you know this time of life when we really have to say to ourselves that, 'OK, you're old, but it doesn't mean to say it's the end of your life.'" Adopting individualistic approaches to dementia prevention may have fulfilled an emotional need to distance oneself from intimations of dementia. This could be inferred from the next quote, which also illustrates the reassurance provided by peer support: "Somehow, there's just a reassurance for us people who live alone that maybe we are not going mad and that maybe other people also have memory losses like us, which does not necessarily mean Alzheimer's."
P9 (POST)
This illustrates the central tenet of this subtheme: that dementia prevention is best supported by a social connectedness that is reassuring and supports continuation with life despite memory loss, rather than one that appears to herald dementia and would be anxiety-provoking and disabling.
Subtheme II: Adapting social connectedness in the face of the pandemic

Participants described how they strived to maintain social connectedness, as the pandemic reduced support networks, with new arrangements compensating for the suspension of face-to-face activities and services. One person commented that face-to-face contact now only happened 'by chance' (P7, PRE); another that he did not 'go out for food as much and has less family gatherings' (P2, PRE) and another 'used to look after her grandchildren but can no longer do this due to lockdown' (P4, PRE). The pandemic also changed social encounters, even very brief encounters in the community. One participant 'has stopped going out for walks as she does not like people looking at her if she wears a mask' (P6, PRE).
For many participants, the online modality could not entirely compensate for the loss of face-to-face activities, although a minority discovered new connections in the disruption of previous routines. For example, compensatory activities discussed spanned face-to-face and online modalities, including a group exercise class held by a neighbour on the street and attendance at Vatican Mass online in place of local church attendance. Participant P1, who was recently retired, described how 'using more telephone and Zoom meeting … helped widen her social network' (PRE).
We note that P1 was the only participant who did not require facilitator support to access the video-call groups; for most others, the APPLE-Tree groups were their most sustained experience of using video-call and thus of social connections online. P3 (POST), who lived alone, referenced the particular value of the groups as an opportunity for social connection during the pandemic: "especially during this Covid time when you couldn't go out. So, we were able to communicate with each other and looking at each other, and I think that was very good."
Subtheme III: Managing social connections within and through the group intervention
Following on from the previous subtheme, the opportunity for social contact that the groups provided appeared to be an important reason for the good attendance rates and also for their success in enabling lifestyle change. As P3 (POST), who had MCI and attended all the sessions (Table 2), commented: 'because we have learned all these things through discussions and connecting to each other, I think we will not forget it'.
The group planned to continue meeting after the end of the sessions, as noted by P4 (POST), who had MCI and attended all groups and most tea breaks: "[facilitator] did encourage us to form a WhatsApp™ group and then we can still connect together, and we maybe can help each other." (P4, POST)

Video calling was a qualitatively different modality for social connection, experienced as more distant, and less textured and adaptable, than face-to-face contact, although also welcome and novel. P9 (POST) commented 'we still get to know each other's personalities through [video-call] and we don't have to put on pyjamas or whatever underneath'.
Facilitators sometimes struggled to address the needs of people who needed more support, within the video-call groups that did not allow for conversations separate from the group.
'Everybody is in front of you and you are saying that it is sort of a bit upsetting maybe, I did not want to hurt their feelings. Whereas if it is on the side of a table I can say "we can talk about that a bit later" quietly so they do not feel like everybody has heard.' (Facilitator 2, POST)
This was illustrated from the participant's perspective by P8, who felt a prevailing positive atmosphere left no space to express other emotions: 'It was quite nice to listen to other people, but it was all very positive. Nobody ever said "I feel like a lump of shit today" or anything. Nothing like that in it at all. It was all a bit if you weren't positive you felt you couldn't say anything.' (P8, POST) The differences from face-to-face contact were exemplified by a challenging dynamic created by two participants sharing a device, as they were able to talk to each other while others could not. P8 described how they were 'yacking away in their room … You couldn't hear what anyone else was saying'.
Goal phone calls were able to compensate for this aspect of the main groups, providing, Facilitator 2 noted, an opportunity for personalisation of the intervention: "[Goal phone calls] showed we really cared, and the people noticed that".
Social contacts supported the intervention. For example, memory concerns were barriers to lifestyle change that participants often overcame with support from their social networks (FN: Table 3). Forgetting health appointments and social arrangements was a concern for participants, and involving others was one strategy adopted to address this, often successfully. For example, one facilitator recorded that a participant 'finds it difficult to remember to [do relaxation exercises] every night and [his] daughter will remind him when she calls him before bed' (P2, FN). This strategy required support from the participant's network but also promoted independence.
For one participant, as described in the FN below, compensatory memory strategies suggested in the group that did not involve a relational approach (relying instead on technology) felt unacceptable and too compromising of independence to adopt: 'Didn't have time to take blood pressure; tried setting up a reminder on phone but they don't like to be tied down by a specific day or time to do a BP check' (P8, FN). Both facilitators interviewed reflected on their experiences of co-facilitating the group remotely. The facilitator who was employed by the third sector organisation felt less connected to the organisation of the groups than the university facilitator, commenting: '[Facilitator 1] was really supportive…. He has a lot to do with the project and I had nothing to do with the project.' The third sector facilitator's relative lack of familiarity with video calling appeared to contribute to this sense of being a relative outsider, although this also reflected the realities of her employment.
Discussion
In this pilot trial, adherence to the intervention and fidelity of delivery were high, indicating that it was acceptable and feasible to deliver in practice, even during the Covid-19 pandemic, which did not prevent participants from meeting most of the goals they set. For most participants, the groups were a first experience of video calling, so participation directly supported social connectedness during social distancing. Qualitative data indicated that most participants valued the social aspects of the intervention and felt supported by it to make lifestyle changes.
In the thematic analysis, our overarching theme was social connectedness. Three sub-themes gave different perspectives on our central argument that dementia prevention is a social phenomenon, as well as an individual concern. We described how participants negotiated social connectedness while retaining valued independence. Demonstrating independent and resilient behaviours, including providing support to others and adopting healthier lifestyles, was reassuring, a means of distinguishing the memory concerns experienced from intimations of dementia. We explored how participants strived to maintain social connectedness as the pandemic reduced support networks. We described how the opportunities for social connection that the groups provided contributed to good attendance; participants and facilitators described the video-call modality as enabling contact, although sometimes as restricting, with one-to-one communication needing to wait for individual goal phone calls. Memory problems and other barriers to the changes the intervention targeted were often successfully overcome within relationships.
Living with memory problems can be experienced as a liminal state between wellness and dementia, which medicalises memory concerns yet situates responsibilities for their management with patients and families. Our findings that lifestyle change was attainable but often needed support from others accord with discourses that criticise such individualistic approaches to risk reduction and advocate a social and community psychology of resilience (Cowen, 1994). This reflects concerns regarding the valorisation of agency in contemporary health and social policy (Higgs, 2015).
There was initially some discomfort in our coproduction group that delivering well-being groups to older people in a pandemic which reduces life expectancy (Marois et al., 2020) might seem irrelevant or insensitive, or might exacerbate immediate and existential worries. In practice, the intervention was acceptable and feasible to deliver, but these concerns are important to reflect on. Perhaps they represent an attitudinal shift within the team during this period, from individual to societal responsibility for prevention, mirroring reduced individual freedoms around lifestyle and healthcare access during this pandemic. Community-based interventions which promote social support may help create a space for secondary dementia prevention that neither medicalises nor negates the central role of relationships in enabling change. We designed the APPLE-Tree intervention groups for co-facilitation by trained and supervised, non-clinical psychology graduates and community workers. This delivery mode worked well and the intervention was experienced as helpful. Our pragmatic approach mirrors calls in a recent Canadian report for an integrated approach to later-life dementia prevention, which addresses multiple, proximal risk factors, is cost-effective and is priced so as to be widely available (Rockwood et al., 2020).
Limitations
Participants were interviewed immediately post-intervention, so we do not know whether changes were sustained, or whether memory was affected over time. Participants may have been more socially connected than those declining participation. Older people are less likely than younger people to use the internet regularly (ONS, 2019), so video-call interventions potentially exclude many older people and could compound existing inequalities. Although most participants were new to video calling, they all had access to devices, and all but one used a device regularly for other purposes. Our current APPLE-Tree trial will evaluate whether our intervention can improve cognition relative to a control group over 2 years. We will, in addition to video-calls, offer face-to-face groups when possible and will loan devices to those without online access. As remote interventions are preferred by some, this blended approach may become standard for future psychological interventions.
Conclusions
Our intervention was acceptable and feasible to deliver by video-call. Increasing awareness that lifestyle change can help memory could be beneficial at a population level. For more vulnerable populations, such messages need to be communicated alongside supportive, relational approaches to enabling lifestyle changes. The APPLE-Tree intervention manualises such an approach. We commenced an effectiveness trial of the intervention in October 2020 (due to complete in 2024). Currently it is delivered remotely, as in the pilot, although when social distancing guidelines allow, we plan to introduce blended remote/face-to-face delivery. If proven effective, this flexible delivery modality is likely to be highly suitable for delivery to populations at scale.
Claudia Cooper is a professor of older people's psychiatry at UCL Division of Psychiatry and an honorary consultant old age psychiatrist in Camden and Islington NHS Foundation Trust memory services. She is Chief Investigator of the APPLE-Tree programme.
Hassan Mansour was a research assistant at the Division of Psychiatry, UCL, during the submitted study, for which he collected data and facilitated the groups. Since October 2020, he has been a clinical psychology doctorate student at UCL.
Christine Carter is a PhD student on the APPLE-Tree programme, undertaking an ethnographic study of active ageing and how theories may be reconceptualised in dementia prevention. Her background is in mental health nursing.
Penny Rapaport is a clinical psychologist in the Division of Psychiatry at UCL. Her research interests include collaborating on the development, testing and implementation of nonpharmacological interventions for people living with dementia, their families and paid carers, especially widening access through innovation. She clinically supervises the APPLE-Tree intervention.
Sarah Morgan-Trimmer is a social scientist. She conducts process evaluations, realist evaluations, qualitative research and mixed methods studies of complex interventions. She supports qualitative methods, mixed methods and process evaluation at the Institute of Health Research at University of Exeter.
Natalie L Marchant conducted postdoctoral research at the University of California Berkeley, before coming to UCL, where she leads several studies exploring subjective cognitive decline and the links between mental and cognitive well-being.
Michaela Poppe studied Patholinguistics at the University of Potsdam in Germany and completed an MSc in Human Communication at UCL. She went on to do a PhD in Psychology at King's College London investigating language function in mild cognitive impairment and Alzheimer's disease. She currently manages the APPLE-Tree programme.
Paul Higgs is a professor of Sociology of Ageing at UCL. His research interests focus on the contexts of ageing, social class and later life, and personhood, identity and care in older age. He is a co-investigator on the APPLE-Tree programme.
Janine Brierley currently works as a research assistant on the APPLE-Tree study (Active Prevention in People at risk of dementia through Lifestyle, bEhaviour change and Technology to build REsiliEnce) at UCL's Division of Psychiatry. She has a BSc in Biological Sciences from UCL and an MSc (conversion) in Psychology from UEL. She has previously been employed in a specialised learning disabilities scheme and an NHS IAPT service.
Noa Solomon currently works as a research assistant on the APPLE-Tree study (Active Prevention in People at risk of dementia through Lifestyle, bEhaviour change and Technology to build RE-siliEnce) at UCL's Division of Psychiatry. Formerly, she has worked as a research assistant at King's Centre for Military Health Research (KCMHR), at the Institute of Psychiatry, Psychology and Neuroscience (IoPPN) on projects relating to the mental health and well-being of emergency responders, military personnel and their families. She has an undergraduate degree in Psychology (BSc) and a master's degree in Neuroscience (MSc) which she obtained from the University of Sussex.
Jessica Budgett has a BSc in Psychology from Durham University and an MSc in Cognitive and Clinical Neuroscience from Goldsmiths College. She has previously worked in the NHS as an assistant psychologist in a Community Stroke and Neuro Rehabilitation team and in memory clinics. Jessica has worked on multiple dementia care research studies at UCL.
Megan Bird has a BSc in Psychology from Cardiff University. She has previously worked on research projects at the University of Oxford. Prior to joining UCL, Megan was working for the NHS as an assistant psychologist for a Community Stroke Team.
Kate Walters is Director of the Centre for Ageing and Population Studies, UCL. Her main research interests are in ageing, mental health, public health, primary care epidemiology and trials of complex interventions in primary care and community settings. This includes both epidemiological studies and complex interventions in the fields of health and well-being for older people, disease risk/ prevention and mental health. Alongside her academic work, she continues in clinical practice as a general practitioner in North London.
Julie Barber is an associate professor in Medical Statistics at the Department of Statistical Science, UCL and a member of the Biostatistics Group within the UCLH/UCL Joint Research Office. Julie completed a PhD looking at statistical issues in economic evaluations of randomised trials at Imperial College (2001). She has previously worked at the London School of Hygiene and Tropical Medicine, Imperial College and the MRC Clinical Trials Unit.
Jennifer Wenborn is a senior clinical research associate at UCL and an occupational therapist. She is based in the Dementia Research Centre in North East London NHS Foundation Trust where she maintains links with the old age mental health services and practitioners. She has worked on a number of studies to develop and evaluate psychosocial interventions for people with dementia and their carers.
Iain A Lang is a senior lecturer in Public Health and Associate Dean (International and Development) at the University of Exeter Medical School. He is the Executive Lead for Implementation Science in the NIHR Collaboration for Leadership in Applied Health Research and Care -South West Peninsula (PenCLAHRC). He is a speciality tutor in Public Health for the University of Exeter and a UK Faculty of Public Health Part A Examiner.
Jonathan Huntley obtained his PhD from Kings College London in 2014. His research interests are around cognitive training and dementia. He is currently a Wellcome fellow investigating awareness in people with more severe dementia.
Karen Ritchie is a research fellow with the Health Services Evaluation Unit, University of Oxford (Sir Richard Doll) and the Social Psychiatry Research Unit, MRC Australia (Professor Scott Henderson). She is a former member of the Advisory Council of the Director General of INSERM (CORES 2000-2009), the Scientific Board of the University of Montpellier and the Board of Directors of the International Psychogeriatric Association.
Helen C Kales is a fellowship-trained, board-certified geriatric psychiatrist. She has special clinical interest in the behavioural and psychological symptoms of dementia. Her research programme is directly informed by her clinical work and experiences with patients, families, providers and systems to diminish the barriers to effective and high quality care for older patients with dementia or with mental health issues.

Elisa Aguirre is a health psychologist working as a clinical dementia researcher at North East London Foundation Trust. She completed her PhD in 2012 at UCL, which involved developing and evaluating the maintenance CST programme. She is the first author of the 'International CST guidelines'.
Anna Betz is a social worker and senior practitioner within Camden NHS memory service. She is also a qualified medical herbalist.
Marina Palomo is a clinical psychologist working within Camden and Islington NHS Foundation Trust. She also worked within UCL and led the coproduction of the APPLE-Tree intervention. | 2021-04-30T06:16:43.014Z | 2021-04-29T00:00:00.000 | {
"year": 2021,
"sha1": "f037ec5f129b790306ba8008b71914d06451aeb2",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14713012211014382",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "d51dbb22adace973521800fe1567831ca6283292",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237108104 | pes2o/s2orc | v3-fos-license | Anaesthesiologist-intensivist physicians at the core of the management of critically ill COVID-19 patients in Africa: persistent challenges, some resolved dilemmas and future perspective
Unlike developed countries, which have dedicated intensivists (also called critical care physicians or intensive care physicians) to manage critically ill patients such as those with severe forms of COVID-19, the practice of critical care medicine in Africa is tied to anaesthesiology. Hence, anaesthesiologist-intensivist physicians are the medical specialists taking care of critically ill COVID-19 patients in Africa. Moreover, unlike intensive care units (ICUs) in high-income countries, those in most African countries face a lack of emergency drugs and resuscitation equipment, limited health infrastructure and understaffed and underfunded health care systems. The COVID-19 pandemic is an unprecedented challenge for intensivists in high-income countries and anaesthesiologist-intensivist physicians in Africa alike. Patients with severe forms of the disease, such as those with grave COVID-19 complications like massive pulmonary embolism, severe cardiac arrhythmias, cardiogenic shock, septic shock, acute kidney injury or acute respiratory distress syndrome, require ICU admission for better management. Both intensivists and anaesthesiologist-intensivist physicians have the peculiarity of securing the airways of critically ill COVID-19 patients and, when needed, providing respiratory support with mechanical ventilation after laryngoscopy and endotracheal intubation. In so doing, they can easily be infected by respiratory droplets or aerosols expired by COVID-19 patients. Hence, in Africa, anaesthesiologist-intensivist physicians have a higher risk of contracting COVID-19 than other health professionals. It is worth mentioning that the COVID-19 pandemic struck African anaesthesiologist-intensivist physicians and ICUs when they were neither adequately trained nor equipped with the ICU capacity required to meet the demands of thousands of severely ill African COVID-19 patients. This further weakened the already strained health systems in Africa. It required a lot of creativity, engineering skill and courage for these ill-prepared African anaesthesiologist-intensivist physicians to provide care to these critically ill patients and improve their outcomes as the pandemic progressed. However, despite the numerous efforts made by African anaesthesiologist-intensivist physicians to care for critically ill COVID-19 patients, the pandemic is spreading at a rapid rate across Africa. There is an urgent need for African health authorities to anticipate how to scale up ICU capacity and to address the limited ICU workforce, infrastructure and equipment needed to manage severe forms of COVID-19 in the future. It cannot be overemphasized that these severe forms of COVID-19 are potentially fatal and are a major contributor to the death toll of the pandemic.
Essay
A pandemic of a highly contagious and potentially fatal respiratory infection called coronavirus disease (COVID-19) emerged in December 2019 in China. COVID-19 is caused by a viral pathogen called "severe acute respiratory syndrome-coronavirus 2" (SARS-CoV-2) [1]. Within three months of its outbreak, this life-threatening communicable disease had spread around the world more rapidly than any previous infection, contaminating over three million people, of whom 274,985 died worldwide [2]. Fatalities are often due to severe forms of the disease, which are potentially lethal, especially if untreated or inadequately managed [3]. These severe forms of the disease require intensive care management, which is quite difficult in Africa, where the workforce of anaesthesiologist-intensivist physicians (called critical care physicians or intensive care physicians in developed countries) is low and intensive care units have limited equipment to resuscitate patients with severe forms of COVID-19 [4].
Persistent challenges confronted by anaesthesiologist-intensivist physicians managing severe forms of COVID-19 in Africa: the African continent is reputed to be one of the poorest regions of the globe, with a double health burden stemming from the constant threat of infectious diseases and a rising burden of non-communicable diseases due to the epidemiological transition [1]. To the high burden of communicable diseases such as HIV/AIDS confronting Africa, COVID-19 was abruptly added in March 2020 as a new, highly infectious respiratory tract infection and a global health crisis declared a pandemic. COVID-19 is currently ranked as the primary priority of the global burden of disease (GBD) [2]. COVID-19 can manifest in mild and severe forms. The former includes benign systemic symptoms (rhinorrhoea, fever, gastrointestinal upset, fatigue, headaches) which can be managed and followed up at home or may necessitate a short hospital stay. Severe COVID-19, on the other hand, presents with emergencies such as massive pulmonary embolism, acute respiratory distress syndrome (ARDS), myocardial infarction, heart failure, septic shock and acute kidney injury, which warrant immediate ICU admission for adequate intensive care to save the lives of these patients. The global prevalence of severe COVID-19 is estimated at 15-20% [3,4], too worrisome to be labelled trivial. Owing to its unprecedented occurrence, the COVID-19 pandemic struck all anaesthesiologist-intensivist physicians and ICUs in Africa like a tornado. Medical specialists in Africa were not prepared to manage patients with a disease they had never heard of nor been confronted with. Nonetheless, they had to provide optimal case management for severe forms of COVID-19. Compared with Western countries, the emergence of COVID-19 was more devastating because of prevailing poverty, a lack of emergency drugs and resuscitation equipment, scarce health infrastructure and understaffed and underfunded healthcare systems [3,4], which challenged the various national, local and United Nations system responses to the pandemic [5]. These factors further strained the already weak healthcare systems in Africa. Overall, 769 confirmed African cases of critically ill COVID-19 and a case fatality rate of 0.5% were recorded by March 20, 2020, compared with 770,956 severe African COVID-19 cases and a case fatality rate of 2.3% by July 27, 2020 [2,6].
The average ICU bed capacity in most African countries is fewer than 24 ICU beds (if not none) [6]. For instance, Kenya has a total of 400 ICU beds, Nigeria about 120 ICU beds and Cameroon 601 ICU beds [6]. WHO reports an estimated five ICU beds per one million people in Africa, compared with about 4,000 beds per one million inhabitants in Europe [4]. More reliable national estimates range from none to 17 ICU beds per 100,000 population in Egypt, 6 beds per 100,000 persons in Seychelles and 9 beds per 100,000 South Africans [7]. Uganda in particular has repeatedly been noted to have a very limited ICU bed capacity, and Ugandans have limited access to intensive care services [8]. It has been extrapolated that more than 50% of the global population will be infected by COVID-19 over the next two years and that the burden of this infection will disproportionately affect Africans and strain Africa's limited ICU workforce and capacity to provide proper healthcare. As the pandemic progresses, although its pathophysiology, diagnosis and provisionally validated treatment protocols (while awaiting the discovery of a definitive treatment and vaccine) are becoming better understood by African anaesthesiologist-intensivist physicians, patients who become critically ill with COVID-19 still require weeks of treatment in African ICUs, and many must be endotracheally intubated and mechanically ventilated to have a chance of survival [9]. In the same vein, Africa faces significant financial constraints in setting up ICUs for the provision of critical care to those suffering from the most severe cases of COVID-19 [1].
Indeed, the burden of critical illness in most African countries is overwhelming, and the numbers of ICU beds, ventilators, electrocardiographs, ultrasound machines and defibrillators are unmatched to demand, estimated at one per million critically ill COVID-19 patients. Furthermore, the international market is saturated with demands for ICU equipment from high-income countries, and international transportation of this equipment is also a challenge owing to the reduced cargo space that followed airlines stopping their services to Africa [1]. Apart from the shortage of ICU beds in Africa, the burden of severe COVID-19 is further compounded by a drastic shortage of African anaesthesiologist-intensivist physicians, anaesthetist ICU nurses and respiratory physiotherapists for the optimal management of these critically ill patients. Anaesthesiology and critical care medicine is a unique, challenging, dynamic medical specialty in Africa requiring skilled personnel to adequately manage patients in the ICU. Unfortunately, such skilled labour remains a scarce and invaluable asset in Africa [4]. Another barrier to the optimal care of severe COVID-19 cases in Africa is the shortage of personal protective equipment (PPE) and of mechanical ventilators to provide respiratory support to patients with ARDS [6]. Several COVID-19 patients with ARDS in Africa have died for lack of, or because of insufficient numbers of, ventilators. According to WHO, fewer than 2,000 ventilators are available in 41 African countries. For example, in Mali there are only 56 ventilators for 19 million inhabitants [9]. A similar trend was observed in Cameroon, where a national survey on 29 March 2020 reported a total of 73 mechanical ventilators for 23 million inhabitants and a case fatality rate of 60% among severe COVID-19 patients [10].
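To make the scale of these shortages concrete, the cited figures can be normalised to a per-million-inhabitants rate; the short sketch below is illustrative only and uses exactly the numbers quoted above for Mali and Cameroon.

# Simple normalisation of the ventilator figures cited above to ventilators per million inhabitants.
cited = {
    "Mali":     {"ventilators": 56, "population_millions": 19},
    "Cameroon": {"ventilators": 73, "population_millions": 23},
}
for country, d in cited.items():
    per_million = d["ventilators"] / d["population_millions"]
    print(f"{country}: ~{per_million:.1f} ventilators per million inhabitants")
# Output: roughly 2.9 per million for Mali and 3.2 per million for Cameroon.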
Solutions adopted by anaesthesiologist-intensivist physicians managing severe forms of COVID-19 in Africa: the pandemic had major adverse repercussions on African ICUs. It required a lot of creativity, engineering skill and bravery from the unprepared African anaesthesiologist-intensivist physicians to provide care to these critically ill patients and improve patient outcomes as the pandemic progressed. Within a few months of the outbreak, the baseline ICU capacity had already been reached worldwide, and particularly in Africa [9]. For instance, among the creative initiatives undertaken in Africa was the conversion of several operating rooms, old hospitals and public spaces such as schools, Olympic stadiums and supermarkets into ICUs, with subsequent assignment of intensivists to head and manage these annexed ICUs [9]. These measures helped the African continent to partially match the high ICU demands of severe COVID-19 patients. However, the major challenge of these annexed ICUs has been the lack or paucity of mechanical ventilators to provide respiratory support to patients with severe COVID-19 when needed. Furthermore, critically ill COVID-19 patients require a long ICU stay, and once elective surgeries, postponed during the pandemic to give preference to emergency surgeries, progressively resume, ICU capacity in Africa will be challenged further [11]. As several African countries scramble to create ICU capacity through makeshift beds in theatres, old hospitals and public places, important questions arise: (a) will ICU services maintain an effective health worker-to-patient ratio? (b) Will a higher ICU bed capacity spread resources thinner than before? (c) Will more ICU beds not predispose intensivists and ICU nurses to burnout and increase their risk of being infected by the viral pathogen causing COVID-19? [9].
Future perspectives in the management of critically ill COVID-19 patients by African anaesthesiologist-intensivist physicians, from the Cameroonian case study: though the number of severe cases is increasing in some African countries such as Cameroon, now considered the COVID-19 epicentre of Western and Central Africa [10], severe forms of COVID-19 are now better managed, with fewer deaths, thanks to the joint efforts of the United Nations Children's Fund (UNICEF), the United Nations Population Fund (UNFPA), the World Health Organization (WHO) and the Cameroon Ministry of Public Health [12,13]. UNICEF helped limit the spread of COVID-19 to African anaesthesiologist-intensivist physicians and other medical specialists via a donation of PPE (5,680 coveralls, 6,950 non-sterile gowns, 2,450 N95 masks and 7,850 surgical masks) to the Cameroonian Ministry of Public Health [12]. In response to the increasing number of COVID-19-infected individuals, the high demand for PPE and the low supply of PPE, the UNFPA donated emergency assistance to the Cameroon National Health Emergency Operations Center [13]. The grant consisted of several packets of medical shoe covers, disposable gowns, examination and surgical gloves, masks, hand sanitizer gel and many other items [13]. Likewise, at the outbreak of COVID-19, the Cameroon Ministry of Public Health, in collaboration with epidemiologists and anaesthesiologist-intensivist physicians, elaborated a nationwide protocol for the treatment of mild and severe forms of COVID-19 [12]. Meanwhile, the Ministry of Public Health in Cameroon is constantly formulating measures for disease prevention, thereby reducing the number of new severe cases [12]. Cameroon has also received support to fight COVID-19 from several countries, including Morocco, which offered, free of charge, PPE worth hundreds of millions of XAF in a bid to foster sustainable, mutually beneficial bilateral relations between the Kingdom of Morocco and the Republic of Cameroon [14]. Lastly, in an unpublished report, WHO authorities together with the Ministry of Public Health and the Cameroon Society of Anaesthesiologists, Intensivists and Emergency Physicians organized two training sessions in June 2020 to build the capacity of Cameroonian anaesthesiologist-intensivist physicians in effective case management of COVID-19, especially of patients with severe forms of the disease, so as to reduce their case fatality rate. Unfortunately, despite some of the above-cited public interventions for primary prevention of COVID-19, and the therapeutic and health-logistics resolutions taken by African health authorities for secondary and tertiary prevention of COVID-19 in Africa, statistics show that the numbers of new infections, new severe forms and deaths are increasing daily [2].
To this effect, we recommend the following measures to mitigate the impact (numbers of new severe cases and deaths) of COVID-19 in Africa. Firstly, reinforce Africans' adherence to WHO guidelines on preventive measures such as regular hand washing, wearing of face masks and physical distancing. Secondly, Africans should avoid unnecessary travel and stay away from overcrowded areas, especially if they have risk factors for severe COVID-19 such as being aged over 60 years or having comorbidities such as obesity, cardiovascular disease, diabetes, chronic respiratory disease or malignancies. Thirdly, there is a continuing need to train African anaesthesiologist-intensivist physicians in leadership and in the adequate management of severe forms of COVID-19 during this period of panic. Furthermore, although local or regional anaesthesia is the anaesthetic technique of choice for COVID-19 patients, depending on the indication for surgery, intravenous anaesthesia is still indicated in the management of COVID-19 patients undergoing emergency surgical procedures [15]. Here, opioid-free anaesthesia (OFA) has the merit of being a better anaesthetic technique than conventional general anaesthesia, mainly owing to haemodynamic stability and the absence of postoperative respiratory depression [15]. Hence, the indication of OFA in COVID-19 patients undergoing emergency surgery should be considered by anaesthesiologists for a better postoperative outcome in severe cases of COVID-19. Lastly, African anaesthesiologist-intensivist physicians should avoid, as much as possible, relieving severe acute or chronic pain with morphine or its synthetic derivatives in critically ill COVID-19 patients with respiratory compromise, as these may worsen or precipitate respiratory distress in these patients.
Conclusion
Although the brilliant initiatives of African anaesthesiologist-intensivist physicians confronted with this unprecedented pandemic are commendable, there is an urgent need for all health organizations (WHO, UN, CDC, African ministries of health) and all critical care physicians' associations across the globe to brainstorm on how to build the capacity of critical care medicine in Africa with more skilled human resources (African anaesthesiologist-intensivist physicians and anaesthetist ICU nurses), PPE, ICU beds, emergency drugs and ventilators, geared towards saving the lives of critically ill COVID-19 patients. | 2020-12-17T09:07:22.719Z | 2020-12-11T00:00:00.000 | {
"year": 2020,
"sha1": "bf721886e560a48f6d4d47c0cf2b3e0cbdb67433",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.11604/pamj.supp.2020.37.1.25234",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0f6347445eeed641f15916750d282d781768fd7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
213555707 | pes2o/s2orc | v3-fos-license | Phonons in short-period (GaN) m (AlN) n superlattices: ab initio calculations and group-theoretical analysis of modes and their genesis
The results of experimental and theoretical studies of phonon modes in short-period (GaN)m(AlN)n superlattices (SLs) grown by MOVPE and PA MBE on the (0001) Al2O3 substrate are reported. Using a comprehensive group-theoretical analysis, the genesis of the SL vibrational modes from the modes of bulk AlN and GaN crystals has been established, which is important for interpreting the SL Raman spectrum. In the framework of Density Functional Theory, the lattice dynamics and the structural properties of (GaN)m(AlN)n SLs (m+n ≤ 12) were studied. An analysis of the eigenvectors of the phonon modes made it possible to reveal their microscopic nature. We established that the E(TO) modes are localized in the layers constituting the SL. It is shown that the localized nature of this mode is kept even in the SLs with the thinnest layers (m+n = 4). In turn, the A1(TO) mode demonstrates a delocalized nature and reflects the averaged characteristics of the SL as a whole. A combined analysis of the ab initio calculations and Raman data was performed. Thus, the above studies open new possibilities for analyzing the structural properties of GaN/AlN SLs by Raman and IR spectroscopy.
Introduction
The great interest of researchers in studying the physical properties of GaN/AlN superlattices (SLs) is due to their enormous potential for creating new-generation devices with a wide range of applications. For example, short-period GaN/AlN superlattices can be used as a replacement for AlGaN solid solutions in the emitter layers of optoelectronic and electronic devices [1]. They can also be used to create unipolar devices operating on the basis of intersubband transitions in the near-IR range [2]. In addition, the ability to control elastic stresses in SLs allows them to be used as transition layers in structures with a strong lattice mismatch (for example, when growing structures on silicon substrates). From a fundamental point of view, short-period SLs can be considered as a metamaterial, simultaneously possessing both a number of properties inherent in ordinary solid solutions (effective band gap, average lattice parameter) and a number of completely new properties that are not yet fully studied.
The effective use of such periodic structures requires a detailed study of their fundamental physical properties, as well as the development of new quantitative diagnostic methods in order to improve their growth technology. Raman spectroscopy is a recognized tool for the non-destructive study of the phonon spectrum of SLs with a high spatial resolution. Model methods for the study of phonon spectra form the basis for the quantitative analysis and interpretation of experimental information obtained by vibrational spectroscopy. The existence of localized and delocalized modes in GaN/AlN SLs at frequencies different from the frequencies of any of the modes inherent in the layers that comprise the SLs was predicted in theoretical studies carried out in the framework of the dielectric continuum model [3][4][5]. It was shown that phonons propagating in the growth direction of the SL have a localized nature, while the phonons propagating in the perpendicular direction are delocalized. A joint analysis of theoretical and experimental results makes it possible not only to interpret experimental results more reliably, but also provides a basis for developing a quantitative methodology for estimating important parameters of the studied SLs from Raman spectroscopy data. However, to the best of our knowledge, the results of ab initio calculations of the phonon modes of GaN/AlN SLs are given in only two papers, and they do not agree very well with the experimental data [6,7].
Experimental details
GaN/AlN SLs were grown in a MOVPE system with a horizontal flow reactor at a temperature of 1050 °C on (0001) sapphire substrates using a double AlN-GaN buffer layer. The SL period dSL varied from 2 to 6 nm, and the thickness of the structures ranged from 0.3 to 1 μm [8]. GaN/AlN SLs were also grown by plasma-assisted molecular-beam epitaxy (PA MBE) on AlN/c-Al2O3 templates. Digital alloying epitaxy (DAE), with short-term interruptions of the Al flux under a continuous Ga flux resulting in Ga-rich growth conditions, was used for the SL growth. Excess Ga over all GaN/AlN structures was evaporated by means of post-growth annealing of the structures. The period of the SLs grown by this technique also varied from 2 to 6 nm, and the thickness of the structures ranged from 0.3 to 0.6 μm [9]. The high quality of the GaN/AlN SLs was confirmed by high-resolution X-ray diffraction (XRD), high-resolution transmission electron microscopy, and Raman spectroscopy. Micro-Raman measurements were carried out at room temperature with a Horiba Jobin-Yvon T64000 triple spectrometer at the 532 nm excitation line. All spectra were measured in a backscattering geometry with the z direction oriented along the c-axis of the SL.
Ab initio calculations
The ab initio calculations were carried out within the local density approximation (LDA) to density functional theory (DFT) as realized in the ABINIT software package [10][11][12]. The norm-conserving pseudopotentials from [13] were used, with the 3d electrons being considered as valence ones. It was found that the convergence of the total energy within 0.1 mHa was achieved with an energy cutoff of 45 Ha. The k-point grid was chosen according to the Monkhorst-Pack scheme [14] as 6×6×4. Full geometry optimization was performed by varying both the parameters of the cell and the positions of the atoms in the unit cell. As a result, the internal pressure has been found to be less than 10^-5 GPa. The phonon wave vectors and frequencies were obtained at the Γ point of the Brillouin zone within density functional perturbation theory (DFPT) [15,16]. The Raman spectra were simulated from the Raman tensor calculated with perturbation theory, whose components are the third-order total-energy derivatives (with respect to the electric field and the atomic displacements). Applying the (2n + 1) theorem, the Raman tensor can be found from the calculated first-order corrections to the wave functions of the ground state [17].
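For illustration, the fractional coordinates of an unshifted Monkhorst-Pack mesh such as the 6×6×4 grid quoted above can be generated with a few lines of Python. This is only a sketch of the generic Monkhorst-Pack construction; the actual ABINIT input file and any grid shift used by the authors are not reproduced here.

```python
import itertools

def monkhorst_pack(q1, q2, q3):
    """Fractional k-points of an unshifted q1 x q2 x q3 Monkhorst-Pack grid.

    Along each axis the coordinates are u_r = (2r - q - 1) / (2q), r = 1..q,
    which places the points symmetrically around the zone centre.
    """
    def axis(q):
        return [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return list(itertools.product(axis(q1), axis(q2), axis(q3)))

kpts = monkhorst_pack(6, 6, 4)
print(len(kpts))  # 144 k-points before symmetry reduction
```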
Group-theoretical analysis
In Ref. [18], the symmetry of (GaN)m(AlN)n superlattices was established to be described by two space groups depending on the number of atomic layers (m+n) per primitive unit cell: C (m+n is odd) and C (m+n is even). In the SL family with C symmetry, the primitive unit cell along the z-axis increases (m+n)/2 times and the number of atoms per primitive unit cell also increases (m+n)/2 times. The symmetry of the normal modes at the Brillouin zone center is Γ_ac + Γ_opt = 2(m+n)(A1 + E); Γ_ac = A1 + E. All optical modes are Raman-active and their number equals 4(m+n) − 2.
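As a quick numerical check of these counting rules, the snippet below simply encodes the decomposition quoted above (2(m+n) A1 modes plus 2(m+n) doubly degenerate E modes, of which one A1 + one E pair is acoustic), so that the number of Raman-active optical modes 4(m+n) − 2 can be tabulated for any even-period SL. It is an illustrative sketch, not part of the original analysis.

```python
def sl_mode_count(m, n):
    """Zone-centre mode content of a (GaN)m(AlN)n SL with even m+n,
    following Gamma = 2(m+n)(A1 + E) with Gamma_ac = A1 + E."""
    assert (m + n) % 2 == 0, "these formulas are quoted for even m+n"
    a1_total = 2 * (m + n)                 # A1 modes (one-dimensional)
    e_total = 2 * (m + n)                  # E modes (doubly degenerate)
    a1_opt = a1_total - 1                  # remove the A1 acoustic mode
    e_opt = e_total - 1                    # remove the E acoustic mode
    return a1_opt, e_opt, a1_opt + e_opt   # last value equals 4(m+n) - 2

print(sl_mode_count(4, 4))  # (15, 15, 30) for (GaN)4(AlN)4
print(sl_mode_count(6, 6))  # (23, 23, 46) for (GaN)6(AlN)6
```

The degrees of freedom are consistent: 2(m+n) atoms per cell give 6(m+n) branches, and 2(m+n)·1 + 2(m+n)·2 = 6(m+n).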
Table 1. Phonon modes (optical, acoustic*) of bulk GaN and AlN and the corresponding SL modes observed in Raman spectra; the asterisk indicates SL optical and acoustic modes originating from acoustic modes of the bulk crystals.
The genesis of the SL normal modes from the modes of bulk GaN and AlN crystals is very important for the interpretation of the Raman spectra. In Table 1, the sets of optical and acoustic modes originating from the bulk modes are given for SLs with even m+n = 2k. It should be noted that the SL optical modes originate both from optical and from acoustic bulk modes.
Let us consider in detail the transformation of the bulk modes into SL modes. The (GaN)1(AlN)1 SL is obtained from bulk GaN by substituting one Ga atom in the primitive unit cell with Al. As a result, the number of acoustic modes does not change, whereas their symmetry changes. In the frequency range of the optical modes, the A1 and B1 bulk modes transform into the A1 modes of the (GaN)1(AlN)1 SL. The number of the SL A1 modes originating from the B1 bulk modes is two times larger than the number of the SL A1 modes originating from the A1 bulk modes. Similarly, the number of the SL E modes originating from the E2 bulk modes is two times larger than the number of the SL E modes originating from the E1 bulk modes.
It can be seen that such a ratio is maintained when the SL period is increased. In this case, the highest and the lowest branches resulting from the successive splitting of branches will tend to the bulk GaN and AlN branches, respectively.
The optical A1 modes of the SLs originating from the acoustic A1 bulk modes should be observed in the low-frequency range of the SL Raman spectra. Their number equals [(m+n)/2 − 1]. A similar situation also takes place for the E modes of the SLs. In the SL family with the C symmetry, all modes are IR-active and, as a result, have LO-TO splitting. Nevertheless, a pronounced LO-TO splitting can be expected only for the A1 and E modes originating from the A1 and E1 bulk modes. Their number equals (m+n)/2 both for the A1 and for the E SL modes.
Based on the genesis of vibrational modes, one can make a conclusion on frequencies and intensities of the corresponding lines in the Raman spectra. Thus, we can predict that the A 1 modes originating from the B 1 silent modes will have Raman tensors with small components and will not be observed experimentally. The strongest A 1 Raman lines originate from the A 1 bulk modes. The number of these lines is equal to (m+n)/2. For example, in the (GaN) 4 (AlN) 4 SLs, eight A 1 modes originating from the B 1 silent modes could not be experimentally observed. As a result, the strongest spectral lines predicted by theory are the 7A 1 and 7E ones, which agrees with the experiment.
Results and discussion
In order to verify the theoretical approach, the calculation of the structural and dynamical properties of wurtzite AlN and GaN single crystals was performed. The structural parameters and phonon frequencies of the bulk crystals were then used as a reference for the analysis of the GaN/AlN SLs, in which two types of modes can be distinguished. The modes of the first type are the delocalized ones. The A1 mode (ν = 576.4 cm-1) of the (GaN)4(AlN)4 SL, genetically connected with the A1 mode of the bulk crystals, is a good example of a delocalized one. The atomic displacement pattern of this A1 mode is plotted in Figure 1(a). This mode corresponds to the displacements of the anion sublattice with respect to the cation one in the opposite direction along the c axis both in the GaN and in the AlN layers. The modes of the second type are the localized modes. These modes involve the displacements of atoms only in the GaN (or AlN) layers, while the atoms of the other AlN (or GaN) layers almost do not move. The analysis of the calculated phonon spectra shows that the confinement of phonon modes is valid even for SLs with the thinnest layers (m+n = 4). The (GaN)4(AlN)4 E modes, which are genetically connected with the E1 mode of the bulk crystal, are a good example of the localized ones. The atomic displacements of these modes are plotted in Figures 1(b,c). The conclusion about the two types of modes inherent in the short-period GaN/AlN SLs is consistent with the results of earlier works [3][4][5].
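One simple way to quantify the localized or delocalized character discussed above is to measure the fraction of a mode's squared eigenvector carried by the atoms of one constituent layer. The helper below is an illustrative post-processing sketch under that assumption; it is not the authors' analysis code, and the array layout is hypothetical.

```python
import numpy as np

def layer_weight(eigvec, atom_layers, layer="GaN"):
    """Fraction of the squared displacement pattern residing on atoms of one layer.

    eigvec: (n_atoms, 3) displacement pattern of a zone-centre mode
    atom_layers: one label per atom, e.g. "GaN" or "AlN"
    Values close to 1 indicate a mode confined to that layer; a delocalized
    mode in an SL with m = n gives a value near 0.5.
    """
    eigvec = np.asarray(eigvec, dtype=float)
    weights = (eigvec ** 2).sum(axis=1)
    weights /= weights.sum()
    mask = np.array([label == layer for label in atom_layers])
    return weights[mask].sum()
```

Applied to the computed eigenvectors, such a measure would return values close to 1 for the confined E(TO) modes and intermediate values for the delocalized A1(TO) mode.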
In order to study the local structural changes by elucidating the relationships between the SL structure and the SL phonon spectra, the Raman spectra of (GaN)m(AlN)8-m (m = 3, 4, 5) and (GaN)m(AlN)12-m (m = 5, 6, 7) SLs were calculated, and the result of the simulation is plotted in Figure 2. Additional comments are needed to explain the details of the Raman spectra simulations for the localized modes. The scattering efficiency [9] could be expressed by equation (1), where Ω is the angle of collection in which the outgoing light is scattered, ωm is the frequency of the phonon involved in the scattering process, e0 is the unit vector of the incident light polarization with frequency ω0, eS is the unit vector of the scattered light polarization with frequency (ω0 − ωm), cl is the light velocity in vacuum, n(ωm) + 1 is the Bose-Einstein occupation factor, kB is the Boltzmann constant, T is the temperature (in K), and αm is the Raman susceptibility tensor proportional to the derivative of the electronic linear dielectric susceptibility tensor with respect to the atomic displacements τ in mode m, which could be calculated within DFPT according to equation (2), where u_m are the eigendisplacements of atom k along the direction β in mode m. As follows from equation (1), one has to take into account the influence of the angle of collection (Ω) in which the outgoing light is scattered.
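For reference, the standard first-order Stokes Raman scattering efficiency used in DFPT-based simulations (e.g., in ABINIT) has the form below. This is a hedged reconstruction of equations (1) and (2) under the assumption that the authors follow the conventional expression, not a verbatim copy of their formulas.

```latex
\frac{dS}{d\Omega}=\frac{(\omega_0-\omega_m)^4}{c_l^{\,4}}\,
\left|\mathbf{e}_S\cdot\boldsymbol{\alpha}^{m}\cdot\mathbf{e}_0\right|^{2}
\frac{\hbar}{2\omega_m}\bigl[n(\omega_m)+1\bigr],
\qquad
n(\omega_m)=\frac{1}{e^{\hbar\omega_m/k_B T}-1},
\qquad
\alpha^{m}_{ij}\propto\sum_{k,\beta}
\frac{\partial\chi_{ij}}{\partial\tau_{k\beta}}\,u^{m}_{k\beta}.
```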
It was experimentally established in the present work that the absolute intensity of the GaN E1 bands is twice that of the AlN bands, while the calculated scattering cross section for AlN is higher than that for GaN. Such a large difference is significant in the theoretical simulation of the intensity of the confined Raman modes. Hence, in order to reproduce the experimental spectra, the intensities of the E-symmetry modes confined in the GaN layers were multiplied by a factor of 2.0.
It was found that the peak positions and intensities are very close in the simulated Raman spectra of the (GaN)4(AlN)4 and (GaN)6(AlN)6 SLs. In turn, in the Raman spectra of all SLs with m ≠ n, drastic changes in the positions of both the A1 modes and the E modes should manifest themselves. Thus, calculations performed for (GaN)m(AlN)8-m SLs (m = 3, 5) predict a shift of the delocalized A1 line position either toward lower frequencies (~10 cm-1) or toward higher frequencies (~9.5 cm-1) for m = 3 and m = 5, respectively. The calculations predict similar effects, though with smaller frequency shifts, also for the delocalized mode in (GaN)m(AlN)12-m SLs (m = 5, 7). For the localized E modes, strong frequency shifts occur with increasing AlN thickness. Besides, the calculations show a strong dependence of the localized phonon intensity on the thickness of the layers forming the SL: the greater the layer thickness, the greater the Raman intensity of the phonon localized in this layer, and, vice versa, the smaller the layer thickness, the lower the intensity (Figure 3(a) and Figure 3(c)). The baric behavior of the delocalized mode was studied by simulating uniaxial and biaxial strain applied to the (GaN)4(AlN)4 SL. It was found that the frequency of the delocalized A1(TO) mode increases with a slope of 0.45 cm-1/kbar in the case of uniaxial strain and decreases with a slope of 0.4 cm-1/kbar in the case of applied biaxial strain. Thus, one may assume that the mode is almost insensitive to the strain in the SL. The conclusion about the delocalized nature of the A1(TO) mode is fully consistent with the results of the theoretical study of optical phonons in the short-period GaN/AlN SLs in the framework of the dielectric continuum model [4]. The conclusion about the weak effect of strain on the A1(TO) mode is also consistent with the results reported in Ref. [22], where, within the framework of the same model, it was shown that the elastic strain in the SL layers, caused by matching the GaN and AlN crystal lattice constants during SL growth, has a weak effect on the delocalized mode frequency. However, in Ref. [7], where the phonons of short-period (GaN)m(AlN)8-m SLs with m = 3, 4, 5 were calculated using DFT in the local-density approximation, the nature of the A1 mode was ascribed to be mostly GaN-type. As a result, the variation in the frequencies of this mode with the number of AlN layers in the SL was attributed to changes of the lattice constant a as well as the c/a ratio in the SL. The experimental Raman spectra measured on these SLs are also shown for comparison. It can be seen that the calculated and measured spectra are in good qualitative agreement. The insets to the experimental spectra show the dependence of the frequency of the A1(TO) mode on the relative thickness of the AlN layer in the short-period GaN/AlN SLs proposed in Ref. [22]. The good agreement of the experimental data with this dependence, obtained for all the SLs studied, confirms the validity of the theoretical predictions obtained in the present paper. Thus, the A1(TO) mode should reflect the averaged characteristics of the SL, and the correlation dependence between the SL structure and the frequency value of the delocalized polar phonon A1(TO) can be used to quantify the Al(Ga) concentration averaged over the GaN/AlN SL period. Combining this information with an estimation of the total SL period obtained from the Raman spectra of the folded acoustic phonons [23], one can determine the absolute values of the layer thicknesses of the multilayer structure.
The baric behavior of the localized modes was also studied by simulating uniaxial and biaxial strain applied to the (GaN)4(AlN)4 SL. It has been found that the E(TO) mode is very sensitive to the biaxial strain. Calculations show that the frequencies of the modes localized in the GaN layers turn out to be higher, and those of the modes localized in the AlN layers turn out to be lower, than their values in bulk GaN and AlN crystals. From this we can conclude that the GaN layers forming the SL are compressed in the plane, whereas the AlN layers are stretched in the plane. This conclusion is in good agreement with the results of [4,7]. Figures 3(a) and 3(c) show the simulated Raman spectra of the E(TO) modes for (GaN)m(AlN)8-m (m = 3, 5) and (GaN)m(AlN)12-m (m = 5, 7) SLs, respectively. Figures 3(b) and 3(d) show the experimental Raman spectra measured on such SLs. It can be seen that the calculated and measured spectra are in good qualitative agreement. Thus, the positions of the lines in the Raman spectra corresponding to the localized modes can be used to quantify the magnitude of strain in the individual layers forming the SL.
Conclusion
Experimental Raman spectra were obtained for short-period GaN/AlN SLs grown by MOVPE and PA MBE on the (0001) Al2O3 substrate. A comprehensive group-theoretical analysis made it possible to establish the genesis of the SL phonon modes from the modes of bulk AlN and GaN crystals. Based on the genesis of the phonon modes, conclusions on the frequencies and intensities of the corresponding lines in the Raman spectra of the SL have been made. The lattice dynamics and structural properties of the (AlN)m(GaN)n SLs (m+n ≤ 12) have been studied by ab initio calculations within the framework of density functional theory. As a result, the phonon mode frequencies were calculated and the atomic displacement patterns were established. The number and symmetry of the vibrational modes in the calculated spectrum are in complete agreement with the results of the group-theoretical analysis. This led to the conclusion on the microscopic nature of the vibrational modes of the SL.
It has been established that the E(TO) modes are localized in the constituent SL layers and can be used to obtain information about the individual characteristics of each layer forming the SL. It is shown that the localized nature of the modes of this symmetry is preserved even in the SL with the thinnest constituent layers, i.e. for m+n = 4. In turn, the A1(TO) mode has a delocalized nature. This allows one to use the parameters of this mode to estimate the averaged characteristics of the SL as a whole. On the basis of the Raman susceptibility tensor, the theoretical Raman spectra were calculated and compared with the experimental ones. The results of the ab initio calculations are in good agreement with the experimental Raman data. The correlation dependencies between the SL structure and the frequencies of the localized and delocalized polar phonons were obtained. Hence, the results of the above studies form the basis for the quantitative estimation of both the strain in the individual layers forming the SL and the Al(Ga) content averaged over the SL period. They also open new possibilities for analyzing other important parameters of short-period GaN/AlN SLs. | 2019-12-12T10:32:59.516Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "16766ea3982cf28020c4214685266363f03e988d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1400/6/066016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c6beb1e68997446931666c18b0641e6d4b7ac659",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
267670028 | pes2o/s2orc | v3-fos-license | Optimization of stope structure parameters by combining Mathews stability chart method with numerical analysis in Halazi iron mine
When employing the open stope mining method for extracting materials in underground mines, it is necessary for personnel and equipment to operate within a specified range of the goaf area; thus, adoption of appropriate mining parameters in the mining field is crucial for promoting safe production practices. In this study, the Halazi iron mine is taken as the research subject, and theoretical analysis is combined with numerical simulation to analyze the width parameter of the mining field. Theoretical analysis utilizing the Mathews stability chart method confirms that the width of the mining field in the Halazi iron mine should not exceed 9 m. Based on this finding, the mechanical response characteristics accompanying the mining process at different widths within the mining field limits are analyzed via the Flac3D numerical analysis software. As a result of this analysis, the optimal width of the mining field in the Halazi iron mine is determined to be 8 m. The method utilized in this article provides a rational approach to determining structural parameters of the mining field and can effectively aid in promoting mine safety practices.
Introduction
During the underground mining process, when the open stope mining method is employed for ore extraction, it is often necessary to determine appropriate parameters for the mining layout, such as the location, direction, and dimensions of mining rooms and pillars [1]. This is done to ensure that the mining excavation remains in a favorable mechanical state, promoting a uniform distribution of stress and strain in the surrounding rock mass [2]. The aim is to prevent stress concentration, energy concentration, and deformation failure, ultimately achieving effective ground pressure control or ground management [3][4][5][6].
The study of reasonable mining layout parameters for the open stope mining method often requires researchers to optimize through research and experimental means [7][8][9][10][11]. Some scholars have used the principles of mechanics to analyze and research the design of underground mining layout parameters by establishing theoretical formulas and calculation methods. These methods were then practically applied, considering specific mining conditions and environmental factors. For example, using theories related to rock mechanics, researchers analyzed and studied the structural design of mining rooms and pillars, stress distribution, and conditions leading to instability in a given mining area [12]. They summarized and derived calculation methods for pillar dimensions and identified patterns of strength variation with respect to size parameters. However, it should be noted that the mining process is influenced by the complex, uncertain, and ambiguous nature of the internal and external environment of the rock mass [13]. While a specific mathematical model may work well in certain mining situations, it may not provide a comprehensive and rational description for all cases. Therefore, theoretical formula calculation methods should be regarded as design and calculation references rather than universally applicable solutions [14].
In addition, traditional analytical methods in mechanics typically involve establishing mathematical and mechanical models based on the geometric characteristics of the mining face, mechanical properties of the rock mass, initial stress conditions, boundary conditions, etc. [15]. These models are then utilized to solve for mechanical variables such as stress, strain, and displacement within the rock mass, allowing for the assessment of rock mass stability. However, due to issues such as nonlinearity, anisotropy of the medium, variations in properties with time and temperature, and complex boundary conditions, traditional analytical methods often struggle to address practical problems effectively [16].
With the development of computers and advancements in computing technology, numerical analysis methods have become valuable tools for solving underground engineering problems. Numerical analysis methods have gained widespread application in mining engineering due to their ability to overcome limitations of classical analytical methods. Through the use of numerical analysis methods, it becomes possible to effectively solve the mechanical response characteristics of surrounding rock under different mining layout parameter sizes in void mining [17]. Additionally, computer graphic algorithms can be utilized to display contour maps of stress and displacement, providing a visual representation of the mechanical effects during the mining process. This facilitates the optimization of mining layout sizes for researchers in a convenient manner [18,19].
This study conducted a simulation analysis and optimization study on the structural parameters of mining recovery in the Halazi Iron Mine No. 2 mining area. The theoretical limit of the mining width was analyzed by the Mathews stability chart method. In order to reduce theoretical analysis errors, the Flac3D numerical analysis software was used to analyze the rock mass stability characteristics under different mining widths, ultimately obtaining the reasonable structural parameters of the mine. This study innovatively combines numerical simulation and theoretical methods to determine reasonable mining parameters, and aims to provide theoretical support for the mine to reasonably select recovery structure parameters such as the mining width.
Geology
The Halazi Iron Mine No. 2 mining area (Fig. 1), owned by Jilin Jinding Mining Co., Ltd, is located in Huinan County, Jilin Province. It is situated at a direction of 141° from the county seat, at a straight-line distance of 39 km. It is also located at a direction of 63° from Huangnigang Village, Jinchuan Town, Huinan County, with a straight-line distance of 2.1 km. Administratively, it falls under the jurisdiction of Huangnigang Village, Jinchuan Town, Huinan County, Jilin Province.
The average thickness of the main mining ore body is 6.06 m, with an average dip angle of 30°. The surrounding rock of the ore body mainly consists of diorite and dioritic gneiss. The ore body and the surrounding rock have consistent geological structures. The structural conditions in the area are simple, without any structural damage or weak interlayers. The mechanical parameters of the ore body and gneiss were obtained through previous rock mechanics tests, as shown in Table 1.
The current mining method used in the mine is primarily the "leaving ore comprehensive method" for ore extraction. However, because of the limitation of the original design on the mining width, the mine's production capacity is difficult to increase, which affects the actual production efficiency of the mine. If the mining width is increased hastily, it can lead to more severe ground pressure and potential collapse of the surrounding rock at the top of the goaf, posing risks to the safety of equipment and personnel within the mining area. Therefore, the scientific and rational selection of the mining width is of significant research importance for achieving efficient and safe recovery in the Halazi Iron Mine No. 2 mining area.
Stope width optimization based on Mathews stability chart method
(1) Mathews stability chart method
In order to assess the roof safety in the Halazi Iron Mine No. 2 mining area under different mining widths, it is necessary to conduct an analysis using theoretical methods. The Mathews stability chart method is based on the rock mass classification system [20][21][22]. By collecting and analyzing a large amount of actual mining parameters, a stability chart was established, depicting the relationship between the rock mass stability index N and the shape coefficient S of the exposed mining face area [23].
The approximate steps for determining the structural parameters of a mining field using the Mathews stability chart method are as follows: 1. Firstly, calculate the stability index N of the rock mass based on the relevant formulas. 2. Then, considering the overall mining development and mining excavation engineering, preliminarily determine the structural parameters of the mining field, and calculate the shape coefficient S representing the exposed area of the mining field. 3. Finally, plot N and S onto the Mathews stability chart to make an initial assessment of the stability of the mining field.
(1) Calculation of stability index N
The stability index N can be calculated as follows (Eq. (1)):
N = Q′ × A × B × C (1)
where Q′ is the modified rock mass quality index (the structural parameter value), A represents the stress coefficient, B represents the rock mass defect directional correction coefficient, and C represents the design exposure face directional correction coefficient. The magnitudes of coefficients A, B, and C are determined by the mining orebody dip angle and the mechanical parameters of the surrounding rock [24].
Considering only the stability coefficient, for the Halazi Iron Mine 2nd Deposit, the Rock Mass Rating (RMR) is 47, and the structural parameter value Q′ is assumed to be 1.40. The parameters for coefficients A, B, and C can be referenced from the literature [25], which will not be specifically mentioned here. Based on calculations, the value of coefficient A for the Halazi Iron Mine 2nd Deposit is 0.85, the value of coefficient B is 0.3, and the value of coefficient C can be taken as 1.00. Using formula (1), the calculated value of N for the Halazi Iron Mine 2nd Deposit is 0.36.
(2) Calculation of shape coefficient S
The shape coefficient S can be defined as the ratio of the area of the exposed face to its circumference, as follows (Eq. (2)):
S = (X × Y) / (2(X + Y)) (2)
where X represents the width of the mining field and Y represents the length of the mining field. It is worth noting that when the ratio of the exposed face length to width is 4:1, the shape coefficient S remains relatively constant. This means that, under this condition, the stability of the exposed face is primarily influenced by the width of the exposed face. The dimensions of the mining excavation are determined based on the original mining design dimensions. Specifically, the height of the excavation is 27 m and the length is 44 m. The original mining excavation had a width of 6 m. To determine the appropriate increase in excavation width, seven different width options were designed, as shown in Table 2.
The stability index N is obtained through calculations, and the limiting stability shape coefficient S of the excavation roof is determined to be 3.65 based on Fig. 2. This means that when the shape coefficient of the excavation exceeds 3.65, it surpasses the stability-failure boundary and the roof collapses. Referring to Table 2 for the shape coefficients of the excavation roof, it can be observed that when the excavation width is 8 m, the shape coefficient is 3.38, below the stability limit of 3.65. However, when the excavation width is 9 m, the shape coefficient is 3.73, above the stability limit of 3.65. Therefore, according to the Mathews stability chart method, the excavation width of the Halazi Iron Mine should not exceed 9 m.
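To make the arithmetic above easy to reproduce, the short sketch below recomputes the stability number and the shape coefficient with the values quoted in the text (Q′ = 1.40, A = 0.85, B = 0.3, C = 1.00, excavation length 44 m). The 3.65 limit read from the chart is taken as given, and the 6-12 m range is only an assumption standing in for the seven width options of Table 2.

```python
def stability_number(q_prime, a, b, c):
    """Mathews stability number N = Q' * A * B * C."""
    return q_prime * a * b * c

def shape_coefficient(width, length):
    """Shape coefficient S = exposed-face area / circumference = XY / (2(X + Y))."""
    return (width * length) / (2 * (width + length))

N = stability_number(q_prime=1.40, a=0.85, b=0.3, c=1.00)
print(f"N = {N:.2f}")                       # ~0.36, as reported

S_LIMIT = 3.65                              # stability-failure boundary read from Fig. 2
for width in range(6, 13):                  # candidate stope widths (m), length fixed at 44 m
    s = shape_coefficient(width, 44)
    verdict = "stable" if s < S_LIMIT else "exceeds the stability boundary"
    print(f"width {width} m: S = {s:.2f} -> {verdict}")
```

Running the loop reproduces the values quoted above: S = 3.38 for an 8 m width and S = 3.73 for a 9 m width.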
Flac 3D model construction and calculation method
To further accurately determine the dimensions of the excavation, the Flac3D numerical analysis software [26] is employed in this study to analyze the mechanical response of the mining body under varying excavation widths. Flac3D is capable of simulating the mechanical behavior of three-dimensional media in engineering structures [27]. During the calculation process, material yielding and rheology can occur, allowing for the resolution of significant deformations and localized failure problems commonly encountered in geotechnical engineering.
(1) Model dimensions
To perform the mechanical response analysis of mining activities using Flac3D, it is first necessary to construct a numerical analysis model [28], as shown in Fig. 3. The overall dimensions of the numerical model built for this study are 947 m × 990 m × 500 m. Based on the ore body occurrence conditions of Halazi Iron Mine No. 2, the stress redistribution range is approximately three times the maximum excavation size of the mining area. Within this range, the stress field is considered as a secondary stress field, while the rock mass beyond this range is regarded as being in its original stress state. Therefore, the direction perpendicular to the ore body strike is defined as the X direction with a length of 947 m, the direction parallel to the ore body strike is defined as the Y direction with a total length of 990 m, and the actual vertical direction is defined as the Z direction with a total height of 500 m.
The design sets the roof pillar width at 3 m, the inter-pillar width at 6 m, and the panel height at 30-40 m. The panel width limitations will be determined based on the Mathews stability chart method, and the analysis will be conducted with panel widths of 6 m, 7 m, 8 m, and 9 m. The physical and mechanical properties of the surrounding rocks and ore body can be referred to in Table 1. By simulating the entire mining process until completion, the mechanical response indicators will be comprehensively evaluated to determine whether collapse will occur in the panel. This analysis aims to obtain the optimal panel width suitable for Halazi Iron Mine No. 2, ensuring safe and efficient mining operations.
(2) Analysis of grid dimensions
According to numerous studies conducted both domestically and internationally, it has been shown that the size of the grid cells is one of the important factors affecting the computational results when using FLAC3D for simulation research [29,30]. In this study, a tetrahedral grid partitioning method was employed for the construction of the model, with the following specifications: the FLAC3D overall grid model constructed for the Halazi Iron Mine Deposit No. 2, including the ore body and surrounding rock, consists of approximately 1,597,578 cube elements of various sizes and 12,768,108 nodes. Extensive preliminary sensitivity analyses were conducted to validate the reliability of the numerical calculation model in terms of boundary sizes and grid density.
(3) The analysis of mining sequence
Based on the division of the mining middle sections according to the mine plan, this simulation study aims to investigate the mechanical response indicators of the surrounding rock and ore body during the excavation of the stopes in the 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m middle sections using the comprehensive sublevel stope method. Furthermore, stope width optimization will be conducted based on these findings. In the numerical simulation calculations, the empty stope method was employed, with a stope width of 6 m, 7 m, 8 m, or 9 m, a middle section height of 30 m, pillar widths of 3 m and 6 m, and block lengths of 50 m. To ensure that the analysis is close to actual production, direct simulations of the mining process are conducted for each middle section ore body. The mining sequence for the middle sections is as follows: first, the ore body in the 310 m middle section is mined, followed by the 280 m, 250 m, 220 m, 190 m, and 150 m middle sections in sequence. Within each middle section, the stopes are mined from south to north. The calculation steps are shown in Table 3.
(4) Instability criterion
Due to the influences of geological factors such as rock mechanical properties, groundwater, rock mass structure and tectonics, stope dimensions, support methods, and mining practices, the stability of stope surrounding rock is difficult to assess using a universally applicable mathematical model. Therefore, a comprehensive judgment can be made by considering actual observations, monitoring data, and the results of mechanical calculations [31]. For stope excavation under specific geological conditions, a numerical model can be established to calculate the stress, displacement, and plastic zone distribution of the surrounding rock before and after stope excavation and support. The stability of the rock mass can then be assessed by analyzing the distribution of stress, displacement, and plastic zones [32].
The allowable maximum displacement refers to the maximum subsidence of the stope roof in order to ensure that no harmful loosening occurs in the stope and no hazardous surface subsidence occurs, until deformation stability is achieved [33]. Based on the practical experience of unsupported or temporarily supported large-span underground stope excavation in mines, there is a corresponding relationship between the displacement of the surrounding rock of the large-span underground stope and stope stability. According to Table 4, in the subsequent simulation and analysis of stope structural parameters, if the subsidence exceeds 45 mm, it is considered that the engineering structure of the rock mass has been damaged.
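A minimal sketch of the displacement-based check described above is given below. The 45 mm limit is the only threshold quoted in the text, so the function simply flags any roof subsidence that exceeds it; the finer classification of Table 4 is not reproduced, and the dictionary only restates the subsidence values reported later for the four simulated widths.

```python
ALLOWABLE_ROOF_SUBSIDENCE_MM = 45.0   # limit quoted for the Halazi stope analysis

def roof_is_stable(max_subsidence_mm):
    """Return True if the simulated roof subsidence stays within the allowable limit."""
    return max_subsidence_mm <= ALLOWABLE_ROOF_SUBSIDENCE_MM

# Maximum roof/hanging-wall subsidence reported for each simulated stope width (mm)
simulated = {6: 26.75, 7: 30.56, 8: 42.91, 9: 45.72}
for width, subsidence in simulated.items():
    status = "stable" if roof_is_stable(subsidence) else "exceeds allowable displacement"
    print(f"stope width {width} m: {subsidence} mm -> {status}")
```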
(1) The stope width is 6 m.
After excavation of the mining area, the initial ground stress in this region is released and transferred. The maximum principal stress occurs extensively in the hanging wall at the mid-sections of 310 m, 280 m, 250 m, and 220 m. Similarly, the maximum principal stress also appears in the northern area at the mid-sections of 190 m and 150 m. The maximum value of the tensile stress is approximately 0.079 MPa (Fig. 4(a)). The maximum value of the minimum principal stress mainly appears in the pillars at the mid-section of 150 m, with a maximum compressive stress value of 29.85 MPa (Fig. 4(b)). The maximum settlement displacement primarily occurs in the hanging wall at the middle section of 150 m, with a maximum settlement displacement of approximately 26.75 mm. The floor of the mining area shows floor heave, with a heave displacement of approximately 14.83 mm (Fig. 4(c)).
The maximum value of the maximum principal stress in the footwall occurs extensively at the mid-sections of 310 m, 280 m, 250 m, and 220 m. At the end face of the 190 m section and the northern area of the 150 m section, the maximum value of the maximum principal stress is also found in the footwall. The maximum tensile stress value is approximately 0.080 MPa (Fig. 4(a)). The minimum principal stress mainly appears in the pillars at the mid-section of 150 m, with a maximum compressive stress value of 29.85 MPa (Fig. 4(b)). At this stage, significant floor heave displacement occurs in the footwall, with a maximum heave displacement of approximately 21.62 mm. The maximum settlement displacement of the mining roof is approximately 21.16 mm (Fig. 4(c)).
According to the stress evolution curve shown in Fig. 4(d), as the mining extends deeper into the ore body, the maximum tensile stress remains constant while the maximum compressive stress gradually increases. Additionally, as the mining extends deeper into the ore body, the maximum subsidence displacements of the hanging wall and roof and the heave displacements of the footwall and floor gradually increase. However, they do not exceed the permissible displacement limit of 45 mm (Fig. 4(e)). Therefore, when the mining area has a width of 6 m, the mining area remains relatively stable overall throughout the entire extraction process.
(2) The stope width is 7 m.
After excavation, tensile stresses occur in the hanging wall at the mid-sections of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m. The maximum tensile stress is located in the hanging wall, approximately 0.079 MPa. The minimum principal stress mainly appears in the pillars at the mid-section of 150 m, with a maximum compressive stress value of 30.64 MPa (Fig. 5(a)). The maximum settlement displacement primarily occurs at the intersection between the hanging wall and the roof at the middle section of the mining area, with a maximum settlement displacement of approximately 30.56 mm. Bottom heave is observed at the junction of the footwall and the roof of the 150 m mid-section mining area, with a heave displacement of approximately 25.98 mm (Fig. 5(c)).
At depths of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m, there is a large area of tensile stress in the footwall. Additionally, tensile stress is observed in the working face floor at the 150 m section. The maximum tensile stress is located in the footwall, approximately 0.079 MPa (Fig. 5(a)). The minimum principal stress mainly occurs within the pillars between the 150 m sections, with a maximum compressive stress value of 30.64 MPa (Fig. 5(b)). There is bottom heave displacement at the junction of the footwall and the working face roof at the 150 m section, with a maximum displacement of approximately 25.98 mm, and the sinking displacement of the working face roof is approximately 30.56 mm (Fig. 5(c)).
Table 4. The corresponding relationship between the displacement of surrounding rock and the stability of the stope.
According to the stress evolution curve (Fig. 5(d)), as the mining extends to deeper levels, the maximum tensile stress remains constant while the maximum compressive stress gradually increases. From the displacement evolution curve (Fig. 5(e)), it can be observed that as the mining extends to deeper levels, the maximum subsidence of the hanging wall, the roof sinking displacement, the bottom heave displacement of the footwall, and the floor heave displacement gradually increase, but they do not exceed the permissible displacement boundary of 45 mm. Therefore, when the mining width is 7 m, the overall mining process is relatively stable.
(3) The stope width is 8 m.
After the excavation, the initial ground stress in the area has been released and transferred. Tensile stress is observed in the hanging wall at depths of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m. The maximum tensile stress is located in the hanging wall, approximately 0.079 MPa (Fig. 6(a)). The minimum principal stress mainly occurs within the pillars between the 150 m sections, with a maximum compressive stress value of 31.58 MPa (Fig. 6(b)). The maximum subsidence displacement mainly occurs at the junction of the middle section of the mining face and the overlying surrounding rock, with a maximum subsidence displacement of approximately 42.91 mm. Bottom heave is observed at the junction of the footwall and the working face roof at the 150 m section, with a heave displacement of approximately 35.26 mm (Fig. 6(c)).
Fig. 6. Mining mechanical response characteristics of stope width 8 m.
At depths of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m, there is a large area of tensile stress in the footwall. Additionally, tensile stress is observed in the working face floor at the 150 m section. The maximum tensile stress is located in the footwall, approximately 0.079 MPa (Fig. 6(a)). The minimum principal stress mainly occurs within the pillars between the 150 m sections, with a maximum compressive stress value of 31.58 MPa (Fig. 6(b)). There is bottom heave displacement at the junction of the footwall and the working face roof at the 150 m section, with a maximum heave displacement of approximately 35.26 mm, and the sinking displacement of the working face roof is approximately 42.91 mm (Fig. 6(c)).
According to the stress evolution curve (Fig. 6(d)), as the mining proceeds deeper into the ore body, the maximum tensile stress remains constant while the maximum compressive stress gradually increases. According to the displacement evolution curve (Fig. 6(e)), as the mining proceeds deeper into the ore body, the maximum subsidence of the hanging wall, the roof sinking displacement, the bottom heave displacement of the footwall, and the floor heave displacement gradually increase, but they do not exceed the allowable displacement threshold of 45 mm. Therefore, when the mining width is 8 m, the entire mining process exhibits overall relative stability.
(4) The stope width is 9 m.
After the excavation of the mining stope, the initial stresses in the area are released and redistributed. Tensile stresses are observed in the overlying rock mass at the sections of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m, with the maximum tensile stress occurring in the overlying rock mass at approximately 0.079 MPa (Fig. 7(a)). The minimum principal stress mainly occurs in the pillars between the sections of 150 m, with a maximum value of 33.74 MPa (Fig. 7(b)). The maximum subsidence displacement is mainly observed at the intersection between the overlying rock mass and the roof in the middle of the stope, with a maximum displacement of approximately 45.72 mm. Bottom heave is observed at the junction of the footwall and the working face roof at the 150 m section, with a heave displacement of approximately 40.45 mm (Fig. 7(c)).
Tensile stresses are observed in the footwall at the sections of 310 m, 280 m, 250 m, 220 m, 190 m, and 150 m, along with tensile stresses in the bottom plate of the stope at the section of 150 m. The maximum tensile stress is located in the footwall, approximately 0.080 MPa (Fig. 7(a)). The minimum principal stress mainly occurs in the pillars between the sections of 150 m, with a maximum value of 33.74 MPa (Fig. 7(b)). Bulging displacement is observed at the intersection between the footwall and the roof of the stope at the section of 150 m, with a maximum bulge displacement of approximately 40.45 mm, while the subsidence displacement of the stope roof is approximately 45.72 mm (Fig. 7(c)).
When the mining width is 9 m, as the mining proceeds deeper into the ore body, the maximum tensile stress gradually decreases while the maximum compressive stress increases (Fig. 7(d)). Furthermore, as the mining proceeds deeper into the ore body, the maximum subsidence of the hanging wall, the roof sinking displacement, the bottom heave displacement of the footwall, and the floor heave displacement gradually increase and have already exceeded the allowable displacement limit of 45 mm (Fig. 7(e)). Therefore, when the mining width is 9 m, there is a risk of collapse throughout the mining process. It is therefore recommended to closely monitor specific areas during mining and to implement appropriate support measures to ensure safety and maximize the efficiency of the mining operation.
Stope width comparison and optimization
(1) Maximum principal stress analysis
Except for the mid-section mining stopes at the 150 m level, where the maximum tensile stress of the surrounding rock mass first decreases and then increases with increasing stope width, the maximum tensile stress of the surrounding rock mass at the end of each other mid-section mining stope increases with the stope width. When the stope width increases from 6 m to 9 m, the maximum tensile stress of the surrounding rock mass stays below 90% of the tensile strength of the rock mass. However, when the stope width increases from 8 m to 9 m, there is a slight decrease in the maximum tensile stress, indicating that a stope width of 9 m still maintains good tensile stability.
Fig. 8. Maximum principal stress analysis for different stope widths.
(2) Minimum principal stress analysis
Except for the mid-section mining stopes at the 310 m level, where the maximum compressive stress of the surrounding rock mass remains relatively unchanged with increasing stope width, the maximum compressive stress of the surrounding rock mass at the end of each other mid-section mining stope increases with the stope width. When the stope width increases from 6 m to 9 m, the increase in the maximum compressive stress of the surrounding rock mass is relatively small, and the maximum compressive stress remains within the allowable range. Therefore, considering the maximum compressive stress of the surrounding rock mass, the stope width can be increased to 9 m (Fig. 9).
(3) Maximum displacement analysis
Except for the mid-section mining stopes at the 310 m level, where the maximum vertical displacement of the overlying strata initially increases and then decreases with increasing stope width, the maximum vertical displacement of the overlying strata at the end of each other mid-section mining stope increases with the stope width. When the stope width increases from 6 m to 8 m, there is a significant increase in the maximum vertical displacement of the overlying strata. However, during the process of increasing the stope width to 9 m, the increase in the maximum vertical displacement is relatively small. After all the mid-section mining stopes are completed, with a stope width of 9 m, the maximum vertical displacement of the overlying strata exceeds the permissible limit of 45 mm. This indicates a potential instability of the stope during the entire mining process and poses a high safety risk. Therefore, in order to reduce the safety risks during the mining process, considering the maximum vertical displacement of the overlying strata, it is recommended to adopt a stope width between 6 and 8 m (Fig. 10).
(4) Plastic zone analysis
Based on the simulation results (Fig. 11), it can be observed that the evolution of the plastic zone under different stope widths is generally similar during the mining process. In the initial stage of excavation, only a small amount of tensile plastic zones appears in the upper and lower parts of the stope, resulting in minimal impact on the stability of the stope. As the mid-section mining extends deeper, in addition to the tensile plastic zones in the upper and lower parts of the stope, shear plastic zones also appear. At the same time, partial shear plastic zones occur in the pillars, and the area of the plastic zone in the stope continues to increase. However, due to the different stope widths (Fig. 11(a)-(b), Fig. 11(c)-(d)), the distribution range of the plastic zone varies. With the increase in stope width, the range of the plastic zone gradually expands. When the stope width is 9 m (Fig. 11(d)), some areas of the stope exhibit large plastic zones, indicating a potential instability and a relatively high safety risk during the mining operation. Therefore, considering the distribution of plastic zones around the stope, it is recommended to adopt a stope width between 6 m and 8 m.
Conclusions
The article employed the Mathews stability chart method to calculate the ultimate width of the mining fields in the Halazi iron ore mine. In addition, Flac3D was used to simulate and analyze the mining process under different field widths, obtaining the response characteristics of the corresponding mechanical indicators. Finally, based on the response characteristics of the mechanical indicators and the distribution of the plastic zones, the optimal width of the mining field in the Halazi iron mine was determined to be 8 m.
Fig. 9. Minimum principal stress analysis for different stope widths.
Fig. 7. Mining mechanical response characteristics of stope width 9 m.
Table 1. Physical and mechanical parameters of rock mass.
Table 2. Shape coefficient S.
| 2024-02-15T16:10:41.464Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "5483c4a1fd70eb27956bce02785b6e20c2a19b14",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844024020760/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd84f0515607b3ffa587ed9cd462a019b28f7523",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247608989 | pes2o/s2orc | v3-fos-license | Problems of Internship of Professional Experience in Teaching Mathematics
The purposes of the research were (1) to study the problems of the internship of professional experience of teachers in teaching Mathematics programs and (2) to compare the problems of the internship of teachers with regard to school level and educational degree. Mixed methods research was employed for the study, with quantitative data from 242 sample teachers and interview data from 12 teachers. The research instruments were a questionnaire and an interview form. The statistics used were percentage, mean, standard deviation and t-test. The results of the research were as follows. 1. The findings showed that the problems of the teachers about teaching Mathematics were at a moderate level in total and in five domains, whereas assessment and evaluation was at a low level. 2. The findings indicated that the problems of the secondary school teachers about teaching Mathematics were greater than those of the primary school teachers at the .05 level of statistical significance, whereas the problems of teachers with a bachelor degree and those with a postgraduate degree were not different.
Introduction
Mathematics is one of the major fundamental subjects for improving students' creative thinking skills, analytical skills, planning and decision-making skills, and ability to apply knowledge in practice. Additionally, Mathematics is a basis of many sciences, in particular science and technology. It is obvious that modern technology has greatly changed many fields of our daily life, such as society, education, politics and economics, making it more convenient (Maensnguan, 2012).
Mathematics is a major subject among the core fundamental subjects of the basic education curriculum of 2008. In 2017, the Ministry of Education of Thailand improved the learning indicators and contents of Mathematics. The indicators and contents focus on improving essential skills for the 21st century, consisting of analytical skills, critical thinking skills, problem-solving skills, creative thinking skills, application of technology, communication, and collaboration (Office of the Basic Education Commission of Thailand, 2017). Consequently, under the National Education Act (Thailand), B.E. 2542 (1999), educational institutes have taken the major responsibility for producing professional teachers and improving qualified teachers, including educational personnel, to meet higher national standards. Besides developing human resources of educational personnel, the higher education institutes, both universities and colleges, also find workplaces for the internship of professional experience in teaching of their students. Previous studies reported that the universities and teacher colleges have not emphasized the workplaces for the internship of professional experience of the students. Phu-ngoen, Khotbanthao, and Phothiwat (2013) suggested that Faculties of Education, in line with the policy of producing professional teachers, should realize the importance of strengthening both primary schools and secondary schools, which are the workplaces for the internship of professional experience of the students. Moreover, the mentor teachers of the schools where the students do their professional experience should be qualified to coach the student teachers. Additionally, the studies showed that the networks of schools for professional experience in teaching have not been strengthened enough for the internship of the students.
This study therefore examined the problems about teaching Mathematics of the teachers at the schools and in the school network where Mahasarakham University is responsible for supervision and where the students of the Faculty of Education do their internship. The study will provide useful information for the selection and preparation of appropriate workplaces, mentors, and facilities for the internship of the students.
Research Methodology
Mixed methods research consisting of quantitative and qualitative research was employed with a convergent parallel design, in which the two strands of data are compared to obtain more valid and complete results, following Creswell and Plano Clark (2011). The quantitative data were collected from Mathematics teachers through questionnaires, and the qualitative data were collected in a field study through interviews of the teachers.
Samples
The population was 316 Mathematics teachers from 23 schools in Maha Sarakham, Roi-Et and Kalasin Provinces in the second semester of 2019. The research samples were divided into two groups: 1) 242 teachers of the Mathematics departments of the 23 schools used for the internship of professional experience in the second semester of 2019; the sample size was determined using the Taro Yamane sample size determination table with a 95% confidence level and a margin of error of about ±5% (Yamane, 1967), and the teachers were selected by stratified random sampling; 2) 12 research participants for the qualitative research, namely three Mathematics teachers from three primary schools and nine Mathematics teachers from three secondary schools in Maha Sarakham, Roi-Et and Kalasin Provinces.
Research Instrument
I) The questionnaire was divided into three main parts: 1) a checklist questionnaire for general information; 2) 30 items of a five-point rating scale questionnaire on problems of Mathematics instruction focusing on six areas: curriculum and contents, instructional preparation, teaching management, students, teaching materials, and assessment and evaluation; and 3) an open-ended questionnaire for other problems and suggestions. The index of item-objective congruence (IOC) of the questionnaire, used to assess content validity, was 1.00 (IOC > 0.50). After that, the questionnaire was tried out with 20 participants who were not in the target sample. Then, the discrimination index was analyzed by item-total correlation, and the values for the questionnaire ranged from .38 to .94 (the critical value for a one-tailed test is r > 0.37). Lastly, the reliability index assessed by Cronbach's alpha was .98; an illustrative sketch of these computations is given after the instrument descriptions below.
II) The interview guide covered the major issues of the problems of teaching Mathematics and the guidelines for teaching Mathematics, consisting of curriculum and contents, teaching preparation, instructional management, students, teaching materials, and assessment and evaluation.
Data Collection
Initially, the author made an agreement with the 2nd-year and 5th-year students of the Mathematics Program, Faculty of Education, Mahasarakham University, on the data collection procedure, as follows.
I) The author wrote an official letter for the 5th-year students to ask permission from the school directors in Maha Sarakham, Roi-Et, and Kalasin Provinces, where the students did their professional experience internship, for data collection by questionnaire.
II) The qualitative data were collected by the 2nd-year students of the Mathematics Program, Faculty of Education, Mahasarakham University, who made appointments with the target participants for the interviews.
Data Analysis
1) Quantitative data: 1.1) the general data were analyzed using frequencies and percentages; 1.2) the data on the problems were analyzed using means and standard deviations, with effect sizes interpreted using Cohen's D (Cohen, 1977). 2) Data triangulation was employed for the qualitative analysis across different time periods and different teachers; the author interviewed both primary school and secondary school teachers, and content analysis and descriptive reporting were used.
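The effect sizes reported below as Cohen's D follow the usual definition for two independent groups, stated here for reference, where M_1 and M_2 are the group means and s_p is the pooled standard deviation; by convention, values of about 0.2, 0.5, and 0.8 are read as small, medium, and large effects (Cohen, 1977).

d = \frac{M_{1} - M_{2}}{s_{p}}, \qquad s_{p} = \sqrt{\frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}}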
2) Problems in teaching mathematics
The research findings showed that the overall problem of teaching mathematics was at a moderate level. The five moderately rated areas were instructional management, students, curriculum and contents, teaching materials, and assessment and evaluation, respectively (Table 2). The problems of the teachers regarding teaching and learning mathematics were then compared.
The findings indicated that the overall problem of the secondary school teachers regarding teaching Mathematics was greater than that of the primary school teachers at the .05 level of statistical significance. The problems of the secondary school teachers and the primary school teachers regarding curriculum and contents, teaching preparation, instructional management, and teaching materials differed at the .05 level of statistical significance, whereas the problem of the two groups regarding assessment and evaluation did not differ at the .05 level of statistical significance; Cohen's D ranged from -0.11 to -0.30 (Table 3). The overall problem of the teachers with a bachelor's degree and of those with a postgraduate degree did not differ at the .05 level of statistical significance. The problems of the teachers with a postgraduate degree regarding teaching materials and assessment and evaluation were greater than those of the teachers with a bachelor's degree at the .05 level of statistical significance, whereas the problems of the two groups regarding curriculum and contents, teaching preparation, instructional management, and students did not differ at the .05 level of statistical significance; Cohen's D ranged from -0.07 to -0.26 (Table 4).
Part 2 Qualitative
Based on the interviews with the 12 teachers, the data were analyzed and are presented descriptively as follows.
1.1) Curriculum and contents: a) The teachers were confused about the practical guidelines of the revised Mathematics Curriculum of 2017. b) The contents were arranged and organized inappropriately and discontinuously. c) The contents were complex, difficult, and too extensive to teach. d) The Mathematics textbooks did not cover the core Mathematics curriculum and contained too many pictures.
Examples from the interviews with the teachers: "The complex contents of the new curriculum at each educational level make the students feel bored because the contents are too difficult. As a result, the students have not achieved the learning objectives and indicators." (Interviewee 4) "Some contents of the new curriculum have been deleted; therefore, the instructional management is not efficient and the contents do not provide basic knowledge for the students. The teachers have to spend time reviewing basic knowledge with the students, which disrupts the assigned lesson plans." (Interviewee 6) 1.2) Teaching preparation: a) The teachers made lesson plans that did not suit the curriculum. b) The course descriptions and course structures were not updated. c) The teachers did not have sufficient time to prepare their lessons because of heavy workloads. 1.3) Instructional management: a) The students had different learning backgrounds in both knowledge and skills. b) They were rarely ready to learn. c) They were not enthusiastic about learning or about participating in learning activities. d) The teachers could seldom take care of their students because there were too many students in one class. e) There was not enough time to learn because of the many holidays and extra activities. f) Lastly, the teachers were not skillful in teaching techniques.
"There are many activities in school affected the teaching management in classes. Although the teachers have to ma up classes, the time is not enough for teaching management. These problems have to delete some contents and teach the contents briefly affecting the teaching efficiency." 1.4) Students: a) Most of the students were lack of problem-solving skills, communication skills, mathematical transfer, explaining and understanding mathematical meanings. b) They were crazy in using mobile telephone while they were studying in class. c) They were rarely responsible for works and always delayed to hand in their homework or exercises. They were not courteous and interested in learning Mathematics. They had a negative attitude towards learning Mathematics.
"Most students dislike Mathematics because it is a difficulty subject for them. Additionally, their background knowledge of Mathematics is not sufficient for further study especially plus, minus, division of whole number and fraction." Interviewee 1 "Most students dislike to learn Mathematics because they cannot answer the problems" Interviewee 11 1.5) Teaching materials: a) There were not various and modern learning materials for the students because of financial support. b) There were very few modern equipment and educational technology available for the students such as projector, computer and so on. c) There was not an internet network in school and not accessibility to the internet. d) The equipment was not ready to use for educational purposes such as projector, television, speakers and so on. e) Mathematics textbooks did not cover based on the core Mathematics curriculum, and there were too many pictures rather than details.
"There is very little electronic media or appliances such as projector computer. The teachers have to spend much time in making teaching materials by themselves. Moreover, most teaching materials are appropriate for teaching Mathematics in senior secondary level because the contents of Mathematics are various and abstract, which are difficult in making teaching materials of the teachers." Interviewee 7 "The weak point of using teaching materials is not appropriate and various. Additionally, the teachers only provide workbooks to students because it is very comfortable for teaching and learning Mathematics, including lack of budget of making various teaching materials." Interviewee 10 1.6) Assessment and evaluation: a) Traditional assessment technique was used for learning assessing learning outcomes. b) The criteria for learning assessment were not clear and the assessing instrument was not concrete. c) All learning objectives were assessed and evaluated clearly. d) The teachers were not skillful in creating the criteria and assessing instruments. The teachers did not understand the new techniques for learning assessment precisely.
"Most students always copy their classmate when they have Mathematics test" Interviewee 3 "The teachers are not confident in new assessment and evaluation. However, they usually use various methods for assessment and evaluation such as exercises and test." Interviewee 6 2) The guidelines for teaching Mathematics were as follows.
2.1) Contents:
The contents should be improved and simplified to suit the available time and the students.
2.2) Teaching preparation: Course description and curriculum structure should be improved.
2.3) Instructional management:
There should be extra classes for improving the basic knowledge of the students.
2.4) Students:
There should be more extra activities for improving mathematical skills of the students.
2.5) Teaching materials: Teaching materials should be readily available and easy for the students to find.
2.6) Assessment and evaluation: There should be many different techniques of assessment and evaluation.
Discussion
1) The problems of the teachers about teaching mathematics. 1.1) The overall problem of the teachers regarding curriculum and contents was at a moderate level. The problem may be caused by the revision of the curriculum, which kept the same contents of the basic education level while reordering them, deleting some contents, and adding new ones; these factors kept the problems of teaching and learning Mathematics at a moderate level. Aksorn Charoentat ACT Co., Ltd. (2017) stated that the improvement of the Basic Education Core Curriculum of 2008 focused on simplifying, deleting, adding, and reordering the contents to suit the students and relate to their daily life. The Institute for the Promotion of Teaching Science and Technology analyzed the primary information for designing and developing a draft of the core basic education curriculum by working with experts and teachers through public hearings. The Institute also worked collaboratively with Cambridge International Examinations (CIE) of the United Kingdom, focusing on the three major areas of curriculum management and assessment and evaluation. The curriculum reflected new teaching techniques and complete contents based on international standards. The improved curriculum emphasized mathematical skills and the important skills of the 21st century relating to real life. Consequently, the curriculum has been designed and developed appropriately for the students and the actual situation (The Institute for the Promotion of Teaching Science and Technology, 2018). The research results were consistent with the study of Pramarn and Pramarn (2016), which found that the average problem of the primary school teachers of Physical Education in Ayutthaya and Angthong Provinces regarding implementing the Basic Education Core Curriculum of 2008 was at a moderate level; the teachers did not understand the curriculum precisely.
1.2) The overall problem of the teachers regarding teaching preparation was at a moderate level. The problem may be related to the teachers' lesson planning; teachers who prepare well-organized lesson plans, learning activities, and teaching materials are confident in their teaching (Jaithiang, 2010). It is important that the teachers study the major components of lesson plans and design them appropriately for the students, including the teaching technique, teaching materials, and assessment and evaluation (Vanichwatanavorachai, 2015). The research results were consistent with the study of Seesamer and Khantoa (2012), which revealed that the average opinion of the school administrators and teachers of Khon Kaen Basic Educational Service Area Office 4 about the problems of implementing the basic education curriculum of 2008 was at a moderate level. Suaeram (2009) asserted that the problem of the teachers in schools under Buriram Basic Educational Service Area Office 2 regarding teaching preparation for Mathematics was at a high level.
1.3) The problem of the teachers regarding instructional management was at a moderate level. The result may be related to teacher training and the development of modern teaching techniques for instructional management. Additionally, the Ministry of Education has reformed its strategies for producing and improving efficient teachers. The National Strategy (2018-2037) has focused on human resource development by providing financial support of 10,000 baht per person (Teachers and Basic Education Personnel Bureau, 2018). Moreover, the Office of the Basic Education Commission of Thailand (2019) has continuously organized training courses for improving the efficiency and competency of school administrators, teachers, and educational personnel based on their problems and needs. Additionally, teacher development networks have been established for improving the efficiency and competency of professional school administrators, teachers, and educational personnel, focusing on authentic assessment of the achievement and work of the students. Wiratkasem (2014) stated that the problem of the teachers in Chonburi City Municipality schools regarding implementing the Basic Education Core Curriculum was at a moderate level.
1.4) The problem of the teachers in the professional experience internship schools regarding teaching Mathematics to primary and secondary school students was at a moderate level. Both the qualitative and the quantitative data indicated that the primary and secondary school students had very few mathematical skills and a negative attitude towards learning Mathematics. The results may be caused by the conventional techniques of the teachers, which focus on learning achievement rather than on mathematical process skills and abilities. Khruekham and Umpapol (2014) claimed that the teacher-centered model was generally used for instructional management; additionally, the teachers rarely used teaching materials in classes of students with different achievement levels.
1.5) The overall problem of the teachers regarding teaching materials for Mathematics was at a moderate level. Internet networks were not available in some of the schools where the students of Mahasarakham University did their professional experience internship, and the results may be related to the shortage of, and inefficient, internet networks for online teaching and learning. Ghavifekr, Kunjappan, Ramasamy, and Anthony (2016) stated that one of the major problems of online teaching and learning was disconnection from, and inaccessibility to, the internet network. 1) The problem may also be caused by the teachers' limited knowledge of the application of information communication technology (ICT) for instructional purposes. Khanna and Prasad (2020) asserted that both teachers and students encountered problems with online learning under COVID-19 conditions: they could not access the internet network, and some students did not know how to use digital technology for online learning. 2) There were few varied teaching materials and little modern digital technology for online learning because of the shortage of educational budget support. Kensri et al. (2020) stated that the major problems of teaching Physical Education in primary schools were insufficient sports equipment for the students and a lack of educational budget support.
1.6) The overall problem of the teachers regarding learning assessment and evaluation in Mathematics was at a low level. The results may be related to the improvement of assessment and evaluation methods towards authentic assessment based on the Basic Education Core Curriculum of 2008 (Basic Education Curriculum Development of 2017). Pramarn and Pramarn (2016) reported that the overall problem of primary school teachers regarding assessment and evaluation in Ayutthaya and Angthong Provinces under the Basic Education Core Curriculum of 2008 was at a moderate level.
2) Comparison of the problems about teaching mathematics. 2.1) The overall problem of the secondary school teachers regarding teaching Mathematics was greater than that of the primary school teachers at the .05 level of statistical significance. The results may be related to the quality of the contents that the secondary school teachers have to teach. The contents of both the fundamental courses and the extra courses have been improved by editing, deleting, adding, and reorganizing them to suit the students, based on the Basic Education Core Curriculum of 2008. The contents of the fundamental courses consist of eight topics of number and algebra and two topics of measurement and geometry, and the contents of the extra courses on statistics and probability consist of four topics (Office of the Basic Education Commission of Thailand, 2017).
2.2) The overall problem of the teachers with a postgraduate degree regarding teaching materials was greater than that of the teachers with a bachelor's degree at the .05 level of statistical significance. The results may be related to their knowledge and skills in using information communication technology (ICT), which were greater than those of the teachers with a bachelor's degree; the teachers with a bachelor's degree used very few modern and varied assessment methods. Wirakaserm (2014) argued that the overall attitude of secondary school and primary school teachers with different educational backgrounds towards the problems of implementing the Basic Education Core Curriculum of 2008 was not different.
3) The problems of teaching Mathematics according to the qualitative data were as follows: 3.1) Curriculum and contents: a) The teachers were confused about the practical guidelines of the revised Mathematics Curriculum of 2017. b) The contents were arranged and organized inappropriately and discontinuously. c) The contents were complex, difficult, and too extensive to teach. d) The Mathematics textbooks did not cover the curriculum and contained too many pictures.
3.2) Teaching preparation: a) The teachers made lesson plans that did not suit the curriculum. b) The course descriptions and course structures were not updated. c) The teachers did not have sufficient time to prepare their lessons because of heavy workloads. 3.3) Instructional management: a) The students had different learning backgrounds in both knowledge and skills. b) They were rarely ready to learn. c) They were not enthusiastic about learning or about participating in learning activities. d) The teachers could seldom take care of their students because there were too many students in one class. e) There was not enough time to learn because of the many holidays and extra activities. f) Lastly, the teachers were not skillful in teaching techniques.
3.4) Students: a) Most of the students lacked problem-solving skills, communication skills, and the ability to transfer, explain, and understand mathematical meanings. b) They were absorbed in using their mobile phones while studying in class. c) They were rarely responsible for their work and were always late in handing in their homework or exercises. They were not courteous or interested in learning Mathematics and had a negative attitude towards learning Mathematics.
3.5) Teaching materials: a) Varied and modern learning materials were not available for the students because of limited financial support. b) Very little modern equipment and educational technology, such as projectors and computers, was available for the students. c) There was no internet network in the school and no access to the internet. d) The equipment available, such as projectors, televisions, and speakers, was not ready for educational use. e) The Mathematics textbooks did not cover the core Mathematics curriculum and contained too many pictures rather than details.
3.6) Assessment and evaluation: a) Traditional assessment techniques were used for assessing learning outcomes. b) The criteria for learning assessment were not clear and the assessment instruments were not concrete. c) Not all learning objectives were assessed and evaluated clearly. d) The teachers were not skillful in creating assessment criteria and instruments and did not understand the new techniques for learning assessment precisely.
Suggestions for Practical Application
1) The school administrators should work with organizations involved in human resource development to organize and provide training courses for Mathematics teachers.
2) Training courses on teaching techniques for instructional management, lesson planning, and assessment and evaluation should be provided for both primary school and secondary school teachers of Mathematics.
3) There should be fundamental courses and extra classes for the students who have different educational backgrounds.
4) The teachers should employ various methods and learning activities to improve the attitude of the students towards Mathematics.
5) The teachers should use teaching materials that they can source themselves and let the students learn by doing.
6) Various methods of learning assessment and evaluation, such as concrete and authentic assessment and rubric-based criteria, should be employed for the learning outcomes of mathematics. Training courses on new approaches to measurement and evaluation should be provided for Mathematics teachers.
Suggestions for Further Study
1) Active learning models should be studied further with Mathematics teachers and with the mentors of students who do their professional experience in teaching Mathematics.
2) Research and development of mathematical process skills should be conducted with the teachers at the schools where the students of the Mathematics Program do their professional experience. | 2022-03-23T15:16:29.597Z | 2022-03-21T00:00:00.000 | {
"year": 2022,
"sha1": "f2da2832ffd14ad85d8fba9e452e6bb2da27099d",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ies/article/download/0/0/46965/50224",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "95dedaecd34200e0ca3a429b24f40dc85cd59c30",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"extfieldsofstudy": []
} |
233380786 | pes2o/s2orc | v3-fos-license | The potential impact of systemic anti-inflammatory therapies in psoriasis on major adverse cardiovascular events: a Korean nationwide cohort study
This nationwide population-based cohort study aimed to investigate the impact of systemic anti-inflammatory treatment on the major adverse cardiovascular events (MACE) risk in patients with psoriasis from January 2006 to December 2018, using a database provided by the Korean National Health Insurance Service. Patients were grouped based on the following treatment modalities: biologics, phototherapy, methotrexate, cyclosporine, and mixed conventional systemic agents. Patients who had not received any systemic treatment were assigned to the control cohort. The incidence of MACE per 1000 person-year was 3.5, 9.3, 12.1, 28.4, 39.5, and 14.5 in the biologic, phototherapy, methotrexate, cyclosporine, mixed conventional systemic agents, and control cohorts, respectively. During the 36-month follow-up, the cumulative incidence of MACE in the phototherapy and biologic cohorts remained lower than that of other treatment modalities. Cyclosporine (hazard ratio (HR) = 2.11, 95% confidence interval (CI) = 1.64–2.71) and mixed conventional systemic agents (HR = 2.57, 95% CI = 2.05–3.22) treatments were associated with increased MACE risk. Methotrexate treatment was not associated with MACE. Our finding demonstrates that treatment modalities may affect cardiovascular comorbidities in patients with psoriasis. Thus, an appropriate combination of anti-psoriatic therapies should be considered to manage patients with high cardiovascular risk. IRB approval status: Waiver decision was obtained by the institutional review board, Konkuk University Hospital, Seoul, Republic of Korea (KUH1120107).
Psoriasis is a systemic inflammatory disease that affects approximately 1-3% of the adult population 1 . Epidemiological studies have shown an increased prevalence of cardiovascular (CV) diseases and their metabolic risk factors in patients with psoriasis 2 . Moreover, psoriasis has been suggested as an independent risk factor of major adverse CV events (MACE) [3][4][5][6] . The mechanism linking CV disease and psoriasis remains unclear. The pathophysiology of psoriasis is characterized by T-cell-mediated chronic inflammation, which contributes to the pro-atherogenic environment of the disease 7,8 .
In the last few decades, psoriasis treatment has significantly progressed with the identification of multiple new therapeutic targets and the introduction of biological therapies such as tumor necrosis factor-alpha (TNF-α), interleukin (IL)-12/23, and IL-17 inhibitors. Some systemic drugs, specifically biologics, which act by reducing disease-specific inflammation, are expected to improve the CV risk in psoriasis 9 . However, the association between MACE and each anti-inflammatory treatment for psoriasis remains unclear 10,11 . The selection of appropriate treatments for psoriasis has become a complex process considering the coexisting CV comorbidities and the consequent impact of therapeutic agents. Therefore, the association of each therapeutic agent with MACE must be clarified. In line with this, this population-based nationwide cohort study aimed to examine the impact of each systemic anti-inflammatory treatment modality on the risk of MACE in patients with psoriasis.
Methods
Data source. This nationwide population-based cohort study was conducted using the customized database of the National Health Insurance Service (NHIS) of Korea (NHIS-2020-1-062). As national insurance in South Korea is mandatory by law, the NHIS database contains data on all health care use of the entire population. The NHIS established the population-based database in 2002, which has registered information on every charged medical and pharmacy claim for all health care use. Information regarding mortality was obtained from the National Death Registry. Personal information was deidentified and kept protected. The institutional review board of our institution approved a waiver for our study (KUH1120107). Each patient was assigned to one of six mutually exclusive cohorts (Fig. 1). Patients with psoriasis who received at least one dose of etanercept, infliximab, adalimumab, or ustekinumab were assigned to the biologic cohort, regardless of the discontinuation or combined use of other oral agents or phototherapy. Biologic-naive patients who received conventional systemic agents (i.e., cyclosporine, methotrexate, or acitretin) were assigned to the conventional systemic cohort. Among the patients in the conventional systemic cohort, those treated with cyclosporine alone were assigned to the cyclosporine cohort. Patients who received methotrexate exclusively were assigned to the methotrexate cohort, while the remaining patients were assigned to the mixed conventional systemic cohort. Patients who were not treated with biologics or conventional systemic agents but received phototherapy were assigned to the phototherapy cohort. The biologic, cyclosporine, methotrexate, mixed conventional systemic, and phototherapy cohorts were considered to have moderate-to-severe psoriasis, in the sense that these patients had received some systemic treatment. Patients who did not receive any systemic treatment but received only symptomatic or topical treatments were assigned to the mild control cohort. SAS software (SAS Institute Inc., Cary, NC, USA; https://www.sas.com) was used for all statistical analyses. Descriptive statistics were used for all variables. P-values were calculated using analysis of variance for continuous variables and the χ2 test for categorical variables. The incidence rates of the study endpoints were calculated as events per 1000 person-years (PYs). Multivariate Cox proportional hazard models were used to compare the MACE hazard among the cohorts, adjusting for the potential confounding factors of age, sex, and CV risk factors with significant differences among the cohorts during the baseline period. The results were reported as hazard ratios (HRs) with their corresponding 95% confidence intervals (CIs) and P-values. P-values of < 0.05 were considered statistically significant.
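For reference, the incidence rates and hazard ratios reported in the Results follow the usual epidemiological definitions, stated below in their standard form; they are given only as a reminder of the notation and are not specific to this study's implementation.

\text{IR} = \frac{\text{number of incident MACE}}{\text{total person-years at risk}} \times 1000, \qquad h(t \mid x) = h_{0}(t)\, e^{\beta_{1}x_{1} + \dots + \beta_{p}x_{p}}, \qquad \text{HR}_{j} = e^{\beta_{j}}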
Results
Demographic characteristics. A total of 911,148 patients with psoriasis were included, of whom 9.7% (n = 88,242) were treated with systemic anti-inflammatory treatments and classified into the moderate-to-severe disease group (Table 1). Among the moderate-to-severe disease group, 2.1% (n = 1817) belonged to the biologic cohort, 2.9% (n = 2587) to the conventional systemic cohort, and the remaining 95% (n = 83,838) to the phototherapy cohort, respectively. The conventional systemic cohort was further divided into three sub-cohorts as follows: the methotrexate cohort included those who were treated with methotrexate only (n = 1153); the cyclosporine cohort included those treated with cyclosporine only (n = 755); and the mixed conventional systemic cohort included those treated with acitretin alone or at least two of the following drugs: methotrexate, cyclosporine, or acitretin (n = 679).
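As a simple arithmetic check, the cohort sizes above are internally consistent:

88{,}242/911{,}148 \approx 9.7\%;\quad 1{,}817/88{,}242 \approx 2.1\%;\quad 2{,}587/88{,}242 \approx 2.9\%;\quad 83{,}838/88{,}242 \approx 95.0\%;\quad 1{,}153 + 755 + 679 = 2{,}587.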
All cohorts comprised a smaller proportion of women (P < 0.001). The mean patient age was 46.3 ± 16.1 years. Patients in the biologic cohort were the youngest among all the groups and were 10 years younger than those in the control cohort (P < 0.001). We further divided the patients into three age groups: ≤ 40, 41-60, ≥ 61 years. The older age group showed a higher cumulative incidence of MACE and composite CV outcomes for all cohorts, excluding the incidence of CV outcome for the cyclosporine cohort aged 41-60 years (2.3 per 1000 PYs). The cyclosporine and mixed conventional systemic cohorts had more than three times higher incidence of composite CV outcome (3.1 and 3.0 per 1000 PYs) than the control cohort aged ≤ 40 years. Meanwhile, the biologic cohort had the lowest incidence of composite CV outcome (12.1 per 1000 PYs) among the study cohorts aged ≥ 61 years.
The stratified cumulative incidence of MACE for each treatment cohort for every 3-month period is shown in Fig. 2. The 36-month cumulative incidence was 1.0, 2.7, 3.6, 8.1, and 11.0% for the biologic, phototherapy, methotrexate, cyclosporine, and mixed conventional systemic agents cohorts, respectively, compared to 4.2% in the mild control cohort. The phototherapy and biologic cohorts showed significantly lower cumulative incidence than the control cohort at any point during the observation periods. Conversely, the cumulative incidence of MACE in the cyclosporine and mixed conventional systemic cohorts remained significantly higher than that in the control cohort until the end of the observation period. However, in this study, the cumulative incidence was not significantly different between the methotrexate and control cohorts.
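As a rough consistency check, assuming an approximately constant event rate over the 36 months of follow-up, the cumulative incidence implied by an incidence rate IR (per person-year) is

\text{CI}(36\ \text{months}) \approx 1 - e^{-3\,\text{IR}},

which for the biologic cohort (3.5 per 1000 PYs) gives about 1.0% and for the mild control cohort (14.5 per 1000 PYs) about 4.3%, close to the observed 1.0% and 4.2%; this approximation is illustrative only and ignores censoring and competing risks.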
Multivariate analysis. Multivariate Cox regression analyses were performed to determine the association of MACE to each treatment modality ( Table 3). The phototherapy, biologic, and methotrexate cohorts were not associated with a statistically significant effect regarding MACE when compared with the control cohort. Meanwhile, the cyclosporine (HR = 2.11; 95% CI = 1.64-2.71) and mixed conventional systemic cohorts (HR = 2.57; 95% CI = 2.05-3.22) had a higher MACE risk than the control cohort after adjusting for age, sex, and baseline comorbidities.
We further assessed the impact of biologics on the incidence of MACE compared to conventional systemic agents (Fig. 3). The MACE risk associated with biologic treatment tended to decrease over time. Three years after treatment initiation, biologic treatment was associated with a lower MACE risk (HR = 0.46; 95% CI = 0.29-0.74) than treatment with conventional systemic agents.
Discussion
This cohort study assessed MACE risk according to the treatment modalities of phototherapy, biologic, and conventional systemic agents. Herein, the phototherapy and biologic cohorts showed a lower incidence of MACE than the control cohort, and the difference in the cumulative incidence remained significant during the 36-month follow-up period. The cyclosporine and mixed conventional systemic treatments were significantly associated with increased MACE risk. Methotrexate was not associated with MACE in this study. The biologic cohort showed the lowest incidence of MACE among all the cohorts. In the present study, the incidence of MACE in the biologic cohort was 3.5 per 1000 PYs. This finding is similar to that of other studies that showed that biologic therapies were associated with lower CV risk than other anti-inflammatory treatments [12][13][14] .
Another study revealed that patients with psoriasis treated with TNF-α inhibitor had a lower risk of a CV event compared to patients treated with phototherapy (adjusted hazard ratio 0.77, P < 0.05) 15 . A study conducted in Denmark showed that the incidence was 3.49 per 1000 PYs for a composite CV endpoint in patients with psoriasis treated with TNF-α inhibitors 9 . This is also comparable to the result of a study based on data from the Psoriasis Longitudinal Assessment and Registry, that is, 3.6 per 1000 PYs in patients with psoriasis treated with biologics (n = 7476) or conventional systemic therapies (n = 2019) 16 .
In this study, no coronary arterial disease was recorded in patients treated with biologics. One cardiac arrest case (0.1%) and 10 stroke cases (0.6%) were associated with biologic treatment, and both showed the lowest incidence rates (0.2 and 1.8 per 1000 PYs, respectively). More than half of MACE in the biologic cohort were observed within 1 year (55%), which was higher than that in the control cohort (35%). The difference in the incidence of composite CV outcomes between the biologic and control cohorts was the greatest in the group aged ≥ 60 years (12.1 vs. 26.8 per 1000 PYs), suggesting that the CV protective effect of biologics could be more potent in older individuals. In addition, the most significant MACE risk reductions were observed 3 years after biologic initiation compared to those after conventional systemic treatment (P = 0.001). This result implies that biologics that target specific cytokines associated with psoriatic inflammation 17 may contribute to reducing MACE. However, further investigation with prolonged study periods for individual biological drug classes is required to clarify the association between biologic treatment and MACE risks.
In the present study, the phototherapy treatment was possibly associated with decreased MACE risk. Specifically, the cumulative incidence of MACE was significantly lower with phototherapy than in the control cohort during the study period (P < 0.001). In addition, the phototherapy cohort showed a lower incidence rate of acute coronary syndrome and stroke than the control cohort (0.2 and 4.7 per 1000 PYs, respectively). However, based on multivariate Cox regression analyses, phototherapy was not associated with decreased MACE risk compared with the control cohort. Preliminary studies have shown that phototherapy may reduce some inflammatory cytokines; however, there is little evidence regarding a decreased CV risk 18 . Our results suggest that phototherapy may have a positive impact on CV risk in patients with psoriasis.
Methotrexate was not associated with MACE in this study. This result differs from results of previous studies that reported an association between methotrexate treatment and reduced CV risk among patients with inflammatory diseases 19,20 . However, our result may indicate that methotrexate treatment in patients with moderateto-severe psoriasis may reduce the MACE risk to at least a level comparable to that in patients with mild disease. This is due to the fact that patients with severe psoriasis show higher CV risk than those with mild disease 21 . In addition, the assessment of CV endpoints alone reveals that the methotrexate cohort has the lowest incidence among all the study cohorts in the groups aged ≤ 60 years. Also, coronary arterial disease and cardiac arrest were not related to methotrexate treatment in the present study. Further research is still needed on the effect of methotrexate on CV events.
Treatment with cyclosporine or mixed conventional systemic agents was associated with a significant increase in the incidence of MACE after adjusting for sex, age, and baseline comorbidities. Although the cyclosporine cohort showed a lower incidence of CV outcomes than the control cohort (7.6 vs. 8.4 per 1000 PYs), caution is needed to interpret these results. This is because these results may be due to the abrupt decrease in CV events among those in the group aged 41-60 years (2.3 per 1000 PYs), which had a reduced rate of CV events (n = 2) and was consistent with the tendency of increased CV outcomes in older patients in all the other study cohorts.
Specifically, the mixed conventional systemic cohort showed the highest incidence of cardiac arrest and stroke among the cohorts (1.0 and 14.6 per 1000 PYs, respectively). Moreover, the stroke risk was significantly higher in the mixed conventional systemic cohort than in the control cohort after adjusting for age, sex, and comorbidities (HR = 1.60; 95% CI = 1.11-2.31). The cyclosporine cohort had a higher rate of ischemic stroke than the control cohort. For all the age groups, the cyclosporine and mixed conventional systemic cohorts showed a greater than three-fold incidence of CV outcomes in the groups aged ≤ 40 years than in the control cohort. This result may be indicative of the importance of considering CV complications when choosing early systemic anti-inflammatory treatments in younger patients. Previous studies reported that cyclosporine could increase blood pressure 22,23 , total cholesterol levels 24 , and serum creatinine levels 25 . Retinoids, including acitretin, are also associated with increased serum cholesterol and triglyceride levels 26 . This study revealed that caution should be exercised when prescribing cyclosporine or acitretin to patients with high CV comorbidities.
Limitations
This study has several limitations. First, because patients with certain risk profiles could have been funneled into specific treatment groups, confounding by indication was possible. To overcome this limitation, we considered baseline comorbidities such as hypertension, dyslipidemia, and renal disease as possible variables that could affect the choice of treatment modality. However, potentially essential baseline comorbidities such as obesity and liver diseases, social factors, and the dose and duration of each treatment modality were not evaluated in this study. In addition, we excluded patients who were diagnosed as having the CV outcomes of interest before the index date for a clear assessment of the outcomes. However, this strict exclusion may have resulted in a lower incidence of CV outcomes in each cohort than typically expected in patients with psoriasis. Moreover, multivariate Cox regression analyses could not be performed for the CV-only outcomes because of the small number of events.
Also, age, exposure to each treatment modality, and the baseline characteristics of the patients were analyzed using a fixed model, which limited the accuracy of the allocations of the exposure levels and outcomes in this study. As age is an important confounding factor in CV diseases, we not only included it as a variable for multivariate analysis but also subdivided the patients into three age groups to further exclude the influence of age. The relatively short 3-year follow-up periods also reduced the potential limitation of using age as a fixed variable, although the shorter follow-up period may also constitute a potential limitation of the study. Additionally, the treatment window, that is, treatment discontinuation after the last claimed treatment, was not considered in this study. This might have been particularly limiting regarding the results in the biologic cohort because the biologics were typically prescribed to patients who had already received phototherapy or conventional systemic agents but had insufficient response to treatment or suffered adverse effects.
Furthermore, clinical data, including laboratory results, are not included in the NHIS database. We were therefore unable to evaluate disease severity or changes in laboratory results.
Finally, the control group was evaluated from the time of diagnosis; thus, theoretically, the duration of the systemic inflammation related to psoriasis was shorter. This might be another limitation of the study, as it caused difficulty in determining whether the increased CV risk was due to the treatment modality or disease severity. However, although our study could not provide accurate interpretation regarding increased CV risk, our results showed that the effectiveness of some systemic treatment modalities, such as cyclosporine or acitretin, for controlling inflammation was not superior to that of other treatment modalities. Furthermore, we could have evaluated whether adequate systemic treatment might lower the CV risk of patients with moderate-to-severe disease compared with those with relatively short disease durations by including patients recently diagnosed with psoriasis.
Conclusions
This is a population-based nationwide cohort study on the CV risk associated with systemic treatments for psoriasis in Korea. In this study, phototherapy and biologic treatments were associated with decreased incidence rates of MACE. The cyclosporine or mixed conventional systemic agents were associated with an increase in the incidence of MACE; however, methotrexate was not associated with MACE in this study.
Psoriasis and atherosclerosis are chronic inflammatory diseases with similar immune-inflammatory mechanisms. Therefore, patients with psoriasis have high CV morbidity and mortality. Our study showed that adequate anti-inflammatory treatment would decrease the incidence of MACE in patients with moderate-to-severe psoriasis. We should consider an appropriate combination of anti-psoriatic therapies based on each patient's CV comorbidity to minimize the MACE risk. Randomized trials are required to evaluate the CV safety and efficacy of systemic anti-psoriatic therapies. | 2021-04-23T06:17:03.286Z | 2021-04-21T00:00:00.000 | {
"year": 2021,
"sha1": "e34b3a1edf85333614cb742999bbbee1d6c5c819",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-87766-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b18554e1f83fbf047a9b5f3c2a85bbff48f40c20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2301662 | pes2o/s2orc | v3-fos-license | Expression of the Mycobacterium bovis P36 gene in Mycobacterium smegmatis and the baculovirus/insect cell system
In the present study we evaluated different systems for the expression of mycobacterial antigen P36 secreted by Mycobacterium bovis. P36 was detected by Western blot using a specific antiserum. The P36 gene was initially expressed in E. coli, under the control of the T7 promoter, but severe proteolysis prevented its purification. We then tried to express P36 in M. smegmatis and insect cells. For M. smegmatis, we used three different plasmid vectors differing in copy number and in the presence of a promoter for expression of heterologous proteins. P36 was detected in the cell extract and culture supernatant in both expression systems and was recognized by sera from M. bovis-infected cattle. To compare the expression level and compartmentalization, the MPB70 antigen was also expressed. The highest production was reached in insect cell supernatants. In conclusion, M. smegmatis and especially the baculovirus expression system are good choices for the production of proteins from pathogenic mycobacteria for the development of mycobacterial vaccines and diagnostic reagents.
Introduction
The purification and characterization of individual mycobacterial antigens are essential for the understanding of the pathogenic mechanisms of mycobacteria and the immune response against them. This may also contribute to the knowledge of the virulence of other intracellular bacteria in which cellular immunity is also involved. The extremely long duplication time and the virulence of mycobacteria have prevented for years the identification of antigens and virulence factors. The application of the tools of molecular biology has produced important progress in the knowledge of Mycobacterium tuberculosis and M. leprae antigens and of the immune response against them (1,2). Expression of mycobacterial antigens and virulence factors in a nonpathogenic, fast growing organism such as Escherichia coli offers potential advantages over their purification from pathogenic mycobacteria, such as safer working conditions and more rapid results. In addition, expression can be enhanced by placing the gene under the control of a strong promoter or by employing useful protein domains or peptides. However, some diagnostic assays based on E. coli-cloned mycobacterial antigens were reported to be less sensitive than tests in which the antigen is directly purified from a mycobacterium (3). Post-translational modifications such as glycosylation, which has been reported in mycobacteria (4,5) but not in E. coli, have been proposed as an explanation for this difference (3). In this respect, M. smegmatis (6,7) and the baculovirus/insect cell system (8) seem to be more appropriate hosts for the expression of mycobacterial antigens for several reasons: the transcriptional and translational machinery of M. smegmatis and that of the slow growing mycobacteria may be similar because both bacteria belong to the same genus. Moreover, protein from other mycobacteria expressed in M. smegmatis may undergo proper modification and folding (6). On the other hand, the baculovirus system may reach a high level of expression, up to 50% of the total proteins in infected cultures (8), and may induce various modifications. Finally, a reduced reaction of antibodies or T cells with host antigens may be obtained. Human or cattle sera frequently have high titers of antibodies against E. coli proteins.
In this study we investigated the P36 antigen, a 36-34 kDa protein that was identified and characterized from M. bovis by our group (9) and from M. tuberculosis by Berthet et al. (10). One of our aims was to purify this protein to study the immune response against P36 and to use it in diagnostic assays. When we expressed P36 in E. coli the protein was produced but highly degraded. In this study we describe the cloning and expression of the P36 gene in M. smegmatis and in the baculovirus/insect cell system. Because of the importance of the MPB70 antigen for the diagnosis of bovine tuberculosis, its gene was also cloned in M. smegmatis.
Bacteria, viruses, insect cells and media
The Escherichia coli strains used in this work were E. coli DH5α and E. coli BL21(DE3), obtained from Life Technologies (Gaithersburg, MD). They were grown in LB media supplemented with ampicillin (100 µg/ml) when required. M. smegmatis mc²155 was used as the mycobacterial host for gene expression (11). M. bovis AN5 was used as the source of native P36 and MPB70. Mycobacteria were cultured in MADC-TW media (11) and M. smegmatis was also cultured in 3% tryptic soy broth (Difco, Detroit, MI). AcMNPV was used as the wild-type baculovirus. Sf9 insect cells were grown in serum-free Sf 900 II medium (Life Technologies) at 27°C in 25-cm² flasks. All manipulations involving M. bovis were conducted under P3 containment.
Cloning procedures
The P36 gene was inserted into the pYUB18, pMV261 and pYUB178 shuttle vectors described below. In the following constructions the P36 gene was obtained from pMBA123, a plasmid containing the P36 gene and regulatory sequences inserted into pBluescript KSII (9). The P36 gene and regulatory sequences were released with XhoI (NEB, Beverly, MA) from pMBA123, filled in with the Klenow fragment of DNA polymerase I and purified from agarose gels with Gene Clean (Bio 101, La Jolla, CA). The fragment was ligated to the pYUB18 (11) vector digested with BamHI and filled in with the Klenow fragment, giving rise to pMBA60, or to the EcoRV-digested pYUB178 vector (12), giving rise to pMBA62.
Plasmid pMBA123 was digested with the enzymes XhoI and RsaI, yielding a 1.0-kb fragment that contains the P36 gene. The fragment was purified from the agarose gel as described above and cloned in the pMV261 vector (12) previously digested with SalI and PvuII. The resulting plasmid was called pMBA61.
The MPB70 gene was cloned in pMV261 as follows: a 700-bp band was amplified from the M. bovis genome using primers D (5'-CAGCAAGGGGCTACAGGTTT-3') and R (5'-CTAATGCCTCCGGCGTAATC-3'), based on the MPB70 sequence (13). The amplification conditions were: 94°C for 2 min, followed by 30 cycles of 94°C for 30 s, 55°C for 2 min and 72°C for 2 min. The amplification product was detected in agarose gels and purified by Wizard PCR (Promega, Madison, WI). The fragment ends were phosphorylated with polynucleotide kinase (NEB) and ligated to the vector pMV261 digested with EcoRV, to produce the plasmid called pMBA77.
The P36 gene was cloned in the baculovirus vector pVL1393 (14) as follows: the P36 gene was amplified from pMBA123 using the synthetic oligonucleotides bac pst (5'-AACTGCAGATTATGCCGAACCGACGCCGACGC-3') and bac xba (5'-GCTCTAGATTAGGCGACCGGCACGGTGATTG-3'), containing the PstI and XbaI cleavage sites, respectively. The resulting PCR product of the expected molecular weight was purified by agarose gel electrophoresis, digested with the mentioned enzymes and inserted into the corresponding sites of pVL1393 under the control of the polyhedrin protein promoter, resulting in pMBA90.
pMBA125 has the same insert as pMBA123, but in the opposite direction so that expression will be under the direction of the T7 promoter.
The correct orientation and integrity of inserts from plasmid constructions were confirmed by restriction analysis, PCR amplification and sequence analysis. Plasmids were purified with the Wizard mini preps kit (Promega). The general characteristics of the plasmid constructs obtained are shown in Table 1.
Transformation of Mycobacterium smegmatis
Competent cells of M. smegmatis strain mc²155 were prepared and transformed as described by Jacobs et al. (11).
Preparation of cell fractions
E. coli cultures were induced for 2 h by the addition of 10 mM isopropyl-β-thiogalactopyranoside (IPTG). For E. coli BL21, 100 µg/ml rifampicin was added at the time of IPTG induction. Cells were harvested by centrifugation and resuspended in loading buffer for PAGE (2% sodium dodecyl sulfate (SDS), 0.125 M Tris-HCl, pH 6.8, 1% 2-mercaptoethanol, 0.02% bromophenol blue, and 10% glycerol). Cell extracts were obtained by boiling for 5 min. Cell extracts from M. smegmatis or insect cells were also obtained by centrifugation and boiling in loading buffer for PAGE. Proteins from culture supernatants were precipitated by the addition of up to 10% trichloroacetic acid and resuspended in 1/100 of the original volume in loading buffer for PAGE.
Western and Southern blots
Western and Southern blots were performed as previously described (15,16).
Transfection of insect cells
Transfection of insect cells to produce recombinant baculoviruses was performed as described by O'Reilly (17). Sf9 cells (17) were infected at a multiplicity of infection (m.o.i.) of 5.
Table 1 - Main features of plasmid constructions. (1) The insert orientation is opposite to that of pMBA123.
Expression of P36 gene in E. coli
In a previous study (9) we reported the expression of the P36 gene in E. coli, but severe degradation prevented P36 purification from E. coli cell extracts (strain DH5α). In order to obtain a high P36 expression level and stability in E. coli, plasmid pMBA125 was introduced into the E. coli strain BL21(DE3). This plasmid (pMBA125) carries the P36 gene downstream of the T7 promoter. The E. coli BL21(DE3) strain bears the gene for T7 RNA polymerase from phage T7 under the control of an IPTG-inducible promoter. In addition, E. coli BL21(DE3) lacks the Lon (18) and OmpT (19) proteases, which especially degrade recombinant products. Western blot assays using rabbit anti-P36 sera (Figure 1) demonstrated higher P36 expression levels and a more stringent degree of IPTG regulation in E. coli BL21(DE3) than in E. coli DH5α. However, the degradation increased with the production level, because only a 29-kDa degradation product is seen in E. coli DH5α, while several bands are demonstrated in E. coli BL21(DE3).
Cloning of P36 and MPB70 genes in M. smegmatis
To compare the expression level in terms of the vector used, the P36 gene and its regulatory sequence were cloned in three different vectors, pYUB18, pYUB178 and pMV261. The main characteristics of the three vectors are as follows: pYUB18 is a low copy number cosmid (11); pYUB178 lacks a replication origin for mycobacteria and is inserted into the genome of the bacilli at a mycobacteriophage integration site (12); both plasmids lack a promoter for the expression of cloned genes; pMV261 is a moderate copy number plasmid and contains the inducible promoter of the BCG hsp60 gene (12). Following the insertion of the P36 gene into pMV261, pYUB178, or pYUB18, the recombinant plasmids pMBA61, pMBA62, and pMBA60 were obtained, respectively. The MPB70 gene was cloned in M. smegmatis to compare its expression level and antibody reactivity with those of P36. The cloning of the MPB70 gene into the pMV261 vector originated plasmid pMBA77 (see Table 1). M. smegmatis was transformed with these plasmids and their presence was confirmed by the preparation of M. smegmatis plasmid DNA, followed by E. coli transformation and restriction analysis or PCR amplification. A Southern blot was performed to demonstrate the presence of pMBA62 in M. smegmatis (data not shown).
P36 expression in M. smegmatis
Expression of the P36 gene in extracts and culture supernatants from the different recombinant M. smegmatis strains was demonstrated by Western blot using specific anti-P36 rabbit sera (Figure 2). The highest production of P36 was observed with M. smegmatis (pMBA61). The recombinant protein was found both in the supernatant and in the cell extract. Addition of H2O2 as an inducer of the hsp60 promoter had no effect on P36 expression.
The immune recognition of the recombinant P36 was studied using a few serum samples from cattle with macroscopic tuberculosis lesions which had been previously assayed for reactivity against the E. coli recombinant P36 (9).The sera reacted against the P36 antigen produced by M. smegmatis (Figure 3A).Two sera recognized no other protein in the supernatant while the other sera recognized a 45-kDa band.Sera from healthy cattle did not react with supernatants or cell extract.
MPB70 expression in M. smegmatis
The MPB70 gene was cloned in the pMV261 vector. MPB70 protein was detected as a single band in the culture supernatant using a monoclonal antibody. A very low level of the protein was detected in the cell extract.
Expression of P36 gene in insect cells
Insect cells were infected with the recombinant baculovirus carrying the P36 gene. In this construction the start codon of the P36 gene is ATG (instead of GTG) to optimize expression in this eukaryotic system. The presence of the P36 gene was demonstrated in infected Sf9 insect cells by dot blot of cell DNA using the P36 gene as a probe. Sf9 cells were infected at a multiplicity of infection of 5. Culture supernatants and cell extracts were obtained at 1.5, 3 and 4 days post-infection (dpi). The expression of P36 in the extract and culture supernatant was demonstrated by Western blot using specific anti-P36 sera. The recombinant protein was found both in the supernatant and in the cell extract (Figure 5), with a molecular mass slightly higher than that of the P36 produced by M. bovis. In the supernatant, P36 appeared first at 1.5 dpi, reached its maximum expression level at 3 dpi and was not found at 4 dpi, probably due to degradation (data not shown). A main P36 band with slightly smaller bands was seen, indicating that degradation is highly reduced in insect cells. Several proteins migrated in the region of P36, making it difficult to detect a P36-specific band in Coomassie blue-stained gels (data not shown). In the cell extracts, P36 appeared at 1.5 dpi, also reaching its maximum expression level at 3 dpi and slightly decreasing at 4 dpi (data not shown). Anti-P36 serum recognized no protein in the cell extract or culture supernatant from nonrecombinant baculovirus-infected insect cells. P36 production was higher in insect cells than in M. bovis, because when only 5 µl of nonconcentrated insect cell supernatant was submitted to SDS-PAGE, the protein was clearly detected by Western blot. On the contrary, no P36 band was detected when 5 µl of nonconcentrated M. bovis supernatant was loaded.
The same panel of sera from infected cattle as used in section 3.3 recognized P36 secreted by insect cells (Figure 3B). These cattle sera recognized no other protein in the insect cell culture supernatant. Sera from healthy cattle did not react with the supernatants.
Comparison of relative P36 production in M. smegmatis and baculovirus-infected insect cells
Since P36 is very poorly stained by Coomassie blue or silver nitrate (Bigi F, unpublished observations, and Berthet FX, personal communication), we decided to determine the relative P36 production level in the different expression systems by Western blot. To compare the relative production level of P36 protein in M. smegmatis, baculovirus-infected insect cells, E. coli and M. bovis, we determined the minimal number of cells producing detectable P36 by Western blot (Table 2). Since the sera and the antigen used in this comparison are the same, this method permits an approximate analysis of production level, even though the absolute amount of protein (i.e., as determined in Coomassie blue-stained gels) is not known. The highest relative P36 production was reached in insect cells. The accumulation in the supernatant was 20 times higher than in the cell extracts. E. coli showed an intermediate level of expression, while the supernatant and cell extracts of M. smegmatis had the same amount of P36 protein, similar to that of the producing organism, M. bovis. The production per cell in the insect cell supernatant was 4600 times higher than in the M. smegmatis supernatant. The protein concentration of each fraction is also given in Table 2. The comparison was based on the number of cells and not on protein concentration because of the highly different protein concentrations of cell extracts and culture supernatants.
Figure 5 - Detection of P36 expressed in insect cells. Western blot: anti-P36 was used as the first antibody. Lanes: 1, M. bovis AN5 culture supernatant; 2, cell extract from insect cells infected with nonrecombinant baculoviruses; 3, cell extract from insect cells infected with recombinant baculoviruses; 4, culture supernatant from insect cells infected with recombinant baculoviruses (50 µl non-TCA-concentrated supernatant). Except for lane 4, 1 ml of culture was used for preparation of extract and supernatant.
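The logic of this per-cell comparison can be made explicit with a short calculation: if the same serum and detection conditions are used throughout, the relative production per cell is proportional to the inverse of the minimal number of cells giving a detectable band. The sketch below only illustrates that arithmetic; the cell numbers are hypothetical placeholders, not the values of Table 2.

```r
# Hypothetical minimal cell numbers giving a detectable P36 band (placeholders, not Table 2 values)
min_cells <- c(insect_supernatant = 1e3, msmegmatis_supernatant = 4.6e6)

# Relative production per cell is inversely proportional to the minimal detectable cell number
per_cell <- 1 / min_cells

# Ratio of per-cell production, insect cell supernatant vs. M. smegmatis supernatant
unname(per_cell["insect_supernatant"] / per_cell["msmegmatis_supernatant"])  # 4600 with these placeholder numbers
```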
Discussion
The objective of this study was to compare different systems for the expression of mycobacterial antigen genes. For these expression assays we chose the P36 antigen, a secreted protein identified and cloned in our laboratory (9). This protein contains several PGLTS amino acid repeats, which constitute the main antigenic region of the protein (Bigi F, unpublished observations).
The gene encoding P36 was introduced into E. coli BL21 under the control of the strong T7 promoter. P36 production was higher than in the E. coli strain DH5α, where the P36 gene is under the control of the lacZ promoter. The degree of gene regulation by IPTG addition was more stringent in E. coli BL21 than in E. coli DH5α, a result possibly explained by the fact that in BL21 it is the RNA polymerase gene, not the recombinant gene, that is under the control of Plac (20). Although E. coli BL21 is a protease-deficient strain, the extent of proteolysis was still high, preventing purification of the antigen and indicating that the Lon and OmpT proteases are not responsible for the degradation.
To improve P36 protein production, stability and, presumably, post-translational modifications, the gene encoding P36 was introduced into M. smegmatis and into the baculovirus/insect cell system. At the same time, we transformed M. smegmatis with the gene encoding the antigen MPB70 (21,22). We used the MPB70 protein to compare its expression level, compartmentalization and antibody reactivity with those of P36. Plasmid constructs containing the genes coding for the two antigens were introduced into M. smegmatis. These constructs correspond to different vectors (integrative or replicative, with and without a strong promoter). The highest P36 production was obtained with a replicative plasmid containing the hsp60 promoter (pMBA61). No induction of P36 gene expression was achieved by adding H2O2 to the culture medium, as reported by others (12). A lower production was obtained with M. smegmatis transformed with the replicative promoter-less plasmid (pMBA60) and with the integrative promoter-less plasmid (pMBA62). This lower, but detectable, P36 production observed with both promoter-less vectors demonstrates that the P36 promoter is active in M. smegmatis. Recombinant MPB70 antigen was produced by M. smegmatis (pMBA77). No production was obtained when the MPB70 gene was cloned in pYUB18 (data not shown), suggesting that the higher the gene copy number, the stronger the production.
P36 protein was found both in the culture supernatant and in the cell extract of M. smegmatis (pMBA61). Since P36 is a secreted protein in M. bovis, the secretion process seems to be more efficient in M. bovis than in M. smegmatis. The supernatant and extract forms were detected as bands of slightly different sizes, with the extract form being heavier. This result suggests that the cell extract form may be the precursor of the processed, secreted form. However, it is unclear why bands of both sizes are observed in the extracellular fluid of M. bovis. A higher secretion rate was observed for MPB70 produced by M. smegmatis, which was found mainly in the culture supernatant. P36 stability was higher in both the M. smegmatis cell extract and the culture supernatant than in E. coli, suggesting that M. smegmatis has fewer proteases or that P36 acquires a folding that allows it to resist proteolysis.
The P36 gene was also expressed in the baculovirus/insect cell system under the control of the polyhedrin promoter. Again, the protein could be identified both in the culture supernatant and in the cell extract. A higher production and secretion level was reached, because recombinant P36 protein was easily detected in Western blots of nonconcentrated supernatants. It has been shown that baculovirus-infected insect cells may secrete recombinant proteins (23). The fact that the protein was not observed in the culture supernatant at 4 dpi may indicate that strong proteolysis arises late in the infection, probably because cell lysis releases proteases. Recombinant P36 antigen had a higher molecular mass than that produced in M. bovis, perhaps due to post-translational protein modifications occurring in insect cells. To our knowledge, there is only one previous case of mycobacterial protein expression in the baculovirus system: the expression of the M. tuberculosis chaperonin 10 protein by Atkins et al. (24). Compared to P36, higher expression of chaperonin 10 was achieved; however, chaperonin 10 is a nonsecreted protein and is already abundant in mycobacteria. A small panel of sera from infected cattle recognized the recombinant P36 produced by M. smegmatis or insect cells. As previously reported (25), cattle sera recognized no protein in cell extracts or supernatants from insect cells infected with nonrecombinant baculoviruses. Recombinant MPB70 produced by M. smegmatis was also recognized by cattle sera. These results establish the basis for a diagnostic assay.
A semiquantitative analysis of the relative production level of recombinant P36 showed that the highest production was obtained in the baculovirus/insect cell supernatant. In this system there also seems to be a good rate of export, because the extracellular/cellular ratio was higher than in M. smegmatis.
A high yield of mycobacterial proteins in systems such as those described here could be the first step toward studying the immune recognition or the biological function of these antigens. From an applied perspective, the recombinant antigens could be useful for diagnostic or prevention studies.
Figure 3 - A, Recognition of M. smegmatis-expressed P36 by cattle sera. Western blot was performed using the culture supernatant from M. smegmatis (pMBA61). Sera from infected (lanes 1-3) or healthy (lanes 4 and 5) cattle were used. B, Recognition of P36 expressed in insect cells (3 days post-infection) by cattle sera. Western blot was performed using the culture supernatant from insect cells infected with recombinant baculoviruses. Sera from healthy (lanes 1-3) or infected (lanes 4-6) cattle were used.
Table 2 -
Amount of cells that produce detectable P36 in Western blots. | 2017-08-14T22:36:19.408Z | 1999-01-01T00:00:00.000 | {
"year": 1999,
"sha1": "25eecffc82fe9e88ad1a9ca7d7f57b2ec879c3b1",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bjmbr/v32n1/3121c.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "25eecffc82fe9e88ad1a9ca7d7f57b2ec879c3b1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
228169430 | pes2o/s2orc | v3-fos-license | A methodological framework for characterizing fish swimming and escapement behaviors in trawls
Knowledge about fish behavior is crucial to be able to influence the capture process and catch species composition. The rapid expansion of the use of underwater cameras has facilitated unprecedented opportunities for studying the behavior of species interacting with fishing gears in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for right-censored observations and variable observation periods between individuals related to instrumental, environmental and behavioral events. In this paper we proposed a methodological framework, based on a parametric Weibull mixture model, to describe the process of escapement attempts through time, test effects of covariates and estimate the probability that a fish will attempt to escape. We additionally proposed to better examine the escapement process at the individual level with regard to the temporal dynamics of escapement over time. Our approach was used to analyze gadoids swimming and escapement behaviors collected using a video set up in front of a selective device known to improve selectivity on gadoids in the extension of a bottom trawl. Comparison of the fit of models indicates that i) the instantaneous rate of escape attempts is constant over time and that the escapement process can be modelled using an exponential law; ii) the mean time before attempting to escape increases with the increasing number of attempts; iii) more than 80% of the gadoids attempted to escape through the selective device; and iv) the estimated probability of success was around 15%. Effects of covariates on the probability of success were investigated using binomial regression but none of them were significant. The data set collected is insufficient to make general statements, and further observations are required to properly investigate the effect of intrinsic and extrinsic factors governing gadoids behavior in trawls. This methodology could be used to better characterize the underlying behavioral process of fish in other parts of a bottom trawl or in relation to other fishing gears.
Introduction
The 14th Sustainable Development Goal of UNESCO [1] emphasized the necessity of maintaining a good ecological status in fish populations and their habitats. In Europe, the Common Fisheries Policy (CFP) focuses on minimizing the effects of fishing on ecosystem functioning while maintaining the incomes of coastal communities. Since the beginning of 2019, this regulation has prohibited the widespread practice of throwing back into the sea unwanted catches of stocks under total allowable catch (TAC) regulations (or with a Minimum Conservation Reference Size, MCRS, in the Mediterranean) [2]. As a result, most European fleets need to reduce their discards to maintain their fishing opportunities. Changes in fishing gear selectivity and spatiotemporal fishing strategies are the two main means that fishers can use to achieve this goal. In order to avoid undesired fish mortality on the deck of fishing vessels, unwanted catches should be avoided in the first place, either by preventing individuals from entering the gear or by allowing escapement. Better designed fishing gears could help to improve the match between the fishers' target species and those actually caught by the gears [3]. A large amount of literature has been dedicated to gear selectivity trials over the last two decades (for reviews see [4][5][6]), in which catches are compared between a standard gear used by fishers and a test gear to assess gains and losses in terms of size selectivity.
Knowledge of animal behavior is a key element for understanding the capture processes of fishing gears [7,8] and is useful for modifying fishing gear design with the objective of influencing catch species composition. As behavioral responses are species specific, with clear differences between flatfish and roundfish, but also between roundfish species, due to differences in swimming capacities and anti-predator strategies [9], numerous case-specific studies have been carried out. Research on the behavioral response of fish to longline gear [10], nets [11], and baited fish pots [12][13][14] has proved to be a fruitful way to assess and modify gear design. For towed gears such as trawls, specific emphasis has been placed on escapement to improve gear design and selectivity, especially for fish and squid in the mouth [15,16] or cod, haddock, whiting and hake in the extension and codend [9,[17][18][19][20][21]. In addition to catch comparison experiments, underwater video can help identify differences in the efficiency of selective devices [22] by analyzing escapement attempts, successes and failures. However, the comprehension of the underlying behavioral processes at individual and collective levels is still challenging.
The rapid expansion of the use of underwater cameras has facilitated unprecedented opportunities for studying the behavior of species interacting with fishing gears in their natural environment. Pioneering work qualitatively described fish behavior interacting with fishing gears [7]. Further research expanded this fundamental knowledge by characterizing typical swimming behavioral responses in trawls, such as cruising behavior in front of the mouth of the net [23], optomotor and herding behavior [20], and the horizontal and vertical distribution of individuals in the net [15]. Fine-scale descriptions of escapement behavior through the meshes were also provided: for example, actively escaping fish approached a mesh at right angles, swimming straight ahead with very little change in direction, while other fish that approached the net at obtuse angles retreated by turning sharply [24]. Count data and percentages followed. For example, the proportion of individuals entering the extension that escaped through the meshes was determined [19,25] and used to compare extension and codend configurations [25]. More sophisticated statistical analyses of video footage are under development in the field of marine surveys. Using deep learning to identify and quantify fish as they pass through the net, [26] proposed to calculate absolute abundance estimates in trawls. A method for computing volumetric fish density using stereo cameras was also recently published by [27]. However, technological advances in camera systems would greatly benefit from the parallel development of dedicated methodologies to study fish behavior in interaction with fishing gears.
The quantification of fish behavior based on video observations suffers from several biases related to instrumental, environmental and behavioral events. First, the camera's field of view does not cover the entire surface of the net or escapement device. Second, environmental conditions such as mud plumes or strong turbidity can reduce image quality. Finally, fish may remain within the camera's field of view for different periods of time. These features induce varying observation periods between individuals, which biases the estimation of behavior durations, hampers comparison between fish and precludes direct estimation of the probability of escapement. For example, the fact that an individual does not attempt to escape during its observation period does not mean that it will not attempt later in time or in a rearward part of the gear. In this work, we develop a methodological framework which takes into account these specificities of the dataset to better understand the behavioral processes that underpin escapement in a trawl. We propose to describe the process of escapement attempts through time using a parametric Weibull mixture model. This survival model enables the handling of right-censored observations, as well as testing the effects of covariates and estimating the probability that a fish will attempt to escape.
We use behavioral descriptors combined with statistical tests and a parametric model to i) describe the swimming behavior of gadoids in the extension, ii) estimate the probability of making an escapement attempt (voluntary contact of the snout with the net), and iii) estimate the probability of success of escapement attempts when these fish are in the extension. To go further, we propose to explore the temporal dynamics of escapements as well as potential influencing factors to infer the underlying behavioral processes. This methodological framework can help address specific questions such as: Do the contact probability and the probability of escape success change over time between the beginning and the middle of the towing period? Do these probabilities evolve with the number of attempts? Are escapement attempts made preferentially in a specific location of the cylinder?
We illustrate our approach by analyzing data collected with a video camera located in front of a selective device, mainly consisting of a 100 mm square mesh in a specific location of the trawl extension. Numerous trials have been made to increase the number of escape opportunities in this part of the gear, including the fitting of different types of netting (larger mesh size, smaller hanging ratio or alternative mesh shapes). Depending on the species being targeted, the panels or sections can be placed in the upper, side, or lower parts of the extension and can also extend across the full circumference [28,29]. There has been a particular focus on square mesh because it facilitates escapement by maintaining open mesh geometry compared with diamond mesh, which tends to close with towing tension [30]. The effectiveness of square mesh panels has been demonstrated in gadoid-directed fisheries as well as Nephrops-directed fisheries [31][32][33][34][35].
Experimental setup and data collection
Experiments at sea were carried out onboard a commercial fishing vessel (22.85 m long, 497 kW and a gross tonnage of 118.77 GT). The extension and codend of the trawl were made of 5 mm single TPE diamond mesh, with a nominal stretched-mesh size of 100 mm. The mandatory 100-mm square mesh panel (SMP), 50 meshes long and 25 meshes wide, was placed in the extension, 4.8 m before the codend. The additional square mesh cylinder (SMC) was made of two SMPs seamed side by side to form a cylinder. At the junction between the SMC and the diamond mesh codend, each square mesh was seamed with two diamond meshes. The additional SMC was inserted ahead of the SMP (Fig 1).
One tow of approximately three hours was filmed underwater in the Celtic sea on September 13 2014, during the day at a depth of 123 m and a towing speed of 3 knots (initial coordinates of the tow 49˚09'95 N-07˚00'74 W). The video camera system used was VECOC (Video Embarquée de Controle et d'Observation de Chalut), capable of generating black and white images under low-light conditions. The underwater video system was composed of three modules ( Fig 2): battery, microcontroller/memory and camera (Tornado low-light camera-Tritech, resolution = 570 TVL, S = 0.0003 lux, SNR > 50 dB). The battery and microcontroller were housed in 10-cm diameter titanium pressure housings. Despite the camera sensitivity, a custom-made white LED light was used as a source of artificial light to assist the underwater camera.
The video system was placed inside the square mesh cylinder, in the upper part, facing toward the codend (Fig 1). The horizontal angle of the camera made it possible to analyze fish escaping the SMC from both the upper and lower parts. Two sequences of 5 min were analyzed, recorded at an interval of 50 min to test the influence of towing duration. The first sequence was started 10 min after the beginning of the tow. These sequences of video footage were selected for their good video quality and fish abundance.
Methodological framework
Description of fish swimming behavior. Fish behavior was recorded using BORIS software (Behavioral Observation Research Interactive Software [36]). Each fish entering the camera's field of view was identified and individually followed until its disappearance from the field of view. Species were identified to the lowest taxonomic level possible. Swimming behavior was described using the position, orientation and speed of the fish. Four speed categories were estimated based on the movement of the fish relative to the trawl: i) the individual was not moving ('immobility'), ii) the individual was swimming slower than the trawl ('slow'), iii) swimming at the same speed as the trawl ('medium'), or iv) swimming faster than the trawl ('fast'). Fish position was categorized vertically ('top', 'center', 'bottom') and horizontally ('left', 'center', 'right'). The left-right position was based on the movement direction of the trawl (i.e., the left side appears on the right of the video). The orientation of each fish's body in the water column was defined in relation to the trawl axis: 'forward', 'lateral' (side-on) and 'aft'. Individual swimming behaviors in the extension were characterized using a time budget: the percentage of time each fish spent in each position and orientation and at each speed. Two metrics were recorded to describe escapement behavior: the escapement attempts and the success/failure of each attempt. An escapement attempt refers to voluntary swimming behavior toward the net in which the snout then comes into contact with the net.
Escapement attempts. We used a parametric Weibull mixture distribution model [37,38] to model the escapement process and estimate the probability of escapement attempts through time. This model formulation makes it possible to take into account several specificities of the studied process (i.e., the escapement attempt) and of the type of observations. The observation period varied between individuals, and an attempt could have occurred outside the field of the camera. As such, individuals that do not attempt to escape during the observation period can be incorporated in the analysis as right-censored observations. Furthermore, the mixture distribution model allows for an unknown proportion of individuals never attempting to escape. The resulting survival function S(t), i.e., the complementary cumulative distribution of escapement attempt events through time, is expressed as follows:
S(t) = P(T > t) = (1 - π) + π exp[-(αt)^γ]
where T is the time elapsed between the arrival of a fish in the field of the camera and the event, or the time between two events, α > 0 and γ > 0 are the scale and shape parameters, respectively, and π is the proportion of fish that eventually attempt to escape. The survival function decreases with time and converges to the asymptote 1 - π, the expected proportion of fish that never attempt to escape.
Compared to standard survival models, where the event of interest is mortality and can occur only once, individual fish can attempt to escape several times. The time to each escape attempt and its rank were recorded, and time was reset to zero after the first, second, third, etc. attempts. The rank of the escapement attempt (first attempt, second attempt, etc.) of the same individual, as well as the period of the tow when the video footage was recorded (beginning: video 1, or middle: video 2), were tested as categorical covariates on the three parameters (α, γ and π). The shape parameter was also fixed at 1 to test a simpler model formulation (the exponential model being a particular case of the Weibull model). This resulted in 32 potential models. Model parameters were estimated by maximizing the model likelihood using a quasi-Newton optimization algorithm. Models were ranked according to Akaike's Information Criterion (AIC), which balances goodness of fit against the number of parameters to favor model parsimony.
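To make this estimation step concrete, the sketch below shows, in R (the software used for the analyses), how such a censored Weibull mixture likelihood can be written and maximized with a quasi-Newton algorithm, and how an AIC value is obtained. The data vectors and the particular parameterization are illustrative assumptions, not the authors' actual code.

```r
# Toy data: waiting times (s) and event indicators (1 = attempt observed, 0 = right-censored)
times  <- c(0.4, 1.1, 0.7, 2.3, 0.9, 3.0, 1.6, 0.5, 2.0, 1.2)
events <- c(1,   1,   0,   1,   1,   0,   1,   1,   0,   1)

# Negative log-likelihood of the Weibull mixture ("cure") model with right-censoring
nll <- function(par, t, d) {
  alpha <- exp(par[1])      # scale parameter, kept positive
  gamma <- exp(par[2])      # shape parameter, kept positive (fix at 1 for the exponential case)
  p     <- plogis(par[3])   # proportion of fish that eventually attempt to escape
  H <- (alpha * t)^gamma                                        # Weibull cumulative hazard
  S <- (1 - p) + p * exp(-H)                                    # mixture survival function
  f <- p * gamma * alpha * (alpha * t)^(gamma - 1) * exp(-H)    # mixture density
  -sum(d * log(f) + (1 - d) * log(S))
}

fit <- optim(c(0, 0, 0), nll, t = times, d = events, method = "BFGS")
aic <- 2 * length(fit$par) + 2 * fit$value   # used to rank the candidate models
c(alpha = exp(fit$par[1]), gamma = exp(fit$par[2]), p = plogis(fit$par[3]), AIC = aic)
```

Covariate effects on α, γ or π (rank of the attempt, video period) would be introduced by letting the corresponding parameter depend on the covariate, and the resulting models compared through their AIC values.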
Probability of success/failure of escapement attempts. The effects of the rank of the escapement attempt and of the period of video recording during the tow (video 1 or video 2) on the probability of success of escapement attempts were tested using binomial GLMs. Again, the model with the lowest AIC was selected.
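A minimal sketch of this comparison in R, assuming a data frame with one row per observed attempt, a binary success indicator and the two candidate covariates (all object names and values below are illustrative):

```r
set.seed(1)
attempts <- data.frame(
  success = rbinom(60, 1, 0.15),              # 1 = successful attempt, 0 = failure (simulated)
  rank    = sample(1:3, 60, replace = TRUE),  # rank of the attempt
  video   = sample(1:2, 60, replace = TRUE)   # recording period (video 1 or 2)
)

m0 <- glm(success ~ 1,             family = binomial, data = attempts)  # null model
m1 <- glm(success ~ factor(rank),  family = binomial, data = attempts)
m2 <- glm(success ~ factor(video), family = binomial, data = attempts)
AIC(m0, m1, m2)            # the model with the lowest AIC is retained
plogis(coef(m0)[1])        # null model: overall probability of a successful attempt
```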
Temporal dynamics of escapements. Results on the characterization of the escapement process at the individual level were examined with regard to the temporal dynamics of escapement over time by looking at the cumulative number of escapement attempts through time.
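A possible way to carry out this inspection in R is sketched below: the cumulative count of attempts is plotted against time and compared with the straight line expected under a constant individual hazard and homogeneous Poisson arrivals (the attempt times are invented for illustration).

```r
attempt_time <- sort(c(12, 25, 31, 58, 64, 90, 97, 120, 131, 160, 171, 199, 230, 244, 270))  # s, illustrative
cum_attempts <- seq_along(attempt_time)                 # cumulative number of attempts
lin <- lm(cum_attempts ~ attempt_time)                  # linear expectation under Poisson-like dynamics
plot(attempt_time, cum_attempts, type = "s",
     xlab = "Time in the sequence (s)", ylab = "Cumulative escapement attempts")
abline(lin, lty = 2)    # systematic departures suggest pulsed arrivals or social effects
```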
Statistical analyses were performed using R software. The minimal dataset is provided as a S1 File.
Ethics statement
Observations were carried out on board the vessel Le Jusant in accordance with the European scientific fishing authorization granted by the French maritime fisheries and aquaculture directorate (2014/898456/SELECMCTrawl/0011 and 2014/898456/SELECMCTrawl/0002). The study is based on underwater video observation, a non-invasive method to study the behavior of fish in the water. Animals were not exposed to additional stress other than that involved in commercial fishing practice. Thus, no additional authorization or ethics approval was required to perform the study. This study did not involve endangered or protected species.
Results
Haddock was by far the predominant species in the catches, with smaller numbers of whiting encountered (157 kg of haddock versus 17 kg of whiting). It was not always possible to distinguish between the two species, so they were analyzed jointly and grouped as gadoids (as in [17]). Over the 10 min of video footage analyzed, 204 gadoids were recorded (106 and 98 for videos 1 and 2, respectively). The mean observation period for each fish was 2.23 seconds (sd = 2.5, min = 0.28, max = 22.52).
Swimming behavior
Time budget distribution in the 204 individuals observed is presented in Fig 3. The gadoids spent the majority of their time swimming at the same speed as the trawl but were observed to swim faster or slower than the trawl more than 20% of their time. The gadoids were predominantly oriented against the direction of current flow, in forward or lateral positions (Fig 3). Only 15% of the fish were oriented aft. They predominantly occupied the upper and central parts of the trawl cylinder while swimming, with these positions accounting for more than 80% of their time. They tended to occupy the horizontal space randomly (Fig 3D). These swimming characteristics were similar between the two periods of video footage recorded after one hour or two hours of towing (S1 Fig).
The high variability observed in Fig 3 corresponds to strong inter-individual variability in swimming behavior. Indeed, the gadoids remained active in this part of the gear, with most showing frequent changes in direction or swimming speed.
Escapement behavior
Out of the 204 individuals, 123 fish attempted to escape at least once, which represents 60% of the observed population. Two hundred and one escapement attempts were observed, meaning that the same individual could try to escape several times. On average, each fish that attempted to escape tried 1.77 times (sd = 1.15, min = 1, max = 6). Escapement attempts were not evenly distributed among positions in the gear: 69% of the attempts were made on the upper part of the net, while no differences were observed among horizontal positions (chi-square test, p-value = 0.13, Fig 4). These results remained similar between the two periods of video footage analyzed (S2 Fig). The time elapsed before making an escape attempt was 1.14 s on average (sd = 1.54, min = 0 and max = 13).
Escapement attempts.
The best-fitting model with the lowest AIC was model 8, in which γ was set at 1, corresponding to an exponential decay (Table 1). This means that the underlying behavioral process is stationary or, in other words, that the instantaneous rate of escapement attempts is constant over time. The best-fitting model also indicates that α differs between the first, second and third (and subsequent) escapement attempts (α_R1 = 0.6, α_R2 = 0.29, α_R3+ = 0.38, Fig 4). The mean time before an escapement attempt (1/α) tended to increase between the first and second attempts. The proportion of the population that attempted to escape was estimated at 88% and did not depend on the rank of the attempt. Fig 5 illustrates how all curves converge to the same estimate of 1-π. The time of the video recording in the tow (video 1 vs video 2) was not retained by the model selection procedure for any of the three parameters tested.
Success probability of escapement attempts. Comparison of goodness of fit indicated that the null model had the lowest AIC (Table 2). The probability of success of an escapement attempt did not depend on the rank of the attempt or on the period of the tow when the video recording was made (video 1: beginning, or video 2: middle). The probability of success was estimated at 14.8%.
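Because the retained model is exponential, the rate estimates reported above translate directly into mean waiting times (1/α, assuming α is expressed per second, consistent with the observation times in seconds). A small worked check:

```r
alpha <- c(rank1 = 0.60, rank2 = 0.29, rank3plus = 0.38)  # estimates reported above (per second)
round(1 / alpha, 1)   # mean time before an attempt: ~1.7 s (1st), ~3.4 s (2nd), ~2.6 s (3rd and more)
```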
Temporal dynamics of escapement. If the underlying behavioral process driving escapement at the individual scale can be modeled using an exponential process (constant hazard rate) and fish enter the trawl following a homogeneous Poisson process (i.e., purely random arrivals), the cumulative number of escapement attempts is expected to increase approximately linearly with time (Fig 6D). One can see that the slopes of the linear regressions are quite similar between the two videos (Fig 6C). However, our results indicate that the observations deviated from the linear regression, which suggests that social phenomena could be at work.
Discussion
The use of underwater video cameras allows direct observation of animal swimming behavior and allows the two processes, the escapement attempt and its potential success, to be distinguished. Indeed, understanding the reasons for failure is just as important as understanding the reasons for success. Despite various constraints such as video quality, water turbidity, and the time required to process video footage [39], numerous studies have used video to characterize animal behaviors or their interactions with fishing gears and especially with trawls (grid [25,[40][41][42]; drop chain [16,43]; trawl mouth [44,45]; panel [46,47]). The data set analyzed (n = 204 gadoid fish individually followed over 10 minutes of observation from one fishing operation using artificial white light) allows us to demonstrate the great potential of the proposed methodological framework for making inferences about the swimming and escapement behavior of gadoids interacting with a 100 mm square mesh cylinder in the extension of a bottom trawl. Nevertheless, this data set is illustrative and not sufficient to support general statements based on the results obtained.
Artificial light is known to affect fish behavioral responses in trawls (reviewed by [48]). Although some studies simply ignore this bias, others use catch data in addition to the camera observations to determine whether artificial light affected fish behavior [49]. The use of artificial white light was required given the limited ambient light at 120-m depth in the Celtic Sea. Due to limitations in our experimental design, we were unable to measure the effect of artificial light in our study. Several studies recommend using infrared light instead to minimize the impact of light on fish behavior [50,51]. However, a limitation of using red and far-red light is the high absorption of these wavelengths by the water. Nevertheless, it remains interesting to compare the results of this pilot study with the literature.
Swimming behavior
Fish position in a trawl varies between species. Previous investigations of species-specific behavior in the aft part and codend of a trawl have shown that Nephrops and flatfish remain low in the net, cod have a more uniform vertical distribution, and haddock and whiting stay high [21,23]. This is in line with our observation of haddock and whiting swimming mainly in the central to upper part of the trawl in the towing direction.
The tendency of roundfish to show varying degrees of "rise" as they tire has led to the development of an upper panel in the extension to increase the escapement rate of undersized fish [20,52,53]. Since 2017, a square mesh panel has been mandatory in the Celtic Sea to reduce fishing mortality of juvenile roundfish (mainly haddock and whiting). Our observations confirm these previous findings and show that 70% of escapement occurred in the upper part of the trawl. Nevertheless, around 20% of escapement attempts took place through the side of the cylinder. As already pointed out by [34] for hake, increasing the surface area of a selective device such as a square mesh device from the top to the sides or even to the bottom (to produce a cylinder) can increase the selectivity for several species.
Two types of swimming behavior are reported in the literature: 1) ordered and steady swimming behavior and 2) erratic swimming behavior characterized by a higher variation in angular velocity and faster swimming speed [54]. Our observations indicate that, under the conditions of the experiment, most of the gadoids were still actively swimming against the current flow, holding their position in relation to the camera (medium to fast swimming, positioned in a forward body orientation) when in the forward part of the extension. This finding agrees with previous observations in which optomotor responses were the most common behavior [19].
Escapement behavior
Escapement attempts. At an average towing speed of 3 knots and given the swimming abilities of gadoids, escapement attempts in the extension must be an active and voluntary behavior [33]. In comparison, [19] reported that in half of the observations in the codend, fish appear to come into contact with the net through the pulsing motion of the codend itself rather than by actively swimming toward the netting, leading to "opportunistic escape". The percentage (π) estimated in this pilot study (88%) appears higher than those reported for hake (62%) or megrim (41%) [34]. This difference could be attributed to the effect of the artificial light used in our experiment (see above). However, it is worth noting that our approach provides an estimate of the expected percentage of the population that attempts to escape in the extension, based on the temporal dynamics of the observed attempts, whereas most published works report the observed percentage within the field of view of the camera.
We also observed that haddock and whiting can make successive attempts to escape and that the mean time before an attempt increases between the first and second attempts and decreases between the second and third or subsequent attempts. More observations should be made to confirm this pattern and to better understand this behavioral phenomenon (reinforcement, learning, etc.), as well as potential biases linked to the presence of light. Our modeling approach also allows testing whether π depends on the rank of the escapement attempt. Our preliminary results suggest independence between successive attempts. The value of this parameter is likely to be species- and size-specific and might be influenced by net geometry.
Our results suggest little difference between the two video sequences recorded 50 min apart, suggesting that similar processes are at work in both. The hypothesis that fish would become more tired as towing time increases and make fewer escapement attempts in the extension should be further investigated but is not supported by our observations. By using curve fitting, we went further than previous studies in the literature and gained knowledge on the underlying behavioral processes involved in the escapement of fish from trawls. Our preliminary results seem to indicate that an escapement attempt is a memoryless phenomenon (γ = 1): the probability of attempting to escape can be modelled using an exponential law, and the instantaneous rate of escapement is constant over time at the scale of 5 min. This type of finding will be useful for improving the hypotheses made to model fish behavior in trawls [55]. In contrast, if additional observations indicated that in reality γ < 1, this would indicate that the instantaneous rate of escapement decreases over time.
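The memoryless property mentioned here can be illustrated numerically: for an exponential waiting time, the probability of waiting a further t seconds is the same whether or not s seconds have already elapsed. A one-line check in R (the rate value is arbitrary):

```r
alpha <- 0.6; s <- 2; t <- 1.5                                                         # arbitrary rate (s^-1) and times
cond  <- pexp(s + t, alpha, lower.tail = FALSE) / pexp(s, alpha, lower.tail = FALSE)   # P(T > s+t | T > s)
plain <- pexp(t, alpha, lower.tail = FALSE)                                            # P(T > t)
all.equal(cond, plain)   # TRUE: the time already waited does not change the remaining waiting time
```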
Escapement success. We estimate the probability of success of an escape attempt through a 100 mm square mesh in the extension at 14.8%, based on our experiment. This is lower than previously published results for the same species in the codend, where 20 and 35% of the fish approaching or striking the netting, respectively, resulted in successful escapes [19]. Escapement success is a size-related tradeoff: on the one hand, individuals that are too small, while capable of passing through the mesh easily, have poor swimming abilities; on the other hand, individuals that are too large, while capable of good swimming performance, have too broad a cross section to penetrate through the mesh and escape. As such, the difference observed can result from differences in fish size (15-44 cm in [19] compared with 25-65 cm in our experiment, based on catch sampling). However, precise determination of fish length was not possible here using only one camera. Further studies with video observation of fish interacting with selective devices will require the development of a stereovision system to accurately quantify fish length, position and orientation [56].
It is reasonable to assume that fish encounter fishing gear on more than one occasion, that their behavior may be modified through a process of learning from past experiences [8], and that this could have implications for fisheries management [57]. Experiments run under laboratory conditions have demonstrated the role of learning in mesh penetration by haddock [58] and clupeoids [59]. However, how rapidly can fish learn? The binomial regression we used allows such processes to be tested by examining the influence of successive attempts on the probability of success. Preliminary results suggest that it is unlikely that fish learn from recent failures, although additional research will be required, as this might vary between species and individuals.
In our experiment, the probability of success of observed attempts was the same after 10 minutes as after one hour of towing. In comparison, escapement from the codend evolves with time as the catch builds up there and obstructs the open meshes [19,60]. The probability of escape and its success can be higher for groups [18,23]. Indeed, social behaviors for predator avoidance have been demonstrated in schooling fish [61], but social interactions and facilitation need to be further investigated in demersal fish communities.
Applied to fishing gear, [62] demonstrated that the density of fish ahead of a bottom trawl positively affects catchability and that qualitative differences can be observed in escapement and capture behavior at various densities [8]. Beyond density, social learning (i.e., the process by which individuals acquire new behavior through observation of, or interaction with, other animals [63]) among fish in response to an approaching net has also been demonstrated under laboratory conditions [63]. The results of our analysis of the dynamics of escapement attempts through time raise several questions and hypotheses. Our preliminary observations show that the cumulative number of escapement attempts over time does not strictly follow a linear trend. Two main explanations involving social behavior can be put forward to explain such differences. Firstly, fish might not be entering the extension following a homogeneous Poisson process. Indeed, when examining the total number of gadoids observed per second (S3 Fig), we can see that this is not statistically constant but evolves through time. This arrival of fish in pulses can induce small accelerations in the temporal dynamics. In this case, the temporal dynamics of escapements should be analyzed conditionally on arrivals, and further methodological developments are required to combine both processes. Secondly, social interactions can play a role and influence temporal dynamics with, for instance, an effect of the number of congeners on following behavior during escapement, or of the total number of fish through a crowding effect. When such interactions occur, the individual probability of escaping can increase with the number of fish. Further observations, coupled with modeling work, could help improve understanding of any underlying social mechanisms [64].
In conclusion, knowledge of fish behavior in fishing gear is still scarce and sometimes contradictory for the same device or species, due to the influence of intrinsic (physiological condition, motivational state, fish size and visual ability) and extrinsic (ambient light levels, temperature, fish density) factors [8,65]. This advocates for a deeper examination of the behavioral processes underlying capture and escapement using laboratory studies, observations at sea and modelling. In this study, we developed a methodological framework, including Weibull modelling, which makes it possible to take into account some specificities of the data set (variable observation periods between individuals, censored and recurrent measurements) and to estimate parameters such as the proportion of the population that tends to escape, the instantaneous rate of escapement and its temporal dynamics, as well as the effect of covariates on these parameters. We additionally proposed to examine the escapement process at the individual level with regard to the temporal dynamics of escapement over time. This methodology was applied to the case study of gadoid escapement behavior in relation to a selective device (100-mm square mesh cylinder) in the extension of a bottom trawl. Nevertheless, the results presented herein are based on too small a number of observations (in terms of the number of fish and of fishing operations) to support general statements, and additional research will be required to validate the findings.
"year": 2020,
"sha1": "5477117dce72647a59172a37e9163ce9a418da6f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243311&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac200e0ccf1aedadc7525858b4fa291a033a28ba",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
264211363 | pes2o/s2orc | v3-fos-license | Biosensors for health applications
The ability to assess health status, disease onset and progression, and monitor treatment outcome through a non-invasive method is the main aim to be achieved in health care promotion and delivery and research. There are three prerequisites to reach this goal: specific biomarkers that indicates a healthy or diseased state; a non-invasive approach to detect and monitor the biomarkers; and the technologies to discriminate the biomarkers. The early disease diagnosis is crucial for patient survival and successful prognosis of the disease, so that sensitive and specific methods are required for that. Among the numerous mankind diseases, three of them are relevant because of their worldwide incidence, prevalence, morbidity and mortality, namely diabetes, cardiovascular disease and cancer. In recent years, the demand has grown in the field of medical diagnostics for simple and disposable devices that also demonstrate fast response times, are user-friendly, costefficient, and are suitable for mass production. Biosensor technologies offer the potential to fulfill these criteria through an interdisciplinary combination of approaches from nanotechnology, chemistry and medical science. The emphasis of this chapter is on the recent advances on the biosensors for diabetes, cardiovascular disease and cancer detection and monitoring. An overview at biorecognition elements and transduction technology will be presented as well as the biomarkers and biosensing systems currently used to detect the onset and monitor the progression of the selected diseases. The last part will discuss some challenges and future directions on this field.
Introduction
The ability to assess health status, disease onset and progression, and monitor treatment outcome through a non-invasive method is the main aim to be achieved in health care promotion and delivery and research. There are three prerequisites to reach this goal: specific biomarkers that indicates a healthy or diseased state; a non-invasive approach to detect and monitor the biomarkers; and the technologies to discriminate the biomarkers. The early disease diagnosis is crucial for patient survival and successful prognosis of the disease, so that sensitive and specific methods are required for that. Among the numerous mankind diseases, three of them are relevant because of their worldwide incidence, prevalence, morbidity and mortality, namely diabetes, cardiovascular disease and cancer. In recent years, the demand has grown in the field of medical diagnostics for simple and disposable devices that also demonstrate fast response times, are user-friendly, costefficient, and are suitable for mass production. Biosensor technologies offer the potential to fulfill these criteria through an interdisciplinary combination of approaches from nanotechnology, chemistry and medical science. The emphasis of this chapter is on the recent advances on the biosensors for diabetes, cardiovascular disease and cancer detection and monitoring. An overview at biorecognition elements and transduction technology will be presented as well as the biomarkers and biosensing systems currently used to detect the onset and monitor the progression of the selected diseases. The last part will discuss some challenges and future directions on this field.
Biorecognition elements and transduction technology
Biorecognition elements
Clinical analyses are no longer carried out exclusively in the clinical chemistry laboratory. Measurements of analytes in biological fluids are routinely performed in various locations, including hospitals, in non-hospital settings by caregivers, and at home by patients. Biosensors (bioanalytical sensors) for the measurement of analytes of interest in clinical chemistry are ideally suited for these new applications. These factors make biosensors very attractive compared to contemporary chromatographic and spectroscopic techniques. A biosensor can be generally defined as a device that consists of a biological recognition system and a transducer, for signal processing, to detect and quantify a particular analyte (Hall, 1990). Biosensors provide advanced platforms for biomarker analysis with the advantages of being easy to use, rapid and robust, as well as offering multianalyte testing capability; however, a specific biomarker is necessary. Biomarkers are molecules that can be objectively measured and evaluated as indicators of normal or disease processes and pharmacologic responses to therapeutic intervention (Rusling et al., 2010). The first biosensor was reported by Clark and Lyons (1962) for the measurement of glucose in blood. They coupled the enzyme glucose oxidase to an amperometric electrode for PO2. The enzyme-catalyzed oxidation of glucose consumed O2 and lowered PO2, which was sensed in proportion to the glucose concentration in the sample. The enzyme-based sensor was the first generation of biosensors, and in the subsequent years a variety of biosensors for other clinically important substances were developed. Therefore, biosensors can be categorized according to the biological recognition element (enzymatic, immuno, DNA and whole-cell biosensors; Spichiger-Keller, 1998) or the signal transduction method (electrochemical, optical, thermal and mass-based biosensors; Wanekaya et al., 2008) (Fig 1) (Arya et al., 2008).
Substantial amounts of published work on enzyme-based biosensors are found in the literature owing to their medical applicability, commercial availability and the ease of enzyme isolation and purification from different sources; in addition, enzymes can be used in combination for the detection of a target analyte (D'Orazio, 2003). When enzymes act as biocatalytic elements, the enzymatic reaction is accompanied by the consumption or production of species such as CO2, NH3, H2O2, H+ or O2, or by activation/inhibition of enzyme activity, which can be detected easily by various transducers and correlated with the substrates. Amongst the various enzymes, glucose oxidase, horseradish peroxidase and alkaline phosphatase have been employed in most biosensor studies (Laschi et al., 2000; Wang, 2000). Detection limits are satisfactory or better, but enzyme stability is still a problem, especially over long periods of time. A major advantage of enzyme-based biosensors is the ability, in some cases, to modify catalytic properties or substrate specificity by genetic engineering. The major limitation is the lack of specificity in differentiating among compounds of similar classes (Buerk, 1993; 2001; D'Orazio, 2003). Affinity biosensors have received considerable attention in recent years, since they provide information about the binding of antibodies to antigens, of cell receptors to their ligands, and of DNA/RNA to complementary sequences of nucleic acids, and about functioning enzymatic pathways that allow the screening of gene products for metabolic functions. Immunosensors are based on the high selectivity of the antibody-antigen reaction. The specific interaction is sensed by a transducer, and measurements can be obtained directly, in minutes, rather than the hours required for visualizing the results of an ELISA test (Spangler et al., 2001). Either an antigen or an antibody can be immobilized onto a support surface in an array format (Huang et al., 2004) and participates in a biospecific interaction with the other component, allowing detection and quantification of an analyte of interest (Stefan et al., 2000). The sensors may operate either as direct or as indirect sensors, often referred to as homogeneous and heterogeneous immunosensors, respectively. Antibodies are the critical part of an immunosensor to provide sensitivity and specificity. As the antibody-antigen complex is almost irreversible, only a single immunoassay can be performed (Buerk, 1993), although intensive research effort has been directed toward the regeneration of renewable antibody surfaces. Reproducibility is another concern, partly due to antibody orientation and immobilization onto the sensor surface. Immunosensors are inherently more versatile than enzyme-based biosensors because antibodies are more selective and specific. Immunosensors are currently being used for infectious disease diagnosis (Huang et al., 2004). DNA analysis is the most recent and most promising application of biosensors to clinical chemistry. DNA is well suited for biosensing because the base pairing interactions between complementary sequences are both specific and robust. DNA biosensors employ immobilized, relatively short, synthetic single-stranded oligodeoxynucleotides that hybridize to a complementary target DNA in the sample (Palecek, 2002). Hybridization can be performed either in solution or on solid supports.
The system can be used for repeated analysis, since the nucleic acid ligands can be denatured to reverse binding and then regenerated (Ivnitski et al., 1999). However, considerable research is still needed to develop methods for directly targeting natural DNA present in organisms and in human blood with high detection sensitivity (Palecek, 2002). Accurate tests for recognizing DNA sequences usually need to amplify small amounts of DNA into readable quantities using the polymerase chain reaction (PCR). Some of the new gene chips are sensitive enough to eliminate the need for target amplification, a time-consuming process. This improvement has stimulated the development of DNA biosensors with a view toward rapid analysis for point-of-care diagnostics of infectious disease, cancer testing, genetic disease diagnosis and measurement of drug resistance or susceptibility; even whole circulating cancer cells can be identified (Liu et al., 2009). Whole-cell biosensors are based on the general metabolic status of bacteria, fungi, yeasts, or animal or plant cells, which are the recognition elements. Whole cells can easily be manipulated and adapted to consume and degrade new substrates. The many enzymes and cofactors that co-exist in cells give them the ability to consume, and hence detect, a large number of chemicals; however, this may compromise their selectivity (Ding et al., 2008). The sensing molecule is, in general, held on a solid support, the matrix. The chemical properties of the chosen support determine the method of immobilization and the operational stability of a biosensor. In particular, the support should be resistant to a wide range of physiological pHs, temperatures, ionic strengths and chemical compositions. The ability to co-immobilize more than one biologically active component is desirable in some cases. Conducting polymers, carbon nanotubes, nanoparticles, sol-gels/hydrogels and self-assembled monolayers are commonly used to immobilize a variety of sensing molecules (Arya et al., 2008).
Transduction technology
The interaction of the analyte with the bioreceptor is designed to produce an effect measured by the transducer, which converts the information into a measurable signal. A variety of transduction methods have been used in the development of biosensor technology; however, the most common methods are electrochemical, optical and piezoelectric (Buerk, 1993; Collings & Caruso, 1997; Wang, 2000). Electrochemical sensors measure the electrochemical changes that occur when analytes interact with the sensing surface of the detecting electrode. The electrochemical assay is simple and reliable, and has a low detection limit and a wide dynamic range, because the electrochemical reactions occur at the electrode-solution interface. For these reasons, and because of cost competitiveness, more than half of the biosensors reported in the literature are based on electrochemical transducers (Meadows, 1996). The electrical changes can be potentiometric (a change in the measured voltage between the indicator and reference electrodes), amperometric (a change in the measured current at a given applied voltage), or conductometric (a change in the ability of the sensing material to transport charge). Amperometry is the electrochemical technique usually applied in commercially available biosensors for clinical analyses that detect redox reactions. The electrochemical platform is suited for enzyme-based and DNA/RNA sensors, field monitoring applications (e.g., handheld devices) and miniaturization toward the fabrication of an implantable biosensor. Optical transducers can be used to monitor affinity reactions and have been applied to quantitate antigenic species of interest in clinical chemistry and to study the kinetics and affinity of antigen-antibody and DNA interactions. Of particular interest have been direct optical transducers based on methods such as internal reflectance spectroscopy, surface plasmon resonance and evanescent wave sensing. Light entering an optical device is directed through optical fibers or planar waveguides toward a sensing surface and reflected back out again. The reflected light is monitored using a detector such as a photodiode, revealing information about the physical events occurring at the sensing surface. The measured optical signals often include absorbance, fluorescence, chemiluminescence, surface plasmon resonance (to probe refractive index), or changes in light reflectivity. Optical biosensors are preferable for screening a large number of samples simultaneously; however, they cannot be easily miniaturized for insertion into the bloodstream. Most optical methods of transduction require a spectrophotometer to detect signal changes. Mass sensors produce a signal based on the mass of chemicals that interact with the sensing film, usually a vibrating piezoelectric quartz crystal. Acoustic wave devices, made of piezoelectric materials that bend when a voltage is applied to the crystal, are the most common mass sensors. Acoustic wave sensors are operated by applying an oscillating voltage at the resonant frequency of the crystal and measuring the change in resonant frequency when the target analyte interacts with the sensing surface. Because a significant amount of nonspecific adsorption occurs in solutions, piezoelectric sensors have received their widest use in gas-phase analyses. Extremely high sensitivities are possible with these devices, which can detect femtogram levels of drug vapors.
Similarly to optical detection, piezoelectric detection requires large, sophisticated instruments to monitor the signal. The heat generated during a reaction can be used in a calorimetry-based biosensor: changes in solution temperature caused by the reaction are measured and compared to a reference sensor with no reaction to determine the analyte concentration. This approach is well suited to enzyme/substrate reactions that cause changes in solution temperature, but not to receptor-ligand reactions, because there is no temperature change at steady state and transient measurements are very difficult to make. Calorimetric microsensors have been manufactured for the detection of cholesterol in blood serum based on the enzymatically produced heat of oxidation and decomposition reactions (Caygill et al., 2010).
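As an illustration of the mass-based (piezoelectric) transduction described above, the short sketch below estimates the resonant frequency shift of a quartz crystal microbalance using the Sauerbrey relation, the standard expression for a rigid thin-film mass load. The crystal frequency, electrode area and adsorbed mass used here are hypothetical values chosen only for illustration and are not taken from the text.

```python
import math

def sauerbrey_shift_hz(f0_hz, delta_mass_g, area_cm2,
                       rho_q=2.648, mu_q=2.947e11):
    """Frequency shift of a QCM for a rigidly coupled mass load.

    Sauerbrey relation: df = -2 f0^2 dm / (A * sqrt(rho_q * mu_q)),
    with quartz density rho_q (g/cm^3) and shear modulus mu_q (g/(cm*s^2)).
    """
    return -2.0 * f0_hz**2 * delta_mass_g / (area_cm2 * math.sqrt(rho_q * mu_q))

# Hypothetical example: 10 MHz crystal, 0.2 cm^2 sensing area,
# 10 ng of analyte captured by the immobilized receptor layer.
df = sauerbrey_shift_hz(f0_hz=10e6, delta_mass_g=10e-9, area_cm2=0.2)
print(f"Predicted frequency shift: {df:.1f} Hz")  # roughly -11 Hz
```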
Glucose as diabetes biomarker
About 3% of the population worldwide suffers from diabetes, a leading cause of death, and its incidence is growing fast. Diabetes is a syndrome of disordered metabolism resulting in abnormally high blood sugar levels. Without diligent monitoring of blood glucose concentrations, diabetic individuals are at greater risk of heart disease, stroke, high blood pressure, blindness, kidney failure, neurological disorders and other health-related complications. Through patient education, regular examinations and tighter blood glucose monitoring, many of these complications can be reduced significantly (Turner & Pickup, 1985; Lasker, 1993). Optimal management of diabetes involves patients measuring and recording their own blood glucose levels. Fasting plasma glucose concentrations in the range 6.1-6.9 mmol L−1 lie above the normal physiological level, so variation of the blood glucose level can indicate diabetes mellitus, besides other conditions. Consequently, quantitation of the glucose content is of extreme importance, as it is the main diabetes biomarker. The American Diabetes Association recommends that insulin-dependent type 1 diabetics self-monitor blood glucose 3-4 times daily, while insulin-dependent type 2 diabetics monitor once daily (American, 1997). However, frequent self-monitoring of glucose concentrations is difficult, given the time, inconvenience and discomfort involved with the traditional measurement technique. Several methods for glucose analysis have been reported, but most involve complex procedures or are expensive. It is therefore necessary to develop a simple, sensitive, accurate, micro-volume and low-cost approach for glucose analysis that is appropriate for rapid field tests and is also effective as an alternative to the existing methods.
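Clinical glucose values are reported in either mmol L−1 or mg dL−1 depending on the country, so a quick conversion is often needed when comparing thresholds such as those quoted above. The sketch below performs the standard conversion (1 mmol L−1 of glucose is about 18.02 mg dL−1, from the molar mass of glucose); the specific values converted are simply the thresholds mentioned in the text.

```python
GLUCOSE_MG_PER_MMOL = 18.02  # molar mass of glucose (~180.2 g/mol) / 10

def mmol_per_l_to_mg_per_dl(value_mmol_l: float) -> float:
    """Convert a plasma glucose concentration from mmol/L to mg/dL."""
    return value_mmol_l * GLUCOSE_MG_PER_MMOL

for threshold in (6.1, 6.9):
    print(f"{threshold} mmol/L = {mmol_per_l_to_mg_per_dl(threshold):.0f} mg/dL")
# 6.1 mmol/L ~ 110 mg/dL, 6.9 mmol/L ~ 124 mg/dL
```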
Biosensors for glucose measurement
Glucose can be monitored by invasive and non-invasive technologies. The glucose biosensor was the first biosensor reported (Clark & Lyons, 1962), and since then a great number of different glucose biosensors have been developed, including implantable sensors for measuring glucose in blood or tissue. Glucose sensors are now widely available as small, minimally invasive devices that measure interstitial glucose levels in subcutaneous fat (Cengiz & Tamborlane, 2009). Requirements of a sensor for in vivo glucose monitoring include miniaturization of the device, long-term stability, elimination of oxygen dependency, convenience to the user and biocompatibility. Long-term biocompatibility has been the main requirement and has limited the use of in vivo glucose sensors, both subcutaneous and intravascular, to short periods of time. Diffusion of low-molecular-weight substances from the sample across the polyurethane outer membrane of the sensor results in loss of sensor sensitivity; to address this problem, microdialysis or ultrafiltration technology has been coupled with glucose biosensors. The invasive glucose monitors currently available commercially use glucose oxidase-based electrochemical methods, with the electrochemical sensors inserted into the interstitial fluid space. Most sensors are reasonably accurate, although sensor errors including drift, calibration error, and the delay of the interstitial sensor value behind the blood value are still present (Castle & Ward, 2010). The glucose biosensor is the most widely used example of an electrochemical biosensor and is based on a screen-printed amperometric disposable electrode. This type of biosensor has been used widely throughout the world for glucose testing in the home, bringing diagnosis to on-site analysis.
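To make the glucose oxidase-based amperometric readout above concrete, the following sketch converts a measured sensor current into a glucose concentration through a simple linear calibration, the kind of relationship typically assumed within a sensor's working range. The calibration slope, intercept and example current are hypothetical illustration values, not parameters of any particular commercial device.

```python
def glucose_from_current(current_na: float,
                         slope_na_per_mm: float = 12.0,
                         intercept_na: float = 3.0) -> float:
    """Convert an amperometric sensor current (nA) into a glucose
    concentration (mmol/L) using a linear calibration i = a*C + b.

    The slope and intercept would normally come from calibration
    against reference blood glucose measurements.
    """
    return (current_na - intercept_na) / slope_na_per_mm

# Hypothetical reading of 75 nA from the working electrode:
print(f"Estimated glucose: {glucose_from_current(75.0):.1f} mmol/L")  # ~6.0
```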
Non-invasive glucose sensing is the ultimate goal of glucose monitoring, and the main approaches being pursued for glucose sensor development are near-infrared spectroscopy, analysis of excreted physiological fluids (tears, sweat, urine, saliva), microcalorimetry, enzyme electrodes, optical sensors, and sonophoresis and iontophoresis, the last two of which extract glucose through the skin (Koschwanez & Reichert, 2007; Beauharnois et al., 2006; Chu et al., 2011). Despite the relative ease of use, speed and minimal risk of infection involved with infrared spectroscopy, this technique is hindered by low sensitivity, poor selectivity, frequently required calibrations, and difficulties with miniaturization. Problems surrounding direct glucose analysis through excreted physiological fluids include a weak correlation between fluid and blood glucose concentrations; exercise and diet that alter glucose concentrations in the fluids also produce inaccurate results (Pickup et al., 2005). The desire to create an artificial pancreas drives continued research efforts in the biosensor area. Nevertheless, the drawbacks of in vivo biosensors must be solved before such an insulin-modulating system can be achieved.
Cardiovascular disease biomarkers
Cardiovascular diseases are highly preventable, yet they remain a major cause of death worldwide. One of the most important causes of the increasing incidence of cardiovascular disease and cardiac arrest is hypercholesterolemia, i.e. an increased concentration of cholesterol in blood (Franco et al., 2011). Hence, estimation of the cholesterol level in blood is important in clinical applications. The early evaluation of patients with symptoms that indicate an acute coronary syndrome is of great clinical relevance. Biomarkers have become increasingly important in this setting to supplement electrocardiographic findings and patient history, because one or both can be misleading. Cardiac troponin is the only marker used routinely nowadays in this setting because it is specific to myocardial tissue, easily detected, and useful for therapeutic decision making. Determination of the level of other, non-myocardial-tissue-specific markers might also be helpful, such as myeloperoxidase, copeptin, growth differentiation factor 15 and C-reactive protein (CRP). CRP, which reflects different aspects of the development of atherosclerosis or acute ischemia, is one of the plasma proteins known as acute-phase proteins, and its levels rise dramatically during inflammatory processes occurring in the body. This increase is due to a rise in the plasma concentration of IL-6, which is produced predominantly by macrophages as well as adipocytes. CRP can rise as much as 1000-fold with inflammation and has been found to be the only marker of inflammation that independently predicts the risk of a heart attack.
Biosensors in cardiovascular disease
Biosensors for cholesterol measurement account for the majority of published articles in the field of cardiovascular disease. In the fabrication of cholesterol biosensors for the estimation of free and total cholesterol, mainly cholesterol oxidase (ChOx) and cholesterol esterase (ChEt) have been employed as the sensing elements (Arya et al., 2008) (Fig. 2). Electrochemical transducers have been used effectively for the estimation of cholesterol (Charpentier & Murr, 1995; Singh et al., 2006; Zhou et al., 2006; Arya et al., 2007). Owing to the number and reliability of optical methods, a variety of optical transducers have also been employed for cholesterol sensing, namely monitoring luminescence, change in the color of a dye, fluorescence and others (Arya et al., 2008). Other cardiovascular disease biomarkers are also quantified. CRP measurement relies mainly on immunosensing technologies with optical, electrochemical and acoustic transducers, besides approaches to the simultaneous measurement of several analytes (Albrecht et al., 2008; Heyduk et al., 2008; McBride & Cooper, 2008; Niotis et al., 2010; Qureshi et al., 2010a,b; Sheu et al., 2010; Zhou et al., 2010). Silva et al. (2010) incorporated streptavidin polystyrene microspheres onto the surface of screen-printed electrodes (SPEs) in order to increase the analytical response for cardiac troponin T, and Park et al. (2009) used an assay based on virus nanoparticles for highly sensitive and selective detection of troponin I, a protein marker of a higher risk of acute myocardial infarction. Early and accurate diagnosis of cardiovascular disease is crucial to save lives, especially for patients suffering a heart attack. Accurate and fast quantification of cardiac-muscle-specific biomarkers in the blood enables accurate diagnosis and prognosis and timely treatment of patients. The increasing incidence of cardiovascular disease and cardiac arrest in contemporary society underlines the need for cholesterol and other biomarker biosensors. However, only a few have been successfully launched on the market. One of the reasons lies in the optimization of critical parameters, such as enzyme stabilization, quality control and instrumentation design. The efforts directed toward the development of cardiovascular disease biosensors have resulted in the commercialization of a few cholesterol biosensors. A better understanding of bioreagent immobilization and technological advances in microelectronics are likely to speed up commercialization of the much-needed biosensors for cardiovascular diseases.
Cancer biomarkers
Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. This disease continues to increase globally, largely because of the aging and growth of the world population alongside an increasing adoption of cancer-causing behaviors, particularly smoking. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, and lung cancer is the leading cancer site in males. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer (Jemal et al., 2011). Solid cancers are a leading cause of morbidity and mortality worldwide, primarily due to the failure of effective clinical detection and treatment of metastatic disease in distant sites (Chambers et al., 2002; Pantel & Brakenhoff, 2004). Cancer can be caused by a range of factors, both genetic and environmental. Chemical, physical and biological factors such as exposure to carcinogenic chemicals, radiation, bacterial infections (e.g. stomach cancer), viral infections (e.g. cervical cancer) and toxins (e.g. aflatoxin in liver cancer) can lead to cancer development (Vineis et al., 2010). As the causes of cancer are so diverse, clinical testing is also very complex. Multi-factorial changes (genetic and epigenetic) can cause the onset of the disease and the formation of cancer cells. However, no single gene is universally altered during this process; rather, a set of genes is involved, which complicates correct diagnosis of the disease. The changes that take place in tumors from different locations (organs), as well as within tumors from the same location, can be so variable and overlapping that it is difficult to select a specific change or marker for the diagnosis of a specific cancer. Therefore, a range of biomarkers can potentially be analyzed for disease diagnosis. These biomarkers, or molecular signatures, can be produced either by the tumor itself or by the body in response to the presence of cancer (Robert, 2010). Several cancer biomarkers are listed in Table 1. The analysis of biomarkers in body fluids such as blood, urine and others is one of the methods applied in the detection of the disease. Multi-marker profiles, covering both the presence and the concentration level of markers, can be essential for the diagnosis of early disease onset. These methods should provide information to assist clinicians in making successful treatment decisions and increasing patient survival rates (Tothill, 2009). A range of biomarkers have been identified for different types of cancer. These include DNA modifications, RNA, proteins (enzymes and glycoproteins), hormones and related molecules, molecules of the immune system, oncogenes and other modified molecules. Several biomarkers are currently being studied, including genes and proteins; however, few of them have gained importance in routine clinical cancer testing because of their complexity. The development of protein-based biomarkers for biosensor use in cancer diagnosis is more attractive than genetic markers, owing to protein abundance and ease of recovery, making for a cost-effective route to the development of point-of-care devices (Li et al., 2010).
Biosensors in cancer disease
Existing methods of screening for cancer are heavily based on cell morphology using staining and microscopy, which are invasive techniques. Furthermore, tissue removal can miss cancer cells at the early onset of the disease. Biosensor-based detection becomes practical and advantageous for cancer clinical testing, since it is faster, more user-friendly, less expensive and less technically demanding than microarray or proteomic analyses. However, significant technical development is still needed, particularly for protein-based biosensors. For cancer diagnosis, multi-array sensors would be beneficial for multi-marker analysis. A range of molecular recognition molecules have been used for biomarker detection, antibodies being the most widely used. More recently, synthetic (artificial) molecular recognition elements such as nanomaterials, aptamers, phage display peptides, binding proteins and synthetic peptides, as well as metal oxide materials, have been fabricated as affinity materials and used for analyte detection and analysis (Sadik et al., 2009; Khati, 2010). Antibodies (monoclonal and polyclonal) have been applied in cancer diagnostic tests targeting cancer cells and biomarkers. Polyclonal antibodies can be raised against any biomarker or cell type and, with the introduction of high-throughput techniques, applying these molecules in sensors has been successful. The use of monoclonal antibodies, however, results in more specific tests; the drawbacks are that monoclonal antibodies are more difficult to maintain and can be more expensive than polyclonal antibodies (Huang et al., 2010). Replacing natural biomolecules with artificial receptors or biomimics has therefore become an attractive area of research in recent years. The advantages of using these molecules are that they are robust, more stable, less expensive to produce and can be modified easily to aid immobilization on the sensor surface, as well as to add labels as markers for detection (Liu et al., 2007). Such molecules can be synthesized after selection from combinatorial libraries with higher specificity and sensitivity compared to antibody molecules. For cancer biomarker analysis, bioaffinity-based electrochemical biosensors are usually applied to detect gene mutations and protein biomarkers. Electrochemical affinity sensors based on antibodies offer great selectivity and sensitivity for early cancer diagnosis, and these include amperometric, potentiometric and impedimetric/conductimetric devices. Amperometric and potentiometric transducers have been the most commonly used, but much attention in recent years has been devoted to impedance-based transducers since they are classified as label-free detection sensors. However, much of the technology is still at the research stage (Lin & Ju, 2005; Wang, 2006). Besides antibody-based devices, electrochemical devices have been developed based on DNA hybridization and used for cancer gene mutation detection. In this type of device a single-stranded DNA sequence is immobilized on the electrode surface, where DNA hybridization takes place (Ahmed, 2008). ELISA-based assays conducted on the electrode surface are the most frequently used techniques for the analysis of cancer protein markers, such as CEA.
In this method the antibody (or antigen) is labeled with an enzyme such as horseradish peroxidase (HRP) or alkaline phosphatase (AP), which then catalyzes the conversion of an added substrate to produce an electroactive species that can be detected with an electrochemical transducer. Electrochemical detection of rare circulating tumor cells has the potential to provide clinicians with a standalone system to detect and monitor changes in cell numbers throughout therapy, conveniently and frequently, for efficient cancer treatment (Chung et al., 2011). Many commercially available platforms use fluorescence labels as the detection system; however, the instruments used for signal readout are usually expensive and are more suitable for laboratory settings. As an example, the Affymetrix gene chip (Affymetrix Inc., Santa Clara, USA) can be used for cancer screening and cancer gene identification. Other biosensor platforms such as grating couplers, resonant mirrors and surface plasmon resonance based systems have also been used for cancer biomarker diagnosis. These are classified as label-free, real-time affinity reaction detection systems, and different SPR-based biosensors have been developed for cancer marker detection based on the above optical systems (Tothill, 2009). Recently, microcantilever-based sensors have also been applied to early-stage diagnosis of hepatocellular carcinoma (Liu et al., 2009b). In spite of the progress achieved in cancer biosensing, point-of-care testing is not yet available. In order to achieve this goal, challenges must be overcome, such as the development of reproducible biomarker assays; improvement in recognition ligands; development of multi-channel biosensors; advances in sample preparation; device miniaturization and integration; development of more sensitive transducers; microfluidics integration; advanced manufacturing techniques; and cost reduction (Rasooly & Jacobson, 2006).
Conclusion
A precise diagnosis of a disease is essential for the successful treatment and recovery of patients suffering from it. Diagnostic methods must be simple, sensitive and able to detect multiple biomarkers that exist at low concentrations in biological fluids. Biosensors can fulfill these requirements. However, biosensor devices need to be further developed and improved to face these new challenges and allow, for example, multiplex analysis of several biomarkers, for which arrays of sensors need to be developed on the same chip. Biosensors are firmly established for application in clinical chemical analysis. Biosensors for the measurement of blood metabolites such as glucose, lactate, urea and creatinine, using both electrochemical and optical modes of transduction, are commercially developed and used routinely in the laboratory, in point-of-care settings and, in the case of glucose, for self-testing. While immunosensors have difficulty competing with traditional immunoassays, mainly on sensitivity requirements, they hold promise for testing where some sensitivity can be sacrificed for improved ease of use and faster time to result, such as in near-patient testing for cardiac and cancer markers. Although biosensors are used for several clinical applications, few have been developed for cardiovascular and cancer-related clinical testing. The development of molecular tools, both genomic and proteomic, to profile tumors and produce molecular signatures, based on genetic and epigenetic signatures, changes in gene expression and protein profiles and protein post-translational modifications, has opened new opportunities for utilizing biosensors in cancer testing. Harnessing the potential of biosensors is challenging because of cancer's complexity and diversity. Successful development of biosensor-based cancer testing will require continued development and validation of biomarkers and development of ligands for those biomarkers, as well as continued development of sample preparation methods and multi-channel biosensors able to analyze many cancer markers simultaneously. The use of biosensors for cancer clinical testing may increase assay speed and flexibility, enable multi-target analyses and automation, and reduce the costs of diagnostic testing. Biosensors have the potential to deliver molecular testing to the community health care setting and to underserved populations. Cancer biomarkers identified from basic and clinical research, and from genomic and proteomic analyses, must be validated. Ligands and probes for these markers can then be combined with detectors to produce biosensors for cancer-related clinical testing. Point-of-care cancer testing requires integration and automation of the technology as well as development of appropriate sample preparation methods (Rasooly & Jacobson, 2006). A clear direction for future work in biosensor research is molecular diagnostics. Improving the sensitivity of DNA biosensors toward single-molecule detection in an unamplified sample is an important goal. Achieving it will require enhancing the signal-to-noise ratio, improving the signal produced by the biochemical reaction or increasing the sensitivity of the transducer while reducing background noise. Ultrasensitive transducer technologies will be required.
Some recent examples of transduction modes with enhanced sensitivity include microcantilevers for the detection of mass changes upon a binding event and quartz crystal microbalances capable of monitoring the formation and rupture of chemical bonds by sensing acoustic emissions; the latter has demonstrated sensitivity sufficient to detect a single virus particle. Increasing the scale of arrays to obtain more complete and rapid DNA sequencing information is another area of focus, and improvements in this area may ultimately be limited by the resolution of the detection transducer. DNA chips are being incorporated into total analysis systems, including microfluidics and the biosensor on a single structure. In the future these systems should require no sample preparation and should include a user-friendly handling system, chemical analysis and signal acquisition capabilities. Central to the development of lab-on-a-chip analysis systems will be homogeneous sensing formats and microfabrication technologies for DNA analysis. One recent step towards a homogeneous assay has been the development of synthetic polymeric probes that emit fluorescence only after hybridization to native DNA targets, allowing monitoring of hybridization in real time without the need for separation steps. Further development and improvement of nanotechnologies will be needed to produce nanoscale devices with expanded array sizes using reduced sample volumes. Such devices for the rapid determination of a disease could be especially useful for point-of-care applications. However, the cost and quality control of these devices must be managed strictly for accurate devices to gain popular acceptance. Homogeneous assay formats, removing the need for sample preparation and amplification steps, and mass fabrication will be important in lowering cost. Molecular biology will play a central role in the future of biosensor development, for example to improve biocomponent stability and for the development of aptamers. The highly reproducible synthetic approach and ease of immobilization of aptamers hold great promise for the custom design of future biosensors for molecular diagnostics (D'Orazio, 2003). Future innovation in biosensor technology to include biomarker patterns, software and microfluidics can give these devices high potential for health applications. The use of nanomaterials in the development of sensors for biomarker diagnosis will make these devices highly sensitive and more applicable for point-of-care early diagnosis. Early diagnosis will help increase the survival rate of patients, and successful development of biosensors for disease diagnosis and monitoring will require appropriate funding to move the technology from research through to the realization of commercial products. Biosensor research and development over the past decades have demonstrated that it is still a relatively young technology. The rationale behind the slow and limited technology transfer can be attributed to cost considerations and some key technical barriers. Many of the more recent major advances had to await miniaturization technologies that are only now becoming available through research in the electronic and optical solid-state circuit industries. Analytical chemistry has changed considerably, driven by automation, miniaturization, and system integration with high throughput for multiple tasks.
Such requirements pose a great challenge to biosensor technology, which is often designed to detect a single target analyte or only a few. Successful biosensors must be versatile enough to support interchangeable biorecognition elements, and in addition miniaturization must be feasible to allow automation for parallel sensing with ease of operation at a competitive cost. The future is very bright for biosensors. These advancements will, however, require a concerted multi-disciplinary approach for the sensor systems to successfully make the very big jump from the research and development laboratory to the marketplace. The combination of several new techniques derived from physical chemistry, molecular biology, biochemistry, thick- and thin-film physics, materials science and electronics, together with the necessary expertise, holds promise for the development of viable, clinically useful biosensors.
"year": 2011,
"sha1": "95224e8dc979cc9bf8ac9419b1c2477626c3cda2",
"oa_license": "CCBYNCSA",
"oa_url": "https://cdn.intechopen.com/pdfs/16477.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a6093aaf22039fc531c909d766036ff8a75b073e",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Prediction of Spontaneous Protein Deamidation from Sequence-Derived Secondary Structure and Intrinsic Disorder
Asparagine residues in proteins undergo spontaneous deamidation, a post-translational modification that may act as a molecular clock for the regulation of protein function and turnover. Asparagine deamidation is modulated by protein local sequence, secondary structure and hydrogen bonding. We present NGOME, an algorithm able to predict non-enzymatic deamidation of internal asparagine residues in proteins in the absence of structural data, using sequence-based predictions of secondary structure and intrinsic disorder. Compared to previous algorithms, NGOME does not require three-dimensional structures yet yields better predictions than available sequence-only methods. Four case studies of specific proteins show how NGOME may help the user identify deamidation-prone asparagine residues, often related to protein gain of function, protein degradation or protein misfolding in pathological processes. A fifth case study applies NGOME at a proteomic scale and unveils a correlation between asparagine deamidation and protein degradation in yeast. NGOME is freely available as a webserver at the National EMBnet node Argentina, URL: http://www.embnet.qb.fcen.uba.ar/ in the subpage “Protein and nucleic acid structure and sequence analysis”.
Introduction
Protein deamidation is a post-translational modification in which the side chain amide group of a glutamine or asparagine residue is transformed into an acidic carboxylate group [1]. Non-enzymatic deamidation of asparagine is faster than that of glutamine and hence has higher physiological significance [2], being involved in processes such as apoptosis, brain development and aging [3]. Deamidation often [3], but not always [4], leads to loss of protein function. Deamidation rates in proteins vary widely, with half-times for particular Asn residues ranging from several days to years. Deamidation may thus act as a molecular clock for protein function and turnover [5] and may take place during recombinant protein purification and during storage of therapeutic proteins.
Non-enzymatic deamidation at internal asparagine residues in proteins occurs near neutral pH through an intramolecular rearrangement and involves two steps [1,3]. In the first, rate-limiting step, the backbone amide nitrogen atom of the first amino acid residue immediately C-terminal to the Asn (referred to from now on as the N+1 amino acid) attacks the carbonyl carbon of the asparagine or glutamine side chain, forming a cyclic imide (Fig 1).
In unstructured peptides, the kinetics of succinimide formation is mainly affected by the identity of the N+1 residue, showing an inverse relationship with the bulkiness of the amino acid side chain [6]. When glycine is present at the N+1 position of an asparagine residue, the deamidation half-life is of the order of days, while the presence of phenylalanine slows the deamidation half-life to years. In proteins, structure strongly affects the kinetics of succinimide formation by imposing conformational constraints on both the amide nitrogen and the asparagine side chain [7]. In the second reaction step (Fig 1), the cyclic imide is hydrolyzed at either the alpha or beta carbonyl group, yielding iso-aspartate (isoAsp) and aspartate, in a ratio of approximately 3:1 in model peptides. Most organisms indeed possess the enzyme L-isoAsp-O-methyltransferase (PIMT), which specifically restores aspartic acid residues from iso-aspartic acid residues; loss of this enzyme has harmful consequences [3]. Since the cyclic intermediate can undergo racemization, the final reaction products include both L- and D-Asp and isoAsp. In any case, upon asparagine deamidation a new acidic group appears in the protein. Asparagine glycosylation is a common post-translational modification that can occur in eukaryotes, bacteria and archaea at N[^P][ST] sequences. Glycosylated asparagines are not prone to deamidation due to the attached glycan.
In contrast with the ubiquity and importance of asparagine deamidation, there is currently no publicly available algorithm for the prediction of Asn deamidation in proteins in general. A structure-based algorithm was published [7], showing good predictive power but limited to those proteins with known three-dimensional structure, a serious limitation considering that structure has been resolved only for a small subset of proteins. To our knowledge this algorithm is no longer available online. A second structure-based algorithm was recently reported, but its scope is limited to antibody variable regions [8]. We present NGOME, a sequence-based method for the prediction of asparagine deamidation from predicted structural features. We envision that NGOME will be useful for the systematic evaluation of whole-proteome data and for the study of intrinsically disordered proteins, for which structural data are scarce. The analysis of specific case studies shows that NGOME can give insights into the spontaneous deamidation of individual proteins, as well as link deamidation with protein turnover in whole proteomes.
Computational definition of features modulating deamidation
In the absence of secondary and tertiary structure, asparagine deamidation rates are governed by the identity of the N+1 amino acid [7]. In model peptides, the Asn-Gly dipeptide is by far the fastest to deamidate, with bulky N+1 side chains generally slowing down the reaction. On the other hand, many conformational constraints decreasing Asn deamidation rates have also been identified, including alpha helix formation [9] and hydrogen bond formation by the Asn side chain, the N+1 backbone amide and the neighboring residues [7]. These conformational constraints slow down deamidation by restricting the nucleophilic attack of the backbone amide on the side chain carbonyl carbon of the asparagine. During the development of NGOME, we evaluated a larger set of conformational constraints, including the propensity to adopt beta sheet conformation. However, only alpha helix formation and intrinsic disorder were found useful to predict deamidation. In NGOME, conformational constraints are weighted through two main factors, the tendency to adopt alpha helix and intrinsic disorder. The input of NGOME is a protein sequence, and NGOME calculates a predicted half-time, t50(NGOME), for each internal Asn in the sequence from the following quantities. t50(sequence) is the deamidation half-time of the N,N+1 dipeptide in model peptides [7]. As in the quantitative analysis of amide hydrogen exchange in proteins, the observed deamidation half-time is the product of the intrinsic deamidation half-time observed in unstructured peptides and a protection factor that describes the slowing down of deamidation by conformational constraints [7]. H is 1 if the Asn residue is in a helix as predicted by JPred [10] and 0 otherwise. D is the disorder score for the Asn residue from the IUPRED algorithm and ranges from 0 (fully ordered) to 1 (fully disordered) [11]. IUPRED scores correlate with backbone dynamics as measured by NMR [12] and are used here as a proxy for local hydrogen bond formation. wH and wO are empirical weights (see below for their estimation).
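The exact functional form of the protection factor is not reproduced in the text above, so the sketch below only illustrates the kind of calculation described: an intrinsic, sequence-only half-time multiplied by a protection factor that grows with predicted helix content (H) and predicted order (1 − D), using the weights wH and wO quoted in the training section. The exponential combination of the two terms is an assumption made here for illustration and may differ from the published NGOME equation.

```python
import math

# Empirical weights quoted in the training section (wH = 0.571, wO = 2.989).
W_H, W_O = 0.571, 2.989

def t50_ngome(t50_sequence_days: float, helix: int, disorder: float,
              w_h: float = W_H, w_o: float = W_O) -> float:
    """Predicted deamidation half-time for one Asn residue.

    t50_sequence_days: intrinsic half-time of the Asn,N+1 dipeptide.
    helix: 1 if the residue is predicted helical (JPred), else 0.
    disorder: IUPRED disorder score in [0, 1].

    ASSUMED form: protection factor P = exp(w_h*H + w_o*(1 - D)), so that
    full order plus helix gives maximal protection, while a fully
    disordered, non-helical residue gives P = 1 (no protection).
    """
    protection = math.exp(w_h * helix + w_o * (1.0 - disorder))
    return t50_sequence_days * protection

# Example: an Asn-Gly dipeptide (fast intrinsic deamidation, ~1 day)
# in a predicted helix within a well-ordered region.
print(f"{t50_ngome(1.0, helix=1, disorder=0.1):.1f} days")
```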
Dataset
We compiled a database of 281 asparagine residues (67 positives and 214 negatives) in 39 proteins to train NGOME (see S1 Table in S1 File). We collected from the literature experimental reports of deamidation of Asn residues in proteins using mass spectrometry or Edman sequencing. Since deamidation rates depend strongly on pH and temperature, we only included experiments at neutral or slightly basic pH and up to 313 K. An Asn residue was considered positive for deamidation if an unequivocal change to an aspartic or isoaspartic residue was observed. If quantitative data were available, we labeled the Asn residue as positive if conversion was at least 50%, with a half-time <100 days. An Asn residue was considered a negative for deamidation if deamidation was tested for and absent, or had a half-time >100 days. Asparagine residues for which direct experimental evidence was not obtained were not taken into account. Proteins obtained from natural sources were included in the dataset only if sample age was reported.
Algorithm training
We trained NGOME by randomly splitting the dataset into training and test sets 100 times.
The splitting was performed so that both training and test sets had a proportion of experimental positives and negatives as close as possible to the proportion in the full dataset. For each splitting, we selected wH and wO to maximize the area under the receiver operating characteristic (ROC) curve for the training set. For the test set, the area under the ROC curve for NGOME was larger than for sequence-based prediction 97 out of 100 times (average difference 0.0334 ± 0.0096). Finally, we selected the average values of wH (0.571) and wO (2.989) for NGOME. The performance of NGOME is shown in Fig 2, in comparison with predictions using t50(sequence) only. We computed t50 for all Asn in the dataset and generated a ROC curve by considering as positives Asn residues with different values of t50 (Fig 2A). The area under the ROC curve is larger for the NGOME predictions (green line, 0.9640) than for the sequence-based predictions (purple line, 0.9270) (p-value 6 × 10⁻³ [13]). NGOME also performs better for threshold values yielding few false positives (Fig 2A, inset).
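A minimal sketch of the weight-selection procedure described above, assuming standard scikit-learn utilities: the dataset is repeatedly split into stratified training and test sets, and for each split a simple grid search picks the (wH, wO) pair that maximizes the training-set ROC AUC, scoring residues by the negative predicted half-time so that faster-deamidating residues rank higher. The function names, the 50/50 split size, the grid ranges and the helper predict_t50 (which reuses the assumed form from the earlier sketch) are illustrative assumptions, not the published implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

def predict_t50(t50_seq, helix, disorder, w_h, w_o):
    # Assumed protection-factor form (see the earlier sketch).
    return t50_seq * np.exp(w_h * helix + w_o * (1.0 - disorder))

def select_weights(t50_seq, helix, disorder, y, n_splits=100, seed=0):
    """Pick (w_h, w_o) maximizing training ROC AUC, averaged over splits.

    All inputs are numpy arrays over the Asn residues in the dataset,
    with y holding 1 for experimental positives and 0 for negatives.
    """
    grid = np.arange(0.0, 5.01, 0.25)
    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.5,
                                      random_state=seed)
    chosen = []
    for train, _test in splitter.split(t50_seq, y):
        best = max(
            ((w_h, w_o) for w_h in grid for w_o in grid),
            key=lambda w: roc_auc_score(
                y[train],
                -predict_t50(t50_seq[train], helix[train],
                             disorder[train], *w)),
        )
        chosen.append(best)
    return np.mean(chosen, axis=0)  # average w_h, w_o over all splits
```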
We next considered Asn-Gly dipeptides, which deamidate the fastest in the absence of structure and are thus more likely to be of biological significance. However, only 52 of the 64 Asn-Gly dipeptides in our dataset are positive, confirming that structure plays an important role in determining Asn-Gly deamidation rates. t50(sequence) cannot discriminate between positives and negatives in Asn-Gly dipeptides because it only takes the identity of the N+1 residue into account. On the other hand, Fig 2B shows that NGOME can discriminate between positive and negative Asn-Gly dipeptides. The area under the ROC curve is 0.7051 for the NGOME predictions (green line), larger than the random value of 0.5 for sequence-based prediction (purple line) (p-value 9 × 10⁻³ [13]).
To sum up, NGOME identifies fast deamidating Asn residues better than sequence-based predictions for both the full database and fast deamidating Asn-Gly dipeptides.
Online Implementation
We implemented NGOME online at www.embnet.qb.fcen.uba.ar, in the subpage "Protein and nucleic acid structure and sequence analysis". The user of NGOME provides as query a protein sequence in fasta format. A warning appears if the query sequence has few related sequences and the JPred secondary structure prediction was performed using only the query sequence.
The output of the NGOME server is both numerical and graphical. The first two tables in the output list the following attributes for all Asn residues in the query sequence:
(1) identity of the N+1 residue;
(2) predicted status as positive or negative, using a threshold giving a false positive rate of 5%;
(3) predicted value of t50(NGOME);
(4) predicted value of t50(sequence);
(5) the corresponding protection factor P = t50(NGOME)/t50(sequence);
(6) predicted percent deamidation after a user-given time (default: 2 days), assuming an exponential decay;
(7) whether the asparagine belongs to an N[^P][ST] sequence and thus may be glycosylated.
The first table is sorted by residue number, while the second table is sorted by t50(NGOME). The purpose of these tables is to pinpoint deamidation-prone asparagines in the protein sequence of interest.
A second pair of tables includes the same data for all positions of the query sequence, in the hypothetical case that they were occupied by an asparagine. The third table is sorted by residue number, while the fourth table is sorted by t50(NGOME). This experiment tests whether an asparagine residue introduced by a point mutation at a particular position would deamidate according to NGOME. The secondary structure and disorder predictions are taken from the wild-type sequence. The JPred secondary structure predictions are based on an alignment of sequences homologous to the query sequence [10] and are thus unlikely to be strongly affected by a point mutation in the query. The IUPRED predictions are based on predicted residue interactions for a 21-residue window and are also unlikely to be strongly affected by a point mutation in the query. Nevertheless, if the user identifies a sequence position of potential interest, we suggest running NGOME for the mutant sequence. Two figures show the logarithm of t50(NGOME), t50(sequence) and the protection factor as a function of residue number (see the case studies section below for representative examples). Sequence positions with an Asn in the query sequence are highlighted. This visualization tool locates regions of the protein where the structure can protect an asparagine from deamidation. Finally, a third figure shows the predicted percent deamidation after a user-given time (default: 2 days), for both individual asparagines and the protein as a whole. This calculation assumes that the deamidation reactions of individual Asn residues are independent. t50(Protein) can be useful in tuning the lifetime of a particular protein, rather than focusing on individual asparagines.
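The percent-deamidation and whole-protein figures described above follow directly from first-order (exponential) decay with independent asparagines, so the calculation can be reproduced as in the sketch below. The example half-times are arbitrary illustration values; only the formulas (fraction deamidated = 1 − 2^(−t/t50) and 1/t50(Protein) = Σ 1/t50,i) follow from the assumptions stated in the text.

```python
def percent_deamidated(t50_days: float, t_days: float = 2.0) -> float:
    """Percent deamidation of one Asn after t_days, assuming exponential decay."""
    return 100.0 * (1.0 - 2.0 ** (-t_days / t50_days))

def t50_protein(t50_list: list[float]) -> float:
    """Half-time for the protein as a whole, assuming each internal Asn
    deamidates independently: survival(t) = prod_i 2^(-t/t50_i), so the
    effective rate constants (1/t50) simply add."""
    return 1.0 / sum(1.0 / t for t in t50_list)

# Illustrative half-times (days) for three asparagines in one protein:
t50s = [3.2, 5.1, 240.0]
for t50 in t50s:
    print(f"t50 = {t50:6.1f} d -> {percent_deamidated(t50):.1f}% deamidated after 2 d")
print(f"t50(Protein) = {t50_protein(t50s):.2f} d")
```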
The server also includes extensive documentation about protein deamidation, a guide to interpret the results and the analysis of four case studies of medical and biotechnological interest: superoxide dismutase, BCL-xL protein, human interferon beta and Trastuzumab heavy chain.
Superoxide dismutase
Recently, a relationship has been established between several missense N to D mutations in the Cu, Zn Superoxide Dismutase (SOD) protein and the onset of amyotrophic lateral sclerosis (ALS) [14]. Interestingly, the N to D replacement can also be obtained by the spontaneous deamidation of an asparagine residue. The SOD protein is expected to endure a lifetime on the order of 1.4 years when traversing an axon that is 1 m in length at a rate of 2 mm/day. This protein lifetime allows slow deamidation events to take place and suggests that deamidation could explain in some cases the sporadic onset of ALS. Mutation of N26, N131 and N139 to D in the recombinant enzyme decreases SOD conformational stability and accelerates SOD fibrillation [14]. This suggests that deamidation of N26, N131 and/or N139 may lead to loss of SOD function and increase the concentration of cytotoxic oligomeric species. N26 is the most deamidation-prone residue in SOD, and 23% of the protein purified from human red blood cells shows aspartate instead of asparagine at position 26 [14]. NGOME successfully predicts N26 as the fastest deamidating asparagine in SOD, with an estimated t50 of 10.3 days (Fig 3A). We would like to remark that N26 shows a relatively large protection factor, indicating that local order in the SOD structure (Fig 3C) leads to nine-fold slower deamidation of this key residue compared to an unstructured peptide (Fig 3B). NGOME also predicts that the second- and third-fastest deamidating asparagine residues in SOD are N131 and N139, with estimated t50 values of 237 and 113 days respectively. It should be noted that N131 and N139 are predicted to deamidate slowly, yet at a rate that is consistent with the SOD lifecycle and with late-onset ALS. We conclude that NGOME is able to pinpoint deamidation-prone asparagines that could be responsible for SOD destabilization and misfolding events often observed in late-onset ALS.
BCL-xL
Deamidation is generally regarded as a degradation process by which a protein loses its function. However, in some cases deamidation regulates key events in living cells. A paradigmatic case of the role of deamidation in the regulation of protein activity is BCL-xL, a component of the apoptotic response to DNA-damaging antineoplastic agents [15]. In basal conditions, the BCL-2 family members (BCL-2 and BCL-xL) block the pro-apoptotic activity of BH3 domain-only proteins. Cisplatin antineoplastic drugs cause DNA damage, which induces deamidation of BCL-xL at conserved asparagine residues 52 and 66 in an unstructured loop of the protein [15]. This overrides the anti-apoptotic activity of BCL-xL, the Bak and Bax proteins are activated and the caspase cascade is initiated with the consequent cellular apoptosis. NGOME correctly predicts that the two fastest deamidating asparagines in human BCL-xL are residues 52 and 66 (Fig 4A). In this case, NGOME accurately identified the molecular effectors (N52 and N66) that trigger apoptosis. N52 has an estimated t50 of 3.2 days and N66 an estimated t50 of 5.1 days. NGOME helps visualize the interplay between structure and sequence in protein deamidation. A sequence-based prediction would suggest three deamidation-prone asparagines in BCL-xL (Fig 4A, purple line). However, structure in the C-terminal half of the protein (Fig 4B and 4C) slows down deamidation of N183 relative to N52 and N66 (Fig 4A, green line).
Interferon beta
Recombinant Interferon beta is widely used for the treatment of relapsing-remitting multiple sclerosis [16]. Interferon beta deamidation is a slow process that does not generally impact the activity of the wild-type protein under physiological conditions. However, recombinant Interferon beta is a commercial drug that must remain nearly unchanged for a long time, usually two years, during the product lifecycle. In this scenario, slow asparagine deamidation gains critical relevance. It has been experimentally observed that asparagine 46 undergoes deamidation on a timescale of days to months [16], and in this case deamidation increases the biological activity of the drug [16]. NGOME predicts that all asparagines in Interferon beta deamidate slowly (Fig 5). Nevertheless, the top-ranked asparagine 46 is predicted to deamidate with a t50 of 23.1 days, in agreement with experimental data. This is due to a strong sequence propensity (Fig 5A, purple line) that overrules an overall strong protection from deamidation by conformational constraints (Fig 5B and 5C). The identification of deamidation-prone residues in biotechnological products is a useful tool in formulation development.

Trastuzumab heavy chain

Therapeutic monoclonal antibodies are by far the fastest growing segment of biological drugs. As with every biotherapeutic drug, the structural identity of the protein should be preserved within narrow ranges during the whole lifecycle of the product. However, it is not possible to exclude all possible degradation-prone sequences, including asparagine deamidation, from the whole protein or even from the CDR regions. Trastuzumab is a commercial therapeutic monoclonal antibody used to treat Her2-positive breast cancer. Residues N55 in the CDR2 region and N388, N393 or N394 in the CH3 domain of the Trastuzumab heavy chain have been reported to deamidate [17,18]. NGOME correctly predicts the deamidation of N55 in the CDR2 region of the heavy chain of Trastuzumab and points at N388 as the main site of deamidation in the CH3 domain (Fig 6). N319 is also predicted to be deamidation-prone, but has not been reported to deamidate. Monoclonal antibodies are heterotetrameric proteins formed by two heavy and two light chains, linked by intra- and interchain disulfide bridges. Since quaternary structure and disulfide bonds are not considered by NGOME, we find the correlation between experiment and prediction encouraging.
Protein turnover in yeast
Our last case study does not deal with an individual protein, but with protein dynamics at an organismic scale. The turnover rates of over 3700 yeast proteins during exponential growth have been reported [19] and short and long-lived proteins were analyzed for enrichment of various physical attributes. A significant enrichment of serine and asparagine in short-lived proteins was observed [19]. The enrichment of serine in short-lived proteins was rationalized by relating it to serine-rich sequences that target proteins for degradation, such as the PEST sequence. The enrichment of asparagine in short-lived proteins was left unexplained.
We have looked for a correlation between the logarithm of the experimental protein half-lives and the logarithm of the t50(Protein) values provided by our algorithm. We performed NGOME predictions for all proteins in [19] (see S2 Table in S1 File). We used for the correlation only the in vivo half-life values with a positive sign [19]. Fig 7 shows that short-lived proteins have shorter t50(Protein) values for deamidation. The R-value is 0.2, indicating that deamidation is not the main determinant of protein turnover in vivo. However, the p-value is 10⁻²⁸, indicating a statistically strong correlation.
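Given the two sets of values, the correlation reported above can be reproduced with a standard log-log Pearson test, as sketched below. The sketch assumes the per-protein experimental half-lives and the NGOME t50(Protein) predictions are available as paired arrays; the variable names and the commented data-loading step are placeholders, not part of the published analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def log_log_correlation(halflife_days, t50_protein_days):
    """Pearson correlation between log10 experimental protein half-lives
    and log10 predicted whole-protein deamidation half-times, keeping
    only proteins with positive experimental values."""
    halflife = np.asarray(halflife_days, dtype=float)
    t50 = np.asarray(t50_protein_days, dtype=float)
    mask = (halflife > 0) & (t50 > 0)
    r, p = pearsonr(np.log10(halflife[mask]), np.log10(t50[mask]))
    return r, p

# halflife_days, t50_days = load_paired_values(...)  # placeholder loader
# r, p = log_log_correlation(halflife_days, t50_days)
# print(f"R = {r:.2f}, p = {p:.1e}")
```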
We propose two possible explanations for this correlation. The first explanation starts with the fact that deamidation leads in most cases to loss of protein function and decreases organism fitness [20]. Thus, there is a selection pressure against deamidation, which is weaker for short-lived proteins because they degrade faster than they deamidate. A second explanation is that spontaneous asparagine deamidation may act as a degradation signal for a subset of yeast proteins. In the case of yeast, degradation may be mediated by isoaspartyl-specific metalloproteases [21]. These two explanations are not mutually exclusive. In our opinion, the bottom line is that NGOME is a useful tool to study the relationship between spontaneous protein deamidation and in vivo protein turnover.
Discussion

NGOME is able to predict non-enzymatic asparagine deamidation in proteins with considerable accuracy for a sequence-based method (Fig 2). NGOME development was limited by the presence of only 39 proteins in the training dataset. The identification of deamidated asparagines within complete proteins is elusive because deamidation is a sub-daltonic modification (mass difference 0.98 Da). Moreover, deamidation is a spontaneous process that begins right after translation. Thus, the extraction of bona fide kinetic data is not straightforward, since the age of proteins obtained from natural sources is hard to evaluate precisely. We expect that future analyses of recombinant protein samples of known age with high-resolution mass spectrometers, such as Fourier transform, Orbitrap and Q-TOF instruments, will increase the size of the training dataset and allow for improved algorithm performance. We envision the use of the current version of NGOME in proteome-wide studies of protein deamidation as well as in suggesting protein point mutations that improve the longevity of therapeutic proteins.
Enzymatic Asn deamidation, non-enzymatic deamidation at amino- or carboxyl-terminal Asn residues and deamidation at low pH are beyond the scope of NGOME since they take place through different mechanisms. Asparagine deamidation may be modulated by structural factors at the Asn or its proximal residues that are not included in the algorithm, such as dimerization/oligomerization, interactions with other molecules such as cofactors, glycosylation and other post-translational modifications. NGOME predictions should be used with care if such factors are suspected to be present.
Four of our case studies deal with specific proteins and show the relevance of deamidation reactions happening on different timescales and in different conditions (Figs 3, 4, 5 and 6). For example, superoxide dismutase deamidates slowly, on a timescale of months, while BCL-xL deamidates much faster, on a timescale of hours to days. Deamidation of BCL-xL regulates a biological process in the cell, while deamidation of Interferon beta and the Trastuzumab antibody is mainly of interest for biotechnology and in a test tube. Regardless of timescale and milieu, the algorithm yielded a relative ranking of deamidation propensity for the asparagine residues of a sequence that reflects the experimental tendencies. NGOME can also illustrate how the interplay between sequence and structure modulates the process. This valuable information can be gained in the absence of high-resolution structural data, information that requires considerable experimental work and is unavailable for many proteins. Since NGOME requires only a protein sequence as an input and not a three-dimensional structure, it can be used to perform predictions at the proteome level (Fig 7). In our fifth case study, this led to testable hypotheses for the role of spontaneous asparagine deamidation in protein turnover and evolution.
Supporting Information

S1 File. Includes S1 Table and S2 Table. S1 Table: Experimental reports of spontaneous deamidation of internal Asn residues in proteins. S2 Table: Prediction of spontaneous deamidation and experimental lifetimes for all proteins in [19]. (DOCX)
"year": 2015,
"sha1": "03ac07f216938f1f90ad87787c5edbd664de7065",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0145186&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03ac07f216938f1f90ad87787c5edbd664de7065",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
50 Years of the steric-blocking mechanism in vertebrate skeletal muscle: a retrospective
Fifty years have now passed since Parry and Squire proposed a detailed structural model that explained how tropomyosin, mediated by troponin, played a steric-blocking role in the regulation of vertebrate skeletal muscle. In this Special Issue dedicated to the memory of John Squire it is an opportune time to look back on this research and to appreciate John’s key contributions. A review is also presented of a selection of the developments and insights into muscle regulation that have occurred in the years since this proposal was formulated.
Introduction
The steric-blocking mechanism proposed in 1973 by John Squire and the author (Parry and Squire 1973) explained how the regulation of vertebrate skeletal muscle might be facilitated by the combined action of tropomyosin and troponin. Since 50 years have now passed since this research was undertaken (1971/1972) it seems an opportune time to look back on the events of the day that led to those ideas and to the interactions between the authors that contributed to the success of this venture. It is also of interest to consider a selection of the most important developments in this field that have occurred in subsequent years.
In order to put these events in context it may be pertinent to provide here a brief account of the personal history of both authors since this is clearly related to their ability to have collaborated so successfully on the steric-blocking mechanism and, indeed, on many other matters over the years. I first met John Squire in 1963 when he was 18 years old and in his first year of a physics degree at King's College London (KCL). In turn, I was in my first year of a PhD in biophysics, also at KCL. In order to gain a little income, I undertook some student demonstrating in first year physics laboratory classes and this proved to be the occasion of our first meeting. When my PhD studies were completed under the supervision of Arthur Elliott, it was John Squire who sat down at "my" desk to undertake his own PhD under the same supervisor. This was the second in what proved to be a series of intersections in our respective careers that were destined to play a major part in both of our lives.
During my postdoctoral fellowship with Bruce Fraser and Tom MacRae at the CSIRO Division of Protein Chemistry in Melbourne, Australia, where I worked primarily on keratin structures, I honed my skills in fibre diffraction and model building, both of which were, with hindsight, to play a part in our subsequent research on the steric-blocking mechanism. After Melbourne I spent 2 years (1969-1971) with Carolyn Cohen and Don Caspar at the Children's Cancer Research Foundation in Boston working on the crystal structures of tropomyosin. Tropomyosin crystals illustrated that dynamic interactions in muscle were probable and that 40 nm long tropomyosin molecules were firmly bonded end-to-end to form an open meshwork of supercoiled filaments (unit cell diagonal 40.2 nm with a variation of only 0.2%). Our work on the molecular rearrangements of tropomyosin in the crystal lattice caused by troponin also illustrated that in vivo some movement of tropomyosin in the thin filaments of muscle was likely (Cohen et al. 1971, 1972).
After John completed his PhD he took up a post-doctoral fellowship in Jack Lowy's laboratory in Aarhus, Denmark. The main themes of the research undertaken there were X-ray diffraction studies on relaxed and contracting muscle (Lowy 1972; Vibert et al. 1971, 1972) and the structural basis of contraction in muscle (Small and Squire 1972; Squire 1972). Back in the 1960s and early 1970s the use of relatively low intensity X-ray sources, allied to the need to take an X-ray pattern of contracting muscle only for the fraction of a second that the muscle was stimulated to contract, made the total exposure a very prolonged process. Subsequently, however, the use of high intensity sources generated from synchrotron radiation became commonplace and total exposure times were much reduced. A key observation in the early 1970s was that X-ray studies on contracting vertebrate skeletal muscle (Huxley 1970, 1972) and those in rigor (Vibert et al. 1972) showed significant intensity differences on the 2nd and 3rd layer lines of the actin diffraction pattern. These data, allied to the wide knowledge of every aspect of muscle structure and function that John had gained in both the UK and Denmark, were to prove invaluable in our subsequent collaborative venture.
In late 1971 I gained a post-doctoral fellowship to work with Andrew Miller at the Laboratory of Molecular Biophysics in the Department of Zoology, Oxford. At the same time John returned from Denmark and took up a post-doctoral Fellowship with Belinda Bullard, also in the Department of Zoology. Our careers thus intersected for the third time and it soon became apparent that our complementary experiences and expertise might allow us to model the mechanism by which vertebrate skeletal muscle was regulated in vivo. Importantly, and prior to us arriving in Oxford, we had independently come to the conclusion that the movement of tropomyosin in the thin filament was the key factor in regulation.
It is important to acknowledge from the outset that there were many relevant and recently published/in press observations that provided key inputs into our analyses. For example, it had been shown that seven actin monomers were stoichiometrically related to a single tropomyosin molecule (Ebashi and Endo 1968). It had also been shown that the axial period of tropomyosin in muscle measured by X-ray fibre diffraction was 38.5 nm (Huxley and Brown 1967) but that the length of tropomyosin deduced from the crystal studies was about 41.0 nm when supercoiling was removed. Further, O'Brien et al. (1971) had suggested that the X-ray observations on contracting and relaxed muscles might be explicable in terms of changes in the position of tropomyosin in the long-period grooves of the actin helix, though they gave no detailed analyses to support their ideas. However, optical diffraction studies that they undertook on electron micrographs of paracrystals of F-actin and also of thin filaments containing tropomyosin (and sometimes troponin) indicated an enhancement of the 2nd layer line diffraction when tropomyosin was present. They did not, however, indicate what structural changes might occur in the thin filaments during muscle contraction when tropomyosin was always present. Another key observation was that derived from the 3-D reconstructions of Moore et al. (1970) on actin filaments. These showed the sites where the heads of myosin, in the absence of ATP, bound to the actin filaments. These were all to prove crucial pieces of evidence in our model-building process, as were the X-ray data on relaxed and contracting muscle pertaining to the 2nd and 3rd layer lines of the actin diffraction pattern previously noted.
Thus, although the general form of thin filament structure was well defined in the early 1970s, no detailed analyses of the structural changes that might occur in the thin filaments during muscle regulation had been undertaken. This was the point from which our own research efforts were to commence and which were to climax, in a remarkably short time thereafter, in a model that has very largely withstood the passage of time.
Structural analyses
Significantly, in the steric-blocking mechanism paper (Parry and Squire 1973) the first topic that we discussed related to the Mg tactoids of tropomyosin (Caspar et al. 1969: axial period 39.5 nm): it was noted that these displayed 14 more-or-less equally-separated axial bands. This suggested that the tropomyosin sequence might contain a quasi-repeat of approximately this magnitude, i.e. 39.5/14 or 2.82 nm, a value equivalent to 2.82/0.1485, or 19 residues, in a coiled-coil conformation of the type adopted by tropomyosin. This was virtually identical to half the separation of consecutive actin molecules in the thin filament and corresponded directly to the 2.8 nm meridional reflection seen earlier by X-ray diffraction (Caspar et al. 1969). Although the complete sequence of tropomyosin was unknown in 1971, the idea that there might be a close correspondence between the period in tropomyosin and that in the actin helix was very suggestive to us that periodic interactions between tropomyosin and actin would occur, a concept consistent with tropomyosin lying in any of a number of positions in the grooves of the actin-containing thin filaments.
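As a simple numerical check of the arithmetic just described, the quasi-repeat length and the corresponding number of coiled-coil residues can be recomputed directly from the quoted values; the short Python sketch below does only that. The 5.5 nm actin separation along one long-period strand used in the last line is an assumption made for illustration, not a number taken from the 1973 paper.

# Quasi-repeat arithmetic, using values quoted in the text.
period_nm = 39.5 / 14          # Mg-tactoid period divided by 14 bands: ~2.82 nm
residues = period_nm / 0.1485  # axial rise per residue in a coiled coil: ~19 residues
half_actin = 5.5 / 2           # assumed actin separation along one strand, halved: ~2.75 nm
print(period_nm, residues, half_actin)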
In addition, we showed that if the molecules of tropomyosin (length 41.0 nm) were supercoiled with a radius of about 3.0 nm, as would be likely in the grooves of the actin helix, the axial period would be 39.5 nm, a value akin to the observed period of 38.5 nm. Any additional supercoiling of tropomyosin would necessarily reduce the 39.5 nm value to one closer to 38.5 nm and, indeed, it was shown that a supercoil of pitch 5.5 nm and radius 0.25 nm would be sufficient to take up the entire length of the molecule.
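A rough way to see how such supercoiling shortens the axial extent is to use the standard helix relation, in which a contour length L wound on a helix of pitch P and radius r projects onto an axial length L*P/sqrt(P^2 + (2*pi*r)^2). The sketch below applies this relation with an assumed groove pitch of about 71 nm (twice the 35.5 nm crossover repeat) for the first step; it is an illustrative reconstruction of the geometry under those assumptions, not the calculation published in 1973.

from math import pi, sqrt

def axial_extent(length, pitch, radius):
    # Axial projection of a contour length wound on a helix of given pitch and radius.
    return length * pitch / sqrt(pitch**2 + (2 * pi * radius)**2)

# 41.0 nm of tropomyosin supercoiled at radius ~3.0 nm in the actin groove
# (assumed groove pitch ~71 nm) projects to roughly 39.6 nm, close to 39.5 nm.
print(axial_extent(41.0, 71.0, 3.0))

# A further supercoil of pitch 5.5 nm and radius 0.25 nm shortens the period
# again, from 39.5 nm to roughly 38 nm, i.e. towards the observed 38.5 nm.
print(39.5 * axial_extent(1.0, 5.5, 0.25))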
As noted earlier, X-ray diffraction data (Huxley 1970, 1972; Vibert et al. 1972) had shown that in relaxed vertebrate skeletal muscle the intensity on the 2nd layer line of the actin pattern (I2) was less than that on the 3rd layer line (I3), i.e. I2 < I3. In contrast, in contracting muscle the intensity on the 2nd layer line was considerably greater than that in relaxed muscle (i.e. > I2) and that on the 3rd layer line was smaller (i.e. < I3). As a result the 2nd layer line intensity was greater than that on the 3rd layer line. This was a key observation and one that strongly informed our model-building studies. Indeed, very early model calculations showed that the second layer line of the actin pattern was particularly sensitive to the position of tropomyosin. We now found ourselves to be in a position to undertake detailed model-building and to calculate the diffraction pattern of models with tropomyosin lying in different positions in the long-period grooves of the thin filament. We hoped that we might be able to match the qualitative experimental observations noted above.
Computers in the early 1970s were rather primitive by present-day standards. This necessitated a suitably simple model amenable to ready calculation. Consequently, the symmetry of the thin filament was modelled as that of a 13/6 helix with an axial repeat of 35.5 nm. Each actin molecule was represented by a sphere of radius 2.4 nm and the length of tropomyosin associated with each actin (5.5 nm) was modelled as five overlapping spherical scattering units, each with a radius of 0.83 nm. Each of these was separated from its immediate neighbour by an axial distance of 1.1 nm. The radius of 0.83 nm arose from the need to ensure that the ratio of the combined volumes of the five scattering units representing tropomyosin to that of the volume of the actin sphere matched the ratio of one-seventh of the molecular weight of tropomyosin to the molecular weight of actin. This, in turn, was dictated by the fact that the electron densities of tropomyosin and actin were virtually identical (430-440 el nm^-3).
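The 0.83 nm bead radius can be back-calculated from the volume-matching condition described above. The sketch below does this with assumed molecular weights for tropomyosin (~66 kDa) and actin (~46 kDa); these particular values are illustrative guesses rather than figures quoted in the 1973 paper, but with them the condition returns a radius of about 0.83 nm.

# Back-calculation of the tropomyosin bead radius from the volume-matching
# condition: 5 * r_bead^3 / r_actin^3 = (MW_tropomyosin / 7) / MW_actin.
r_actin = 2.4                          # nm, radius of the actin sphere
mw_tm, mw_actin = 66000.0, 46000.0     # assumed molecular weights (illustrative only)
r_bead = (((mw_tm / 7) / mw_actin) * r_actin**3 / 5) ** (1.0 / 3.0)
print(r_bead)                          # ~0.83 nm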
The Fourier transforms of ten models of the thin filament were thus calculated. These differed only in the azimuthal position of tropomyosin (Fig. 1), which was systematically rotated in 5° intervals about the long axis of the thin filament from a point where it lay central in the groove formed by the two long-period strands of actin monomers (azimuth 90°) to one where it was well displaced from it (azimuth 45°). The calculations were striking and immediately informative: the intensity on the second layer line clearly increased with increasing azimuth whereas the intensity on the third layer line decreased with increasing azimuth. This strongly indicated that if tropomyosin was to move from a position with azimuth 45-50° (radius about 4.2-4.5 nm) in relaxed vertebrate skeletal muscle to a position with azimuth 65-70° (radius 3.3-3.5 nm) in contracting muscle, the observed intensity changes on the 2nd and 3rd layer lines of the actin pattern would be very satisfactorily explained. In addition, in relaxed muscle the tropomyosin filaments would lie in very close proximity to the known binding site of the HMM S-1 (head) fragment of myosin on actin and could thereby sterically block myosin from interacting with actin. In contrast, in contracting muscle the position of the tropomyosin filaments would be displaced from the HMM S-1 binding site and would not hinder actomyosin interaction. If nothing else the steric-blocking concept was striking in its simplicity and was, we felt, particularly attractive because of it (Fig. 2).
Fig. 1 Thin filament in cross-section with the two long-period actin strands considered as continuous threads. The position of tropomyosin (T), assuming that it lies on the surface of actin, is defined by radial coordinates (r, θ), where θ is allowed to vary between 45° and 90°. For larger values of θ it is believed that the tropomyosin molecules will display a degree of supercoiling. Redrawn from Parry and Squire (1973)
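For readers who want to see what such a calculation involves, the sketch below computes cylindrically averaged layer-line intensities for a crude bead model of the thin filament using present-day tools (numpy) rather than the original 1970s programs. The helix parameters follow the text (13/6 symmetry, 35.5 nm repeat, 2.4 nm actin spheres, five 0.83 nm tropomyosin beads per actin), but the actin centre radius, the tropomyosin radii, the azimuth convention and the volume-proportional weights are all assumptions made for illustration, so the output indicates the method rather than reproducing the published curves.

import numpy as np

c = 35.5       # axial repeat of the 13/6 actin helix (nm)
n_sub = 13     # actin subunits per repeat
turns = 6      # turns of the genetic helix per repeat

def actin_positions(r_actin=2.5):
    # Centres of the 13 actin spheres in one repeat (cylindrical coordinates).
    k = np.arange(n_sub)
    return r_actin * np.ones(n_sub), k * 2 * np.pi * turns / n_sub, k * c / n_sub

def tropomyosin_positions(theta_deg, r_tm):
    # Five beads of tropomyosin per actin, offset in azimuth by theta (assumed convention).
    _, phi_a, z_a = actin_positions()
    r, phi, z = [], [], []
    for phi0, z0 in zip(phi_a, z_a):
        for m in range(-2, 3):                    # five beads, 1.1 nm apart axially
            r.append(r_tm)
            phi.append(phi0 + np.radians(theta_deg))
            z.append(z0 + 1.1 * m)
    return np.array(r), np.array(phi), np.array(z)

def layer_line_intensity(l, R, r, phi, z, w):
    # Cylindrically averaged |F|^2 on layer line l at reciprocal radius R (nm^-1).
    psi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    phase = 2j * np.pi * (R * r[None, :] * np.cos(psi[:, None] - phi[None, :])
                          + (l / c) * z[None, :])
    F = (w[None, :] * np.exp(phase)).sum(axis=1)
    return float(np.mean(np.abs(F) ** 2))

R_values = np.linspace(0.0, 0.3, 61)
for theta, r_tm in [(45, 4.3), (70, 3.4)]:        # "off"- and "on"-like azimuths
    r_a, phi_a, z_a = actin_positions()
    r_t, phi_t, z_t = tropomyosin_positions(theta, r_tm)
    r = np.concatenate([r_a, r_t])
    phi = np.concatenate([phi_a, phi_t])
    z = np.concatenate([z_a, z_t])
    w = np.concatenate([np.full(n_sub, 2.4 ** 3),          # weights ~ sphere volume
                        np.full(r_t.size, 0.83 ** 3)])
    I2 = max(layer_line_intensity(2, x, r, phi, z, w) for x in R_values)
    I3 = max(layer_line_intensity(3, x, r, phi, z, w) for x in R_values)
    print(theta, I2, I3)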
A physical model was constructed by John to illustrate the position of tropomyosin filaments in the "on" and "off" states in the actin thin filaments. About 40 rubber balls were purchased from Woolworths in the High Street in Oxford - white, red and dark green. John felt slightly guilty about cleaning out the stock of rubber balls from the store and thereby depriving the local children of their entertainment but the needs of science prevailed. Each actin was represented by a white ball but every seventh one was replaced by a red one, thereby indicating the extent of the "structural unit". Troponin was represented by a dark green ball. A hole was drilled through the centre of the "actin" balls and then each was threaded on to a thin wire, thereby representing one of the two long-period strands comprising the actin filaments. The tropomyosin filaments were represented by lengths of rubber tubing "borrowed" from the chemistry laboratories. The resulting figure in our paper showed various positions of tropomyosin in the long-period grooves of the thin filament, and was successful in illustrating to readers exactly what the steric-blocking mechanism implied (Fig. 3). Nowadays, the model would be computer-generated and would be much more attractive visually. It nonetheless served its purpose well at the time.
Fig. 3 (caption fragment) Tropomyosin is now in a position that would permit binding of the myosin head to actin and would correspond to muscle in a contracting state. Reproduced from Parry and Squire (1973) with permission from Elsevier
Progress on the steric-blocking project was very rapid indeed from its inception in early November 1971 to the time when I gave a seminar on the completed model to the Department of Zoology, Oxford on 21 January 1972. A few months of tidying up the manuscript followed before it was submitted on 9 May 1972 and published in 1973. In a note added in proof we commented that after our paper had been submitted we learnt that Hugh Huxley and John Haselgrove were carrying out similar calculations. They subsequently published these in the Proceedings of the 1972 Cold Spring Harbor Symposium on Quantitative Biology (Huxley 1972; Haselgrove 1972) and, indeed, their conclusions closely matched our own.
John and I agreed that we had contributed equally to the 1973 paper. However, unlike today, where an asterisk would be placed by both authors' names indicating equal contributions, we decided to toss a coin to decide whether it should be Parry and Squire ("heads") or Squire and Parry ("tails"). John tossed the coin and thus the order was decided. It made no real difference to either of us, of course, and we always received equal recognition for our efforts. This represented the only time in my scientific career when the order of authors on a paper I contributed to was decided by the toss of a coin.
It is worth noting that the results presented were not confined to the regulation of vertebrate skeletal muscle but included those for both molluscan muscle and vertebrate smooth muscle. Furthermore, it was shown that the approximately 20% increase in intensity observed on both the 5.1 and 5.9 nm layer lines of the actin pattern as muscles go from a relaxed to a contracting state (Haselgrove 1970; Lowy 1972; Vibert et al. 1972) was not a consequence of changes in either the structure of actin or in the organisation of the tropomyosin filaments but arose from cross-bridge attachment. Nowadays there is a wealth of information supporting that conclusion, of course, but in the early 1970s this was not the situation and the idea was considered likely but not unequivocal. Looking back at the 1973 paper it was interesting to see the lengths we went to in order to convince ourselves (and others) that the only realistic solution to the intensity changes observed on the 2nd and 3rd layer lines between the relaxed and contracting states was that involving the movement of tropomyosin.
With hindsight everything came together far more successfully than we could have ever imagined. The data on which we based our ideas were very limited: perhaps we were fortunate in getting things right but maybe we just spotted the obvious interpretation. Either way this paper was to give both of us one of the greatest thrills of our scientific careers and was one that we looked back on in subsequent years with a great deal of affection, not least because this represented the first opportunity we had been afforded to work closely together.
Later research
Any scientifically-significant paper, as we hoped our 1973 paper might prove to be, should not only advance the field but should also provide the framework for future development. Our desire, therefore, was that our contribution would represent a new beginning in the regulation story and one that would facilitate the gaining of new insights into the mechanism.
As is often the case with a new concept, the regulatory model, as originally presented, was not universally accepted and, indeed, it often proved to be the source of considerable debate and the expression of fervently-held views. This, of course, was entirely appropriate and represented the way that any scientific enterprise worthy of the name should be addressed. While details of the controversies are not described here (see, however, Squire and Morris (1998) for a discussion of many of the relevant issues) this does not lessen the importance of the role that they played in the future development of the regulatory mechanism.
Subsequent progress by ourselves (primarily John Squire, Ed Morris, Danielle Paul and their colleagues) but also by others (including Peter Vibert, Bill Lehman, Roger Craig, Carolyn Cohen, Keiichi Namba and Takashi Fujii and their colleagues) has very much confirmed those hopes. It is not the purpose of this paper to review all of the progress on the regulation of vertebrate skeletal muscle that has been made since 1971/1972 when this research was undertaken - now 50 years ago. Rather, a small number of areas of particular interest to the author, primarily relating to structural/functional aspects, have been selected and these are discussed below. These have either confirmed the essence of the steric-blocking mechanism or have given rise to exciting new insights. Much more detailed accounts of these and other significant developments are provided in the reviews by Squire et al. (2017) and Hitchcock-DeGregori and Barua (2017).
As noted earlier, the Mg tactoids of tropomyosin (axial period 39.5 nm) displayed 14 more-or-less equally-separated bands. This suggested that the sequence might contain a quasi-repeat of about 2.82 nm. Subsequent sequence analyses (Parry 1974, 1975; McLachlan and Stewart 1976) did indeed show that the sequence of rabbit alpha-tropomyosin (284 residues) had a repeat of about 39.2 residues (5.8 nm) that was strongly halved (19.6 residues or 2.9 nm) in the axial distributions of both the acidic residues and the apolar residues, and that approximately eight residues were involved in a head-to-tail overlap of similarly-directed tropomyosin molecules in the filaments. These periods were easily and directly related to the separation of actin monomers along one strand of the thin filaments. It followed naturally that although the seven (14) repeats were not identical to one another it was indeed possible for each actin to be regulated by tropomyosin in a quasi-equivalent manner closely akin to that embodied in our proposals.
Using 3-D reconstruction methodology on electron micrographs of thin filaments, Spudich et al. (1972) showed that tropomyosin filaments sometimes lay to one side of the centre of the actin grooves in a position close to one of those that we had identified. Interestingly, however, helical reconstructions of various muscles in different states (Lehman et al. 1994, 1995; Vibert et al. 1997; reviewed by Squire et al. 2017) revealed that tropomyosin filaments were not found in just two positions (on and off) on the actin filament, as we had suggested, but in three positions. The first, when the calcium levels were low, was termed the "off" position (the blocked or B-state), and corresponded to the situation where the attachment of the myosin heads was almost completely blocked. The second corresponded to the so-called intermediate state (the closed or C-state), which resulted from calcium activation of thin filaments, where the tropomyosin filaments were rotated a further 20° relative to that in the off state. The third state (the myosin or M-state) occurred when the myosin heads were bound strongly. In this case the tropomyosin filaments were rotated a further 10° relative to that in the C-state. Over the years a considerable body of evidence has been accumulated that supports these conclusions (see, for example, Phillips et al. 1986; McKillop and Geeves 1993; AL-Khayat et al. 1995; Brown and Cohen 2005; Poole et al. 2006) and, together, they represent a significant development in our understanding of regulation.
Dividing electron microscope images of filaments into short segments has allowed three-dimensional reconstructions to be performed at much higher resolutions than were previously thought possible. Notable amongst the successes using this technique are those for actin filaments (resolution 0.66 nm, Fujii et al. 2010), actin-tropomyosin (resolutions 0.37 nm for F-actin and 0.65 nm for tropomyosin, von der Ecken et al. 2015) and actin-tropomyosin-myosin in the rigor state (resolution 0.8 nm, Behrmann et al. 2012). Between them these structures (and refined versions of some of them) have provided a number of fascinating insights. For example, it has become clear that interactions between actins along an individual long-pitched strand are strong whereas those interactions between the strands are relatively loose. Further, in the Behrmann et al. (2012) structure the interactions between a single tropomyosin sub-repeat, two neighbouring actins along a long-period strand and a myosin head (in the rigor state) revealed, as predicted, that the tropomyosin lay very close to the M-state and, furthermore, that the tropomyosin strands were interacting tightly with the myosin heads. Many important details of the steric-blocking mechanism have been demonstrated by studies of this type and these have proved invaluable in the on-going increase in our understanding of the steric-blocking mechanism.
It is always exciting when something unexpected turns up. In single particle image processing of negatively-stained thin filaments in the absence of calcium (Paul et al. 2017) tropomyosin was shown to lie in essentially equivalent positions on each actin in the thin filament. This, in itself, was no surprise, of course, as it had previously been thought likely because of the quasi-equivalence of the seven sets of sequence repeats in tropomyosin, the earlier helical reconstructions from electron micrographs and also the X-ray diffraction data, all of which tend to "see" average structures rather than ones displaying local variation. In the presence of calcium, however, the situation was rather different. For ease of explanation of the results, the seven quasi-repeats in the sequence of tropomyosin were labelled as a, b, c, d, e, f and g, where a, b and c constitute what has been termed as set 2 and where d, e, f and g constitute what has been termed as set 1 (Squire et al. 2017). The validity of sub-dividing the tropomyosin repeats in this manner relies on the evidence presented by Paul et al. (2017) that the negative staining of their specimens, allied to the resolution limits of their data, does indeed allow a clear differentiation to be made between troponin subunits and tropomyosin along the length of the thin filaments. On this basis Paul et al. (2017) have suggested that the tropomyosin repeats comprising set 1, which are close to troponin on the pointed/M-line end of the thin filament, shift across the filament by about 18° but the tropomyosin repeats that comprise set 2, and which lie on the other side of troponin at the barbed/Z-line end of the thin filament, move by an average of about 28°. Thus, in the presence of calcium the set 1 repeats of tropomyosin lie in the closed C-state whereas the set 2 tropomyosin repeats, even in the absence of myosin, lie in a position that is close to the M-state. It follows that tropomyosin does not move as a rigid body and that the lateral shift of the tropomyosin repeats can show variation from actin-to-actin such that some myosin sites on actin may be completely open and some may be partially closed (Paul et al. 2017).
Using cryo-electron microscopy, which preserves proteins in a near-native frozen hydrated state, single particle image analysis has recently broken the sub-nanometer barrier, with detailed structures of the thin filaments in cardiac muscle, both in the presence and absence of Ca2+ (Yamada et al. 2020). In conjunction with known crystal structures this has revealed that the head-to-tail overlap of tropomyosin molecules lies in a complex with an N-terminal region of troponin T and a C-terminal region of troponin I. Further, these studies have shown the core of troponin lying on the actin filament. The regulatory mechanism has thus been explained as follows: in the absence of Ca2+ the C-terminal part of troponin I binds to both actin and that part of tropomyosin that lies above the troponin core. Consequently, the HMM S-1 binding sites on actin are blocked. However, when Ca2+ is bound by a region in the N-terminal part of troponin C the complete C-terminal fragment of troponin I dissociates from the complex by the binding of a short N-terminal sequence in the C-terminal part of troponin I to a region in the N-terminal fragment of troponin C. This allows tropomyosin and the N-terminal portion of troponin T that lies near the head-to-tail junction of the tropomyosin molecules to move across the surface of actin, thereby revealing some of the HMM S-1 binding sites on actin (Yamada et al. 2020). This research has enabled us to gain a much more detailed understanding of the regulatory mechanism at the molecular level than was previously possible.
More progress on the troponin structure was reported for murine cardiac thin filaments using the technique of cryo-electron microscopy (Oda et al. 2020). By incorporating a Volta phase plate the contrast in the micrographs was enhanced considerably and Oda et al. (2020) were also able to visualise a complete repeat unit in the thin filament. Also, in 2021 Risi et al. published a series of cryo-EM maps of the cardiac thin filament at physiological Ca2+ levels, where the two strands consist of a mixture of regulatory units, composed of Ca2+-free, Ca2+-bound or, as observed for the first time, "mixed" with Ca2+-bound on one side and Ca2+-free on the other.
In yet another recent development Wang et al. (2021) used electron cryo-tomography to investigate structural details of the various regions comprising the mouse sarcomere in the rigor state. Amongst the results reported were the I-band structures of the actin-tropomyosin-troponin complex (resolution 1.98 nm) and the actin-tropomyosin complex (resolution 1.06 nm). After a variety of image processing/refinement steps these authors were able to demonstrate that tropomyosin in the thin filaments in the Ca2+ state did indeed lie in the C-state. However, in the A-band the data clearly indicated that tropomyosin lay in the M-state. This confirmed earlier work showing that the position of tropomyosin on actin can differ locally within the same filament and also within the same sarcomere (Paul et al. 2017). Wang et al. (2021) also showed that there were considerable similarities between skeletal and cardiac troponin when bound to actin filaments.
All of these results, and there are other important ones not described here, have confirmed the essence of the steric-blocking mechanism proposed in 1973. Of course, refinements and new data have altered some aspects and thereby revealed many of the intricacies and details of the regulatory mechanism in vertebrate skeletal muscle that could not have been imagined in 1971/1972 with the evidence then available. When John and I last met at the end of 2019 we still looked back on this project with some pleasure and rejoiced that it had not only survived but had evolved and thrived. The rubber ball model constructed 50 years ago remained with John, at work or in his home study, over this entire period and remained a permanent memento of an exciting scientific time for the pair of us (Fig. 4).
John Squire's contributions to the scientific world
John was internationally recognised for his pioneering research in muscle (especially thick filament structure and thin filament regulation), but also in other areas too, such as the glycocalyx, and he published in excess of 100 peer-reviewed papers in the international literature as well as 36 reviews, some in books and some in journals. Although John and I published only six papers together over a span of 44 years we remained in close contact (person-to-person, mail, fax, phone or email depending on the era) from 1963 until his untimely and tragic passing in January 2021. My wife Jenny and I visited John and Melanie in Salisbury late in 2019 just before "covid" became a word that the world would rather forget. We look back at our last visit with much pleasure but also, in light of subsequent events, with much sadness.
In addition to his numerous scientific achievements, John and I collaborated and organised five four-yearly Workshops at Alpbach in Austria starting in 1993 and finishing in 2009. These were on "Coiled-coils, Collagen and Co-proteins" and were essentially devoted to the structure and function of fibrous proteins. On the completion of the 2009 Workshop we passed the organisation of future meetings on to Andrei Lupas and Dek Woolfson. After each Workshop (except the first) we co-edited Special Issues of the Journal of Structural Biology (we were both on the Editorial Board) that covered the papers presented, thereby providing a permanent record of the advances made. In addition, we co-edited four books, three in the Advances in Protein Chemistry series ("Fibrous Proteins: Coiled-coils, Collagen and Elastomers", "Fibrous Proteins: Muscle and Molecular Motors" and "Fibrous Proteins: Amyloids, Prions and Beta Proteins", the latter in conjunction with Andrey Kajava) and, most recently, in 2017 a volume entitled "Fibrous Proteins: Structures and Mechanisms". John edited two other books as well. He did, of course, also write a highly regarded monograph on muscle in 1981 entitled "The Structural Basis of Muscular Contraction", and this remains a classic in the field. A second monograph appeared in 1986 entitled "Muscle: Design, Diversity and Disease". With regard to his service to the scientific community, John has no peer. Just as importantly (perhaps even more importantly), John remained a gentleman (an old-fashioned word but very relevant in his case), a true friend to his collaborators, a mentor to his students, and a family man in every respect. Each of us will miss him greatly but his work and contributions to our own experiences will remain with us each and every day. Our lives have been much enriched by his presence.
Personal footnote
I am reminded of a quote from Dr Seuss that seems particularly pertinent with respect to John's life and career.
Don't cry because it's over. Smile because it happened
The sentiments thus expressed would, I suspect, be very much in line with John's own philosophy. Over a period of some 58 years we enjoyed a close and mutually beneficial relationship at both the personal and scientific level. I consider myself to be very fortunate in both respects, and I deem it a great honour to have been invited to make a contribution to this special issue dedicated to the memory of a great biophysicist, a great family man and, above all, a great friend.
Fig. 2 Axial projection of the thin filament showing tropomyosin (small circles), troponin (filled circles), actin (large circles) and HMM S-1. a corresponds to relaxed vertebrate skeletal muscle with the position of tropomyosin sterically-blocking the attachment of HMM S-1 to actin and b contracting vertebrate skeletal muscle with tropomyosin in a position closer to the centre of the central groove thereby permitting HMM S-1 to bind to actin. Redrawn from Parry and Squire (1973)
Fig. 4 On a visit by the author to the UK John Squire (left) and the author (right) are pictured in the former's study in Salisbury, England with the steric-blocking model in the background (see Fig. 3) | 2022-07-06T06:16:58.226Z | 2022-07-05T00:00:00.000 | {
"year": 2022,
"sha1": "0c07b11862e85d4719b4ac70a23823dc2200328e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10974-022-09619-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "00b459a008e53d558a57f48e2bfd0a40e251fed1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
94153999 | pes2o/s2orc | v3-fos-license | Two-dimensional Frustrated Antiferromagnets (MCl)LaNb2O7 (M = Mn, Co, Cr)
Magnetic susceptibility and specific heat measurements have been performed on two-dimensional spin systems (M2+Cl)LaNb2O7 (M = Mn (S = 5/2), Cr (S = 2), Co (S = 3/2)), prepared via a topotactic ion-exchange reaction. All three compounds establish antiferromagnetic order at TN = 53 K, 61 K and 52 K, respectively for M = Mn, Co, Cr. Together with TN = 78 K for M = Fe (S = 2), this result indicates that the TN is not simply scaled by the magnitude of spin. In particular, the presence of strong spin-orbit interactions is suggested for (CoCl)LaNb2O7.
Introduction
Geometrically frustrated magnets have received considerable interest over the last several decades. Among the models comprising squares with some diagonal interactions are the J1-J2 model Li2VO(Si, Ge)O4 [1], the checkerboard model A2F2Fe2OQ2 (A = Sr and Ba; Q = S and Se) [2] and the Shastry-Sutherland compound SrCu2(BO3)2 [3]. These materials show intriguing magnetic properties such as spin-disordered states and quantized magnetization plateaus, to name only a few [3,4].
Synthesis
The synthesis of (MCl)LaNb2O7 (M = Mn, Co, Cr) is expressed as the following two-step ion-exchange reactions [17]. First, RbLaNb2O7 was prepared via a conventional high-temperature route, using stoichiometric amounts of La2O3 (99.99% purity) and Nb2O5 (99.99%) and a 25% molar excess of Rb2CO3 (99.9%). Second, LiLaNb2O7 was obtained from LiNO3 and RbLaNb2O7 in 10:1 molar ratio through the ion-exchange reaction (1) at 300 °C for 24 hours in air. The product was washed with warm water and then dried at 120 °C overnight. Third, LiLaNb2O7 was mixed with a two-fold molar excess of ultradry MCl2 (M = Mn, Co, Cr; 99.9%) and pressed into pellets in an Ar-filled glove box (<1 ppm O2/H2O). The ion-exchange reaction (2) was carried out in sealed, evacuated (<10^-3 Torr) Pyrex tubes at 390-400 °C for 7 days, followed by washing with distilled water for M = Mn, Co and with ethanol for M = Cr to eliminate the excess MCl2 (M = Mn, Co, Cr) and LiCl, and dried at 120 °C overnight. The schematic structure is represented in figure 1.
Characterization
In-house X-ray diffraction study at room temperature confirmed the tetragonal symmetry, with room-temperature cell constants (a, c) = (3.899 Å, 12.04 Å), (3.908 Å, 11.63 Å), and (3.899 Å, 11.97 Å) for M = Mn, Co and Cr, respectively, in good agreement with those previously reported [17]. Magnetic susceptibilities were studied using a superconducting quantum interference device (SQUID) magnetometer (Quantum Design, MPMS) for a temperature range T = 2-300 K in a magnetic field H = 0.1 T. Specific heat measurements were performed by the thermal relaxation method in a T range between 4 K and 100 K in the absence of magnetic field using a Physical Property Measurement System (PPMS, Quantum Design) at the Institute for Solid State Physics, University of Tokyo. Hand-pressed pellets were attached to an alumina platform with a small amount of Apiezon N grease.
Results and Discussions
Figure 2(a) shows the temperature dependence of the magnetic susceptibility χraw for (MnCl)LaNb2O7. A Curie-tail is seen below 30 K, which is most likely due to impurities and/or defects of Mn2+ ions in (MnCl)LaNb2O7, as was also present in the previous report [19]. Using the Curie equation χimp = Cimp/T, we fitted the raw data χraw below 30 K and the best fit gave a small Cimp = 0.12 emu K mol^-1, corresponding to about 2.7% of noninteracting S = 5/2 Mn2+ ions. After subtracting this upturn, one obtains the intrinsic susceptibility χspin, where a broad maximum characteristic of a low-dimensional magnet was centred at Tχmax = 65 K. Above 160 K, the inverse susceptibility χspin^-1 (figure 2(b)) obeys a Curie-Weiss law, and the fitting gave the Curie constant C = 4.38 emu K mol^-1 together with θ = −131 K, with a slight temperature-independent impurity term of χ0 = −4.0 × 10^-4 emu mol^-1. The value of Tχmax is comparable with that previously reported (63 K) [19]. Moreover, the value of C obtained in this study is in excellent agreement with the theoretical value for 1 mol of S = 5/2 Mn2+ ions (4.375 emu K mol^-1), while the one obtained in the previous study was a little larger (4.467 emu K mol^-1) [19]. However, the value of θ obtained in this study is somewhat smaller than the one obtained in the previous study (−145.7 K) [19]. This is probably because the value of C largely depends on the fitting range and the value of χ0. It is also to be noted that C and θ are influenced by the way in which the Curie-tail at low temperature is subtracted from the raw data. The specific heat Cp at zero field is shown in figure 3. A tiny anomaly at around 53 K, located 15 K below Tχmax, strongly indicates the occurrence of the antiferromagnetic phase transition. Indeed, this temperature is in excellent agreement with TN = 54 K estimated from dχ/dT [19]. The anomaly in the specific heat is not so obvious, probably because of the use of a polycrystalline sample. We could not estimate the magnetic specific heat Cm by subtracting a lattice contribution βT^3 because this approximation should only be valid up to around 30 K. The large difference between θ and TN manifests the two-dimensionality of the magnetic system and also certain frustrated interactions. The specific heat Cp of M = Co and Cr at zero field (figures 4 and 5) has a slight anomaly at 61 K and 52 K, respectively. As shown in figures 6(a) and 7(a), each magnetic susceptibility has, as in the case of M = Mn, a broad maximum at a slightly higher temperature of 67 K for M = Co and 55 K for M = Cr, indicating that the anomaly in the specific heat is due to the magnetic phase transition. As is the case with M = Mn, we could not estimate the magnetic specific heat Cm by subtracting a lattice contribution βT^3. In the lower temperature region below this transition, the susceptibility grows considerably with decreasing temperature. However, in contrast to the case of M = Mn as demonstrated above, both materials exhibit another anomaly in a different manner: in (CoCl)LaNb2O7, a hysteresis behaviour is observed below about 6 K, while in (CrCl)LaNb2O7 a cusp without hysteresis is observed at 3 K (see the insets of figures 6(a) and 7(a)). The low-temperature anomaly featured by the hysteresis between zero-field and field cooling processes is also seen in (FeCl)LaNb2O7 and was attributed to the second magnetic transition [18].
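As a quick consistency check on the impurity estimate quoted above, the Curie constant of the low-temperature tail can be compared with the spin-only Curie constant of free S = 5/2 Mn2+ ions; the short sketch below does this with the numbers given in the text. It is an illustrative recalculation rather than part of the original analysis, and the actual fitting would of course be performed on the measured χ(T) data.

def curie_constant(S, g=2.0):
    # Spin-only Curie constant, C = N_A g^2 mu_B^2 S(S+1) / (3 k_B),
    # which evaluates to about 0.125 * g^2 * S * (S + 1) emu K mol^-1.
    return 0.125 * g**2 * S * (S + 1)

C_imp = 0.12                                # emu K mol^-1, from the Curie-tail fit
print(curie_constant(2.5))                  # 4.375 emu K mol^-1 for S = 5/2
print(C_imp / curie_constant(2.5))          # ~0.027, i.e. about 2.7% free Mn2+ ions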
We would consider, however, that these low-temperature anomalies likely come from defects of the magnetic ions in the crystal (and are thus extrinsic to the pure system) because the magnitude of the Curie-like tail shows sample dependence. The estimation of the defect amount is not straightforward because of the presence of the low-temperature anomalies. However, since the Co-, Cr- and Mn-samples have a similar size of the Curie-like tail, the amount of the magnetic defect should be roughly the same. Given the lower temperature anomaly in these two materials, subtracting the Curie-like tail from the raw data would not be appropriate. Accordingly, the raw data were fitted to the Curie-Weiss formula in the temperature range above 90 K (Co) and 175 K (Cr). We obtained, for M = Cr, C = 3.24 emu K mol^-1, which agrees reasonably with the theoretical value for 1 mol of S = 2 Cr2+ ions (3.0 emu K mol^-1), indicating the completion of the ion-exchange reaction. Interestingly, the obtained value of C for M = Co is 2.775 emu K mol^-1, which is significantly larger than the theoretical value for 1 mol of S = 3/2 Co2+ ions (1.875 emu K mol^-1), where g = 2 is assumed. Thus orbital angular momentum should contribute sizably to the g-factor, resulting in a strong anisotropy in the Co moment due to spin-orbit interactions. Such a strong anisotropy has been observed in Co2+-containing compounds [21,22]. The values of θ for M = Co and Cr were −77.95 K and −61 K, respectively, the magnitudes of which are only slightly higher than TN (61 K and 52 K). Here we would like to stress that the proximity between |θ| and TN does not mean that the frustration effect is negligible, because when competing antiferromagnetic and ferromagnetic interactions are present (which is the case in the related systems), |θ|/TN cannot be a measure of frustration. Notably, recent reinvestigations of the structure of (CuCl)LaNb2O7 using single crystal X-ray diffraction and state-of-the-art structural analysis revealed a superstructure in the space group Pbam, which accounts for the spin-singlet formation [12]. We suppose that there is also a possibility that M and Cl atoms in the present compounds are displaced from the vertices of the square lattice in an ordered manner, which might result in a complex relationship between |θ| and TN.
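The spin-only Curie constants used for comparison in the paragraph above, and the effective g-factor implied by the measured Co value, can be reproduced with a few lines of arithmetic. The sketch below is again only an illustrative recalculation; in particular, extracting a single effective g from C assumes a simple picture in which the orbital contribution is folded into g.

def curie_constant(S, g=2.0):
    # Spin-only Curie constant in emu K mol^-1.
    return 0.125 * g**2 * S * (S + 1)

print(curie_constant(2.0))     # 3.0   emu K mol^-1 for S = 2 (Cr2+), g = 2
print(curie_constant(1.5))     # 1.875 emu K mol^-1 for S = 3/2 (Co2+), g = 2

C_Co = 2.775                   # measured Curie constant for M = Co
g_eff = 2.0 * (C_Co / curie_constant(1.5)) ** 0.5
print(g_eff)                   # ~2.4, consistent with an orbital contribution to g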
Conclusion
The two-dimensional antiferromagnets (MCl)LaNb2O7 (M = Mn (S = 5/2), Co (S = 3/2) and Cr (S = 2)) obtained via a topotactic ion-exchange reaction are found to exhibit antiferromagnetic long-range ordering at 53 K, 61 K and 52 K, respectively, indicating that the TN is not simply scaled by the spin quantum number but is affected by competing magnetic interactions and a possible formation of superstructure. A strong anisotropy was indicated in (CoCl)LaNb2O7, where the orbital degrees of freedom may affect the magnetic properties. Further magnetic study, such as powder neutron diffraction, is needed. | 2019-04-04T13:05:04.784Z | 2011-09-28T00:00:00.000 | {
"year": 2011,
"sha1": "2f7336a1eec9d5a668ccf4d5d2fca558cd967575",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/320/1/012035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e7af1cb729bfbd8bb287a41629bcecb52f2f3e5b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
85731488 | pes2o/s2orc | v3-fos-license | Genotoxicity Biomarkers: Application in Histopathology Laboratories
Most cancers result from man-made and natural environmental exposures (such as tobacco smoke; chemical pollutants in air, water, food, drugs; radon; and infectious agents) acting in concert with both genetic and acquired characteristics. It has been estimated that without these environmental factors, cancer incidence would be dramatically reduced, by as much as 80%-90% (Perera, 1996). The modulation of environmental factors by host susceptibility was rarely evaluated. However, within the past few years, the interaction between environmental factors and host susceptibility factors has become a very active area of research (Perera, 2000). Molecular biology as a tool for use in epidemiological studies has significant potential in strengthening the identification of cancers associated with environmental exposures related to lifestyle, occupation, or ambient pollution. In molecular epidemiology, laboratory methods are employed to document the molecular basis and preclinical effects of environmental carcinogenesis (Portier & Bell, 1998).
Introduction
Most cancers results from man-made and natural environmental exposures (such as tobacco smoke; chemical pollutants in air, water, food, drugs; radon; and infectious agents) acting in concert with both genetic and acquired characteristics. It has been estimated that without these environmental factors, cancer incidence would be dramatically reduced, by as much as 80%-90% (Perera, 1996). The modulation of environmental factors by host susceptibility was rarely evaluated. However, within the past few years, the interaction between environmental factors and host susceptibility factors has become a very active area of research (Perera, 2000). Molecular biology as a tool for use in epidemiological studies has significant potential in strengthening the identification of cancers associated with environmental exposures related to lifestyle, occupation, or ambient pollution. In molecular epidemiology, laboratory methods are employed to document the molecular basis and preclinical effects of environmental carcinogenesis (Portier & Bell, 1998).
Molecular epidemiology has become a major field of research and considerable progress has been made in the validation and application of biomarkers. Its greatest contribution has been the insights provided into interindividual variation in human cancer risk and the complex interactions between environmental factors and host susceptibility factors, both inherited and acquired, in the multistage process of carcinogenesis (Perera, 2000).
The possibility to use a biomarker to substitute classical endpoints, such as disease incidence or mortality is the most promising feature and one that is most likely to affect public health. The use of events that are on the direct pathways from the initiation to the occurrence of disease to surrogate the disease incidence is a very appealing approach, which is currently investigated in different fields (Bonassi & Au, 2002).
Biological monitoring of workers has three main aims: the primary is individual or collective exposure assessment, the second is health protection and the ultimate objective is occupational health risk assessment. It consists of standardized protocols aiming to the periodic detection of early, preferably reversible, biological signs which are indicative, if compared with adequate reference values, of an actual or potential condition of exposure, effect or susceptibility possibly resulting in health damage or disease. These signs are referred to as biomarkers (Manno et al., 2010).
There has been dramatic progress in the application of biomarkers to human studies of cancer causation. Progress has been made in the development and validation of biomarkers that are directly relevant to the carcinogenic process and that can be used in large-scale epidemiologic studies (Manno et al., 2010).
There are many important aspects to consider when a biomonitoring study is designed. For instance, detailed information on genotoxin exposure is needed, e.g. type of toxin, duration of exposure, and commencing date of exposure relative to the sampling date of buccal cells, in order to achieve a meaningful interpretation of the data. It also helps to identify key variables affecting the observed frequency of biomarkers, like age, gender, vitamin B status, genotype and smoking status (Thomas et al., 2009).
Considering the impact of genotoxicity biomarkers in peripheral blood lymphocytes on the design of biomonitoring studies, Battershill et al. (2008) considered the evidence for a correlation between micronucleus (MN) frequency and increasing age to be strong/sufficient. The effect is more pronounced in females than in males, with the increase more marked after 30 years of age. Other studies have also demonstrated a strong correlation between age and MN frequency and suggested that chromosome loss is a determining factor in this increase.
With regard to gender, a difference in the background incidence of MN in peripheral blood lymphocytes (PBL) is also documented, with the frequency being consistently higher in females. A study that assessed MN, chromosomal aberrations and sister chromatid exchange showed highly significant elevations in MN in lymphocytes of women (29% when adjusted for age and smoking), whereas chromosomal aberrations and sister chromatid exchange remained unchanged. This may reflect aneuploidy detected in MN assays (Battershill et al., 2008).
With respect to smoking, although the link between smoking and cancer is strong and exposure to genotoxic carcinogens present in tobacco smoke has been convincingly demonstrated, interestingly the same convincing association is less apparent when assessing biomonitoring studies of genotoxicity. In the HUMN project study on tobacco smoke, the majority of the laboratories showed no significant differences between smokers and nonsmokers, and the pooled analysis, interestingly, indicated an overall decrease for all smokers compared to controls (Battershill et al., 2008).
Only weak/insufficient evidence was found for an association between genotoxicity endpoints and alcohol consumption. Alcohol consumption has been causally associated with cancer at a number of sites (e.g. head and neck cancer). Alcoholic beverages have not been reported to induce mutagenic effects in rodents. The evidence regarding an effect of drinking alcoholic beverages on increased MN or chromosomal aberration formation in PBL is inconclusive (Battershill et al., 2008).
Biomarkers -General definitions
Biomarkers have been defined by the National Academy of Sciences (USA) as an alteration in cellular or biochemical components, processes, structure or functions that is measurable in a biological system or sample. The traditional, generally accepted classification divides biomarkers into three main categories - biomarkers of exposure, effect, and susceptibility - depending on their toxicological significance (Manno et al., 2010).
A biomarker can potentially be any substance, structure or process that could be monitored in tissues or fluids and that predicts or influences health, or assesses the incidence or biological behaviour of a disease. Identification of biomarkers that are on causal pathway, have a high probability of reflecting health or the progression to clinical disease, and have the ability to account for all or most of the variation in a physiological state or the preponderance of cases of the specified clinical outcome, have largely remained elusive (Davis et al., 2007).
A biomarker of exposure is a chemical or its metabolite or the product of an interaction between a chemical and some target molecule or macromolecule that is measured in a compartment or a fluid of an organism (Manno et al., 2010).
A biomarker of effect is a measurable biochemical, structural, functional, behavioural or any other kind of alteration in an organism that, according to its magnitude, can be associated with an established or potential health impairment or disease. A sub-class of biomarkers of effect is represented by biomarkers of early disease (Manno et al., 2010).
A biomarker of susceptibility may be defined as an indicator of an inherent or acquired ability of an organism to respond to the challenge of exposure to a chemical (Manno et al., 2010).
Although the different types of biomarkers are considered for classification purposes, as separate and alternative, in fact it is not always possible to attribute them to a single category. The allocation of a biomarker to one type or the other sometimes depends on its toxicological significance and the specific context in which the test is being used (Manno et al., 2010).
Genotoxicity biomarkers
As a subtype of biomarkers of effect there are biomarkers of genotoxicity, generally used to measure specific occupational and environmental exposures, to predict the risk of disease, or to monitor the effectiveness of exposure control procedures in subjects exposed to genotoxic chemicals (Manno et al., 2010).
Cytogenetic biomarkers are the most frequently used endpoints in human biomonitoring studies and are used extensively to assess the impact of environmental, occupational and medical factors on genomic stability (Barrett et al., 1997;Battershill et al., 2008) and lymphocytes are used as a surrogate for the actual target tissues of genotoxic carcinogens (Barrett et al., 1997). The evaluation of MN in PBL is the most commonly used technique, although cells such as buccal epithelium are also utilized (Battershill et al., 2008).
MN assay is one of the most sensitive markers for detecting DNA damage, and has been used to investigate genotoxicity of a variety of chemicals. MN testing with interphase cells is more suited as a cytogenetic marker because it is not limited to metaphases, and has the advantage of allowing rapid screening of a larger numbers of cells than in studies with sister chromatid exchanges or chromosomic aberrations (Ishikawa et al., 2003).
MN analysis, therefore, appears to be a good tool for investigating the effects of clastogens and aneuploidogens in occupational and environmental exposure in human epidemiological studies (Ishikawa et al., 2003) and is described as a promising approach with regard to assessing health risks (Battershill et al., 2008).
Cytokinesis-Block micronucleus assay
The scope and the application of the cytokinesis-block MN assay (CBMN) in biomonitoring has also been expanded in recent years, so that in addition to scoring MN in binucleate cells there are proposals to evaluate MN in mononucleate cells (to provide a more comprehensive assessment of DNA damage), nucleoplasmic bridges (indicative of DNA misrepair, chromosome rearrangement or telomere end-fusions) and nuclear buds (a measure of gene amplification or acentric fragments). Fenech (2007) has proposed that the CBMN assay can be used to measure chromosomal instability, mitotic dysfunction and cell death (necrosis and apoptosis) and has suggested the term CBMN "cytome" assay. Identification of the contents of MN (e.g. presence and absence of centromeres) is now considered important in the evaluation of MN in biomonitoring studies, providing insight into mechanisms underpinning the positive results reported, i.e. to differentiate between clastogenic and aneugenic responses (Battershill et al., 2008).
The CBMN assay is a comprehensive system for measuring DNA damage, cytostasis and cytotoxicity. DNA damage events are scored specifically in once-divided binucleated cells and include: micronuclei (MN), nucleoplasmic bridges (NPB) and nuclear buds (NBUDs). Cytostatic effects are measured via the proportion of mono-, bi- and multinucleated cells and cytotoxicity via necrotic and/or apoptotic cell ratios (Fenech, 2002a, 2006, 2007). MN originate from chromosome fragments or whole chromosomes that lag behind at anaphase during nuclear division. The CBMN assay is the preferred method for measuring MN in cultured human and/or mammalian cells because scoring is specifically restricted to once-divided binucleated cells, which are the cells that can express MN. In the CBMN assay, once-divided cells are recognized by their binucleated appearance after blocking cytokinesis with cytochalasin-B (Cyt-B), an inhibitor of microfilament ring assembly required for the completion of cytokinesis.
The CBMN assay allows measuring chromosome breakage, DNA misrepair, chromosome loss, non-disjunction, necrosis, apoptosis and cytostasis. Also measure NPB, a biomarker of dicentric chromosomes resulting from telomere end-fusions or DNA misrepair, and to measure NBUDs, a biomarker of gene amplification.
Because of its reliability and good reproducibility, the CBMN assay has become one of the standard cytogenetic tests for genetic toxicology testing in human and mammalian cells (Fenech, 2002b, 2007). NPB occur when centromeres of dicentric chromosomes are pulled to opposite poles of the cell at anaphase. There are various mechanisms that could lead to NPB formation following DNA misrepair of strand breaks in DNA. Typically, a dicentric chromosome and an acentric chromosome fragment are formed that result in the formation of an NPB and an MN, respectively. Misrepair of DNA strand breaks could also lead to the formation of dicentric ring chromosomes and concatenated ring chromosomes which could also result in the formation of NPB. An alternative mechanism for dicentric chromosome and NPB formation is telomere end fusion caused by telomere shortening, loss of telomere capping proteins or defects in telomere cohesion. The importance of scoring NPB should not be underestimated because it provides direct evidence of genome damage resulting from misrepaired DNA breaks or telomere end fusions, which is otherwise not possible to deduce by scoring MN only (Fenech, 2007; Thomas et al., 2003).
NBUD are biomarkers of elimination of amplified DNA and/or DNA repair complexes. The nuclear budding process has been observed in cultures grown under strong selective conditions that induce gene amplification as well as under moderate folic acid deficiency. Amplified DNA may be eliminated through recombination between homologous regions within amplified sequences forming mini-circles of acentric and atelomeric DNA (double minutes), which localized to distinct regions within the nucleus, or through the excision of amplified sequences after segregation to distinct regions of the nucleus. The process of nuclear budding occurs during S phase and the NBUD are characterized by having the same morphology as an MN with the exception that they are linked to the nucleus by a narrow or wide stalk of nucleoplasmic material depending on the stage of the budding process. The duration of the nuclear budding process and the extrusion of the resulting MN from the cell remain largely unknown (Fenech, 2007;Serrano-García & Montero-Montoya, 2001;Utani et al., 2007).
Most chemical agents and different types of radiation have multiple effects at the molecular, cellular and chromosomal level, which may occur simultaneously and to varying extents depending on the dose. Interpretation of genotoxic events in the absence of data on effects on nuclear division rate and necrosis or apoptosis can be confounding because observed increases in genome damage may be due to indirect factors such as inhibition of apoptosis or defective/permissive cell-cycle checkpoints leading to shorter cell-cycle times and higher rates of chromosome malsegregation. Furthermore, determining the nuclear division index (NDI) and the proportion of cells undergoing necrosis and apoptosis provides important information on the cytostatic and cytotoxic properties of the agent being examined that is relevant to the toxicity assessment. In human lymphocytes, the NDI also provides a measure of mitogen response, which is a useful biomarker of immune response in nutrition studies and may also be related to genotoxic exposure. The cytome approach in the CBMN cytome assay is important because it allows genotoxic (MN, NPB and NBUD in binucleated cells), cytotoxic (proportion of necrotic and apoptotic cells) and cytostatic (proportion and ratios of mono-, bi- and multinucleated cells, NDI) events to be captured within one assay (Fenech, 2005, 2007; Umegaki & Fenech, 2000).
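Two summary statistics commonly derived in the CBMN cytome assay are the nuclear division index and the MN frequency per 1000 binucleated cells. The minimal sketch below shows one widely used formulation of the NDI based on the proportions of mono- to multinucleated cells; the cell counts used here are hypothetical and are included only to illustrate the calculation.

def nuclear_division_index(m1, m2, m3, m4):
    # NDI = (M1 + 2*M2 + 3*M3 + 4*M4) / N, where M1..M4 are the numbers of
    # viable cells with 1-4 nuclei and N is the total number of viable cells scored.
    n = m1 + m2 + m3 + m4
    return (m1 + 2 * m2 + 3 * m3 + 4 * m4) / n

def mn_per_1000_bn(mn_count, bn_cells_scored):
    # MN frequency expressed per 1000 binucleated (BN) cells.
    return 1000.0 * mn_count / bn_cells_scored

# Hypothetical scoring of 500 viable cells and 1000 BN cells:
print(nuclear_division_index(m1=250, m2=220, m3=20, m4=10))  # ~1.58
print(mn_per_1000_bn(mn_count=14, bn_cells_scored=1000))     # 14 MN per 1000 BN cells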
In conclusion, the CBMN method has evolved into an efficient "cytome" assay of DNA damage and misrepair, chromosomal instability, mitotic abnormalities, cell death and cytostasis, enabling direct and/or indirect measurement of various aspects of cellular and nuclear dysfunction such as: unrepaired chromosome breaks fragments and asymmetrical chromosome rearrangement (MN or NPB accompanied by MN originating from acentric chromosomal fragments); telomere end fusions (NPB with telomere signals in the middle of the bridge and possibly without accompanying MN); malsegregation of chromosomes due to spindle or kinetochore defects or cell-cycle checkpoint malfunction (MN containing whole chromosomes or asymmetrical distribution of chromosome-specific centromere signals in the nuclei of BN cells); nuclear elimination of amplified DNA and/or DNA repair complexes (NBUD); chromosomal instability phenotype and breakage-fusion-bridge cycles (simultaneous expression of MN, NPB and NBUD); altered mitotic activity and/or cytostasis (NDI) and cell death by necrosis or apoptosis (ratios of necrotic and apoptotic cells) (Fenech, 2007).
Micronucleus in exfoliated buccal cells
Regeneration is dependent on the number and division rate of the proliferating (basal) cells, their genomic stability and their propensity for cell death. These events can be studied in the buccal mucosa (BM), which is an easily accessible tissue for sampling cells in a minimally invasive manner and does not cause undue stress to study subjects. This method is increasingly used in molecular epidemiology studies for investigating the impact of nutrition, lifestyle factors, genotoxin exposure and genotype on DNA damage, chromosome malsegregation and cell death (Thomas et al., 2009).
The assay has been used successfully to study DNA damage, as measured by MN or by the use of fluorescent probes; the frequency of basal cells in the BM is an indication of the regenerative capacity of this tissue. The BM provides a barrier to potential carcinogens that can be metabolized to generate reactive products. As up to 90% of all cancers appear to be epithelial in origin, the BM could be used to monitor early genotoxic events resulting from potential carcinogens entering the body through ingestion or inhalation. Exfoliated buccal cells have been used non-invasively and successfully to show the genotoxic effects of lifestyle factors such as tobacco smoking and chewing of betel nuts and/or quids, of medical treatments such as radiotherapy, of occupational exposure to potentially mutagenic and/or carcinogenic chemicals, and in studies of chemoprevention of cancer.
In this assay, cells derived from the BM are harvested from the inside of a patient's mouth using a small-headed toothbrush. The cells are washed to remove debris and bacteria, and a single-cell suspension is prepared and applied to a clean slide using a cytocentrifuge. The cells are stained with Feulgen and Light Green, producing permanent slides that allow both bright-field and fluorescence analysis under the microscope (Thomas et al., 2009).
The Buccal Mucosa Cytome (BMCyt) assay has been used to measure biomarkers of DNA damage (MN and/or nuclear buds), cytokinetic defects (binucleated cells) and proliferative potential (basal cell frequency) and/or cell death (condensed chromatin, karyorrhexis, pyknotic and karyolitic cells). The protocol can also make use of molecular probes for DNA adduct, aneuploidy and chromosome break measures within the nuclei of buccal cells. Furthermore, chromosome-specific centromeric probes have been used to measure aneuploidy by determining the frequency of nuclei with abnormal chromosome number. Tandem probes have been successfully applied to measure chromosome breaks in specific important regions of the genome (Thomas et al., 2009). The methodology and concepts described in this protocol may be applied to other types of exfoliated cells such as those of the bladder, nose and cervix but the morphological characteristics, sampling and scoring methods are neither properly described nor standardized for cells from these tissues (Thomas et al., 2009).
The time of sampling is also an important variable to consider. As the buccal cells turn over every 7-21 days, it is theoretically possible to observe the genotoxic effects of an acute exposure approximately 7-21 days later.
Ideally, repeat sampling, at least once every 7 days after acute exposure, should be performed for 28 days or more so that the kinetics and extent of biomarker induction can be thoroughly investigated. In the case of chronic exposure due to habitual diet, alcohol consumption or smoking, it is recommended that multiple samples are taken at least once every 3 months to take into account seasonal variation (Thomas et al., 2009).
The uniformity of sampling is one of the many aspects to consider; therefore a circular expanding motion is used with toothbrush sampling to cover a greater area and to avoid continual erosion of a single region of the BM. This is performed on the inside of both cheeks, using a different brush for sampling the left and right areas of the mouth, to maximize cell sampling and to eliminate any unknown biases that may be caused by sampling one cheek only. It is important to note that repeated vigorous brushing of the same area can lead to increased collection of cells from the less differentiated basal layer. Regarding transportation, in some investigations buccal cells may have to be collected at a site distant from the laboratory, which may cause sample deterioration. Regarding cell fixation, there are several alternative fixatives, such as methanol:glacial acetic acid (3:1), 80% methanol, or ethanol:glacial acetic acid (3:1). The recommended staining technique is Feulgen, because it is a DNA-specific stain and because permanent slides can be obtained that can be viewed under both transmitted and/or fluorescent light conditions. Many false-positive results in MN frequency arise from the use of Romanowsky-type stains such as Giemsa, May-Grünwald Giemsa and/or Leishman's, which leads to inaccurate assessment of DNA damage. Romanowsky stains have been shown to increase the number of false positives because they positively stain keratin bodies that are often mistaken for MN, and they are therefore not appropriate for this type of analysis. For these reasons, it is advisable to avoid Romanowsky stains in favour of DNA-specific fluorescent-based stains such as propidium iodide, DAPI, Feulgen, Hoechst 33258 or Acridine Orange (Thomas et al., 2009).
The scoring criteria are based on those originally described by Tolbert et al., which are intended to classify buccal cells into categories that distinguish between "normal" cells and cells considered "abnormal" on the basis of cytological and nuclear features indicative of DNA damage, cytokinetic failure or cell death. Some definitions of the cytological findings are as follows (Thomas et al., 2009): Normal "differentiated" cells have a uniformly stained nucleus, which is oval or round in shape. They are distinguished from basal cells by their larger size and by their smaller nucleus-to-cytoplasm ratio. No DNA-containing structures apart from the nucleus are observed in these cells. These cells are considered to be terminally differentiated relative to basal cells, as no mitotic cells are observed in this population.
Cells with MN are characterized by the presence of both a main nucleus and one or more smaller nuclear structures called MN. The MN are round or oval in shape and their diameter should range between 1/3 and 1/16 of that of the main nucleus. MN have the same staining intensity and texture as the main nucleus. Most cells with MN will contain only one MN, but it is possible to find cells with two or more MN. Baseline frequencies of micronucleated cells in the BM are usually within the range of 0.5-2.5 MN/1000 cells. Cells with multiple MN are rare in healthy subjects but become more common in individuals exposed to radiation or other genotoxic agents.
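As an illustration only, the sketch below applies the diameter criterion above to hypothetical measurements; the function name and the idea of obtaining per-structure diameters from an image-analysis step are assumptions, not part of the published scoring protocol.

```python
# Minimal sketch: apply the 1/3 to 1/16 diameter criterion for candidate MN.
# Diameters are assumed to come from a prior image-analysis step (hypothetical values).

def is_candidate_mn(mn_diameter_um: float, nucleus_diameter_um: float) -> bool:
    """Return True if the candidate structure satisfies the size criterion
    of 1/16 to 1/3 of the main nucleus diameter."""
    ratio = mn_diameter_um / nucleus_diameter_um
    return (1 / 16) <= ratio <= (1 / 3)

# Example: a 2.5 um structure next to a 10 um nucleus (ratio 0.25) qualifies,
# whereas a 0.5 um structure (ratio 0.05) does not.
print(is_candidate_mn(2.5, 10.0))  # True
print(is_candidate_mn(0.5, 10.0))  # False
```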
Cells with nuclear buds contain nuclei with an apparent sharp constriction at one end of the nucleus suggestive of a budding process, i.e. elimination of nuclear material by budding.
The NBUD and the nucleus are usually in very close proximity and appear to be attached to each other. The NBUD has the same morphology and staining properties as the nucleus; however, its diameter may range from a half to a quarter of that of the main nucleus. The mechanism leading to NBUD formation is not known but it may be related to the elimination of amplified DNA or DNA repair (Thomas et al., 2009).
Slides should be coded by a person not involved in the study so that scoring is performed blind. The optimal magnification for scoring is 1000×. An automated scoring procedure based on image cytometry has yet to be developed and validated. The authors suggest first determining the frequency of all the various cell types in a minimum of 1000 cells; following this step, the frequency of DNA damage biomarkers (MN and NBUD) is scored in a minimum of 2000 differentiated cells (Thomas et al., 2009).
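To make the scoring arithmetic concrete, the following sketch pools hypothetical counts from two observers and expresses MN and NBUD frequencies per 1000 differentiated cells; the numbers and variable names are illustrative assumptions, not study data.

```python
# Minimal sketch of the BMCyt scoring arithmetic with hypothetical counts.
# Two observers each score half of the cells; counts are pooled before
# frequencies are expressed per 1000 differentiated cells.

observer_counts = {
    "observer_1": {"cells_typed": 500, "differentiated_scored": 1000, "MN": 2, "NBUD": 1},
    "observer_2": {"cells_typed": 500, "differentiated_scored": 1000, "MN": 1, "NBUD": 0},
}

pooled = {key: sum(obs[key] for obs in observer_counts.values())
          for key in ["cells_typed", "differentiated_scored", "MN", "NBUD"]}

mn_per_1000 = 1000 * pooled["MN"] / pooled["differentiated_scored"]
nbud_per_1000 = 1000 * pooled["NBUD"] / pooled["differentiated_scored"]

print(f"MN frequency:   {mn_per_1000:.2f} per 1000 differentiated cells")
print(f"NBUD frequency: {nbud_per_1000:.2f} per 1000 differentiated cells")
```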
Ultimately, the results obtained with the BMCyt assay depend on the level of exposure to and potency of the genotoxic or cytotoxic agents, the genetic background, and the age and gender of the donor of the cells being tested (Thomas et al., 2009).
It is important to define the role of the BMCyt assay as a new tool in human biomonitoring, less invasive than the CBMN assay and with many potential applications in molecular epidemiology (Thomas et al., 2009).
Genotoxicity biomonitoring endpoints such as micronuclei, chromosome aberrations, 8-OHdG and DNA repair measured by the comet assay are the most commonly used biomarkers in studies evaluating environmental or occupational risks associated with exposure to potential genotoxins. A review by Knudsen and Hansen (2007) on the application of biomarkers of intermediate endpoints in environmental and occupational health concluded that MN in lymphocytes provided a promising approach for assessing health risks, but that the use of chromosome aberrations in future studies was likely to be limited by the laborious and sensitive procedure of the test and by the lack of trained cytogeneticists. Methodologies such as the comet assay in lymphocytes, urine and tissues are increasingly being used as markers of oxidative DNA damage (Battershill et al., 2008).
Studies investigating correlations between the endpoints used in genotoxicity biomonitoring have yielded inconsistent results: some studies report correlations between cytogenetic endpoints and the comet assay, whereas others find no correlation between micronuclei, chromosome aberrations and comet results. The relative sensitivities of the different endpoints, together with other factors that influence the persistence of the biomarkers, such as DNA repair, may plausibly affect background levels in the studies considered and would need to be taken into account before exploring the relationship between increases in genotoxicity endpoints and exposure to environmental chemicals or endogenous factors (Battershill et al., 2008).
Application of genotoxicity biomarkers in an occupational setting -Histopathology laboratories
A biomonitoring study was conducted in 7 histopathology laboratories in Portugal in order to assess genotoxic effects of occupational exposure to formaldehyde (FA).
FA is a reactive, flammable and colourless gas with a strong and very characteristic pungent odour that, when combined with air, can form explosive mixtures. FA occurs as an endogenous metabolic product of N-, O- and S-demethylation reactions in most living systems. It is used mainly in the production of resins and their applications, such as adhesives and binders in the wood products, pulp and paper, and synthetic vitreous fibre industries, in the production of plastics, coatings and textile finishing, and also as an intermediate in the synthesis of other industrial chemical compounds. Common non-occupational sources of exposure to FA include vehicle emissions, particle boards and similar building materials, carpets, paints and varnishes, food and cooking, tobacco smoke and its use as a disinfectant (Conaway et al., 1996; Franks, 2005; IARC, 2006; Pala et al., 2008; Viegas & Prista, 2007).
Commercially, FA is manufactured as an aqueous solution called formalin, usually containing 37 to 40% by weight of dissolved FA (Zhang et al., 2009), which is commonly used in histopathology laboratories as a cytological fixative to preserve the integrity of cellular architecture for diagnosis.
Exogenous FA can be absorbed following inhalation, dermal or oral exposure, with the level of absorption depending on the route of exposure. The International Agency for Research on Cancer (IARC) reclassified FA as a human carcinogen (Group 1) in June 2004 based on "sufficient epidemiological evidence that FA causes nasopharyngeal cancer in humans" (IARC, 2006; Zhang et al., 2009). In its review, IARC also concluded that there was "strong but not sufficient evidence for a causal association between leukaemia and occupational exposure to FA" (Zhang et al., 2009, 2010). However, some studies have led to mixed results and inconclusive evidence (Franks, 2005; Speit et al., 2010).
The inhalation of vapours can produce irritation of the eyes, nose and upper respiratory system. Occupational exposure to high FA concentrations may result in respiratory irritation and asthmatic reactions, and may also aggravate a pre-existing asthma condition. Skin reactions following exposure to FA are very common, because the chemical is both irritating and allergenic (Pala et al., 2008). FA induces genotoxic and cytotoxic effects in bacteria and in mammalian cells (Ye et al., 2005), and its genotoxicity and carcinogenicity have been demonstrated in experimental and epidemiological studies using proliferating cultured mammalian cell lines and human lymphocytes (Pala et al., 2008; Speit et al., 2007), through DNA-protein cross-links, chromosome aberrations, sister chromatid exchanges and MN (Zhang et al., 2009).
The goal of this study was to compare the frequency of genotoxicity biomarkers, as measured by the CBMN assay in peripheral lymphocytes and the MN test in buccal cells, between workers of histopathology laboratories exposed to FA and individuals not occupationally exposed to FA, taking into account other environmental factors, namely tobacco and alcohol consumption.
The study population consisted of 56 workers occupationally exposed to FA from 7 hospital histopathology laboratories located in Portugal (Lisbon and Tagus Valley region), and 85 administrative staff without occupational exposure to FA. The characteristics of both groups are described in Table 1.
Ethical approval for this study was obtained from the institutional Ethical Board and the Director of the participating hospitals, and all subjects gave informed consent to participate in this study. Every participant completed a questionnaire aimed at identifying exclusion criteria, such as history of cancer, radio- or chemotherapy, use of therapeutic drugs, exposure to diagnostic X-rays in the past six months, and intake of vitamins or other supplements such as folic acid, as well as collecting information related to working practices (such as years of employment and the use of protective measures). In this study, none of the participants were excluded.
Environmental monitoring of FA exposure
Exposure assessment was based on two air monitoring techniques conducted simultaneously. First, environmental samples were obtained by air sampling with low-flow pumps for 6 to 8 hours during a typical working day. FA levels were measured by gas chromatography analysis, and the time-weighted average (TWA8h) was estimated according to the National Institute of Occupational Safety and Health method NIOSH 2541 (NIOSH, 1994).
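For illustration, the sketch below shows the standard 8-hour time-weighted average calculation (TWA8h = Σ Ci·ti / 8 h) applied to hypothetical concentrations; it is not the NIOSH 2541 analytical procedure itself, and it assumes that unsampled time contributes zero exposure.

```python
# Minimal sketch of an 8-hour time-weighted average (TWA) calculation.
# Concentrations and durations are hypothetical, not measurements from this study.
# Standard formula: TWA_8h = sum(C_i * t_i) / 8, with t_i in hours and
# unsampled time assumed to carry zero exposure.

samples = [
    (0.25, 2.0),  # (concentration in ppm, duration in hours)
    (0.10, 4.0),
    (0.05, 2.0),
]

twa_8h = sum(c * t for c, t in samples) / 8.0
print(f"TWA 8h = {twa_8h:.3f} ppm")  # 0.125 ppm in this hypothetical example
```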
The second method was aimed at measuring ceiling values of FA using Photo Ionization Detection (PID) equipment (11.7 eV lamps) with simultaneous video recording. Instantaneous FA concentration values were obtained on a per-second basis. This method allows a relationship to be established between workers' activities and FA concentration values, as well as revealing the main exposure sources (McGlothlin et al., 2005; Viegas et al., 2010).
Measurements and sampling were performed in a macroscopy room equipped with fume hoods, always near the workers' breathing zone.
Biological monitoring
Evaluation of genotoxic effects was performed by applying the CBMN assay in peripheral blood lymphocytes and exfoliated cells from the buccal mucosa.
Whole blood and exfoliated cells from the buccal mucosa were collected between 10 a.m. and 12 p.m., from every subject and were processed for testing. All samples were coded and analyzed under blind conditions. The criteria for scoring the nuclear abnormalities in lymphocytes and MN in the buccal cells were the ones described by, respectively, Fenech et al. (1999) and Tolbert et al. (1991).
Heparinized blood samples were obtained by venipuncture from all subjects, and freshly collected blood was used directly for the micronucleus test. Lymphocytes were isolated using a Ficoll-Paque gradient and placed in RPMI 1640 culture medium with L-glutamine and phenol red, supplemented with 10% inactivated fetal calf serum, 50 µg/mL streptomycin + 50 U/mL penicillin, and 10 µg/mL phytohaemagglutinin. Duplicate cultures from each subject were incubated at 37 °C in a humidified 5% CO2 incubator for 44 h, and cytochalasin-B (6 µg/mL) was then added to the cultures in order to block cytokinesis. After a further 28 h of incubation, cells were spun onto microscope slides using a cytocentrifuge. Smears were air-dried, double stained with May-Grünwald-Giemsa and mounted with Entellan®. One thousand cells were scored for each individual by two independent observers on a total of two slides, each observer visualizing 500 cells per individual. Cells from the buccal mucosa were sampled by endobrushing. Exfoliated cells were smeared onto slides and fixed with Mercofix®. The standard protocol used was the Feulgen staining technique without counterstain. Two thousand cells were scored for each individual by two independent observers on a total of two slides, each observer visualizing 1000 cells per individual. Only cells containing intact nuclei that were neither clumped nor overlapping were included in the analysis.
Statistical analysis
The deviation of variables from the normal distribution was evaluated with the Shapiro-Wilk goodness-of-fit test. The association between each of the genotoxicity biomarkers and occupational exposure to FA was evaluated by binary logistic regression. The biomarkers were dichotomized (absent/present) and considered the dependent variable in regression models in which exposure was an independent variable. Odds ratios were computed to evaluate the risk of biomarker presence, and their significance was assessed. The non-parametric Kruskal-Wallis and Mann-Whitney U-tests were also used to evaluate interactions involving confounding factors. All statistical analyses were performed using the SPSS package for Windows, version 15.0.
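As a hedged illustration of this analysis pipeline, the sketch below reproduces the same sequence of tests on simulated data using Python (SciPy and statsmodels) rather than SPSS; the column names, sample values and cut-offs are assumptions, not study data.

```python
# Minimal sketch of the statistical analysis described above, on hypothetical data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, 141),      # 1 = FA-exposed, 0 = control (hypothetical)
    "mn_lymphocytes": rng.poisson(3, 141),   # hypothetical biomarker counts
})

# Normality check (Shapiro-Wilk)
w, p_norm = stats.shapiro(df["mn_lymphocytes"])

# Binary logistic regression: biomarker presence (absent/present) vs exposure
df["mn_present"] = (df["mn_lymphocytes"] > 0).astype(int)
model = sm.Logit(df["mn_present"], sm.add_constant(df["exposed"])).fit(disp=0)
odds_ratio = np.exp(model.params["exposed"])

# Non-parametric group comparison (Mann-Whitney U)
u, p_mw = stats.mannwhitneyu(df.loc[df.exposed == 1, "mn_lymphocytes"],
                             df.loc[df.exposed == 0, "mn_lymphocytes"],
                             alternative="two-sided")

print(f"Shapiro-Wilk p = {p_norm:.3f}, OR(exposed) = {odds_ratio:.2f}, Mann-Whitney p = {p_mw:.3f}")
```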
FA exposure levels
FA exposure values were determined using the two methods described: the NIOSH 2541 method for average concentrations (TWA8h) and the PID method for ceiling concentrations. For the first exposure metric, the mean FA level for the 56 individuals studied was 0.16 ppm (0.04-0.51 ppm), a value that lies below the OSHA reference value of 0.75 ppm. The mean ceiling concentration found in the laboratories was 1.14 ppm (0.18-2.93 ppm), a value well above the reference of the American Conference of Governmental Industrial Hygienists (ACGIH) for ceiling concentrations (0.3 ppm). As for the different tasks performed in histopathology laboratories, the highest FA concentration was identified during the macroscopic specimen exam. This task involves careful observation and grossing of the specimen preserved in FA, and therefore entails direct and prolonged contact with its vapors.
Genotoxicity biomarkers
For all genotoxicity biomarkers under study, workers exposed to FA had significantly higher mean values than the controls (Table 3).
The odds ratios indicate an increased risk for the presence of biomarkers in those exposed to FA, compared to non-exposed (Table 4).
Table 4. Results of binary logistic regression concerning the association between FA and genotoxicity biomarkers, as evaluated by the odds ratio (OR).
Regarding the impact of the duration of exposure to FA, the mean values of MN in lymphocytes and in buccal cells tended to increase with years of exposure (Table 5). Age and gender are considered the most important demographic variables affecting the MN index. However, Table 6 shows that the means of all the genotoxicity biomarkers did not differ between men and women within either the exposed group or the controls (p > 0.05).
Table 6. Descriptive statistics by gender of MN in lymphocytes and buccal cells, NPB and NBUD means in the two groups (mean ± mean standard error, range).
In order to examine the effect of age, exposed and non-exposed individuals were stratified by age group: 20-30, 31-40, and ≥ 41 years old (Table 7). There was no consistent trend in the variation of biomarkers with age, the only exception being MN in lymphocytes in the exposed group (Kruskal-Wallis, p = 0.006), where the higher means were found in the older groups. According to the Mann-Whitney test, there was a statistically significant difference between the youngest and the oldest groups (20-30 and ≥ 41 years old, p = 0.02); however, the comparisons between the 20-30 and 31-40 groups (p = 0.262) and between the 31-40 and ≥ 41 groups (p = 0.065) did not reach statistical significance.
Table 7. Age effects on descriptive statistics of MN in lymphocytes and buccal cells, NPB and NBUD means in the studied population (mean ± mean standard error, range).
The interaction between age and gender in determining the frequencies of genotoxicity biomarkers was investigated and found to be significant only for MN in lymphocytes in exposed subjects (Kruskal-Wallis, p = 0.04). In general, MN tended to be more frequent in the ≥ 41 years old category in both genders; however, women had the higher means.
Regarding smoking habits, a non-parametric analysis rejected the null hypothesis that biomarker frequencies are the same across the four categories (control smokers and non-smokers, exposed smokers and non-smokers) (Kruskal-Wallis, p < 0.001). However, the analysis of the interactions between FA exposure and tobacco smoking in exposed and control subjects (Mann-Whitney test) showed that FA exposure, rather than tobacco, has a preponderant effect upon the determination of biomarker frequencies. In the control group, non-smokers had slightly higher MN means in buccal cells than smokers, although the result did not reach statistical significance (Mann-Whitney, p > 0.05).
As for alcohol consumption, because the intake reported in questionnaires may differ considerably from real consumption, all consumers were grouped into a single category and contrasted with non-consumers. Nevertheless, no one acknowledged having "heavy drinking habits" in the questionnaires.
Overall, biomarkers in controls exhibited higher mean frequencies among alcohol consumers than among non-consumers. Among those exposed, however, mean frequencies were slightly lower among drinkers, suggesting that FA exposure was the predominant factor in determining the high biomarker frequencies of those exposed. Differences between drinkers and non-drinkers were not statistically significant, with the exception of MN in lymphocytes in controls (Mann-Whitney, p = 0.011), for which drinkers had higher means. The interaction between alcohol consumption and smoking habits was statistically significant (Kruskal-Wallis, p = 0.043), as subjects who neither smoke nor drink tend to have lower frequencies of MN in buccal cells than those who both drink and smoke, with a gradient of frequencies in between.
Discussion
Long exposures to FA, such as those to which some workers are subjected for occupational reasons, are suspected to be associated with genotoxic effects that can be evaluated by biomarkers (Conaway et al., 1996; IARC, 2006; Viegas & Prista, 2007; Zhang et al., 2009). In this study, the results suggest that workers in histopathology laboratories are exposed to FA levels that exceed recommended exposure limits. The macroscopic specimen exam, in particular, is the task that involves the highest exposure, because it requires greater proximity to anatomical specimens impregnated with FA, as supported by the studies of Goyer et al. (2004) and Orsière et al. (2006).
A statistically significant association was found between FA exposure and biomarkers of genotoxicity, namely MN in lymphocytes, NPB, NBUD and MN in buccal cells. Chromosome damage and effects upon lymphocytes arise because FA escapes from sites of direct contact, such as the mouth, causing nuclear alterations in the lymphocytes of those exposed (He & Jin, 1998; IARC, 2006; Orsière et al., 2006; Ye et al., 2005). Our results thus corroborate previous reports (Ye et al., 2005) that lymphocytes can be damaged by long-term FA exposure. Moreover, the changes in peripheral lymphocytes indicate that the cytogenetic effects triggered by FA can reach tissues far away from the site of initial contact (Suruda et al., 1993). Long-term exposures to high concentrations of FA indeed appear to have a potential for DNA damage; these effects were well demonstrated in experimental studies with animals, which showed local genotoxic effects following FA exposure, namely DNA-protein cross-links and chromosome damage (IARC, 2006).
In humans, FA exposure is associated with an increase in the frequency of MN in buccal epithelial cells (Burgaz et al., 2002; Speit et al., 2006, 2007b), as corroborated by the results presented here. Suruda et al. (1993) claim that although changes in oral and nasal epithelial cells and peripheral blood cells do not indicate a direct mechanism leading to carcinogenesis, they present evidence that DNA alteration took place. It thus appears reasonable to conclude that FA is a cancer risk factor for those who are occupationally exposed in histopathology laboratories (IARC, 2006).
MN and NPB measured in lymphocytes had higher means in pathologists than in technologists. This result can be explained by the higher concentrations to which pathologists are exposed when performing the macroscopic exam. Moreover, the mode of action of this chemical appears to be related more to the concentration than to the time of exposure expressed by the TWA results.
In epidemiological studies, it is important to evaluate the role played by common confounding factors, such as gender, age, smoking and alcohol consumption, in the association between disease and exposure (Bonassi et al., 2001; Fenech et al., 1999). Concerning gender, studies by Fenech et al. (1999) and Wojda et al. (2007) reported that biomarker frequencies were greater in females than in males by a factor of 1.2 to 1.6, depending on the age group. With the exception of MN in the buccal cells of controls, the results presented here point to females having higher frequencies than males for all genotoxicity biomarkers, although the differences usually lacked statistical significance. This trend is concordant with previous studies that reported a higher MN frequency in lymphocytes in females and a slightly higher MN frequency in buccal cells in males (Holland et al., 2008), and it can be explained by preferential aneugenic events involving the X chromosome. A possible explanation is the micronucleation of the X chromosome, which has been shown to occur in lymphocytes of females, both in vitro and in vivo, and which can be accounted for by the presence of two X chromosomes; this might explain the preferential micronucleation of the inactive X (Catalán et al., 1998, 2000a, 2000b). Ageing in humans appears to be associated with genomic instability. Cytogenetically, ageing is associated with a number of gross cellular changes, including altered size and morphology, genomic instability and changes in expression and proliferation (Bolognesi et al., 1999; Zietkiewicz et al., 2009). It has been shown that a higher MN frequency is directly associated with decreased efficiency of DNA repair and increased genome instability (Kirsch-Volders et al., 2006; Orsière et al., 2006). Our data showed a significant increase of MN in lymphocytes in the exposed group. This can be explained in light of genomic instability, understood as an increased amount of mutations and/or chromosomal aberrations that cytogenetically translate into a greater frequency of changes in chromosome number and/or structure and in the formation of micronuclei (Zietkiewicz et al., 2009). The involvement of micronucleation in age-related chromosome loss has been supported by several studies showing that the rate of MN formation increases with age, especially in women (Catalán et al., 1998). This study provides evidence that age and gender interact to determine the frequency of MN in the lymphocytes of exposed subjects. The higher incidence of MN in both genders is more manifest in the older age groups, and the effect of gender becomes more pronounced as age increases. Several reports link this observation to an elevated loss of X chromosomes (Battershill et al., 2008).
Tobacco smoke has been epidemiologically associated with a higher risk of cancer development, especially in the oral cavity, larynx and lungs, as these are sites of direct contact with tobacco's carcinogenic compounds. In this study, smoking habits did not influence the frequency of the genotoxicity biomarkers; moreover, the frequencies of MN in buccal cells were unexpectedly higher in exposed non-smokers than in exposed smokers, though the difference was not statistically significant. In most reports, the results on the effect of tobacco upon the frequency of MN in human lymphocytes were negative, and in many instances smokers had lower MN frequencies than non-smokers (Bonassi et al., 2003).
In the current study, the analysis of the interaction between FA exposure and smoking habits indicates that exposure is preponderant in determining the frequency of biomarkers. Nevertheless, the effect of smoking upon biomarkers remains controversial. Some studies reported an increased frequency of MN in lymphocytes, NPB and NBUD as a consequence of the tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK); still, in this study no associations were observed between tobacco and nuclear abnormalities (El-Zein et al., 2006). As for alcohol consumption, it did not appear to influence the frequency of the genotoxicity biomarkers under study, with the exception of MN in lymphocytes in controls (Mann-Whitney, p = 0.011), with drinkers having higher means. Alcohol is a recognized genotoxic agent, cited as being able to potentiate the development of carcinogenic lesions (Ramirez & Saldanha, 2002). In our study, drinkers in the control group had higher mean frequencies of all biomarkers than non-drinkers, but the differences were only significant for MN in lymphocytes. Stich and Rosin (1983), in a study of alcoholic individuals, reported an absence of significant differences in MN frequencies in buccal cells; this is relevant in corroborating our result, given the lack of "heavy drinkers" in our study. The same study concluded that neither alcohol nor smoking alone increases MN frequency in buccal cells, but that a combination of both resulted in a significant elevation of micronucleated cells in the buccal mucosa. However, the synergism between alcohol consumption and tobacco has not been observed to act upon all biomarkers and, in several studies of lifestyle factors, it was difficult to differentiate the effect of alcohol from that of smoking (Holland et al., 2008).
The CBMN assay is a simple, practical, low-cost screening technique that can be used for the clinical prevention and management of workers subjected to occupational carcinogenic risks, namely exposure to a genotoxic agent such as FA. The results obtained in this study provide unequivocal evidence of an association between occupational exposure to formaldehyde in histopathology laboratory workers and the presence of nuclear changes.
Given these results, preventive actions must prioritize safety conditions for those who perform macroscopic exams. In general, exposure reduction to FA in this occupational setting may be achieved by the use of adequate local exhaust ventilation and by keeping biological specimen containers closed during the macroscopic exam.
Conclusion
Another important application of biological monitoring, besides exposure assessment, is the use of biomarkers, at either individual or group level, for the correct interpretation of doubtful clinical tests. These are usually performed as part of occupational health surveillance program when exposure assessment data are unavailable or are deemed unreliable. Health surveillance is the periodical assessment of the workers' health status by clinical, biochemical, imaging or instrumental testing to detect any clinically relevant, occupation-dependent change of the single worker's health. Biomarkers are usually more specific and sensitive than most clinical tests and may be more effective, therefore, for assessing a causal relationship between health impairment and chemical exposure when a change is first detected in exposed workers (Manno et al., 2010).
Experience in biological monitoring gained in the occupational setting has often been applied to assess (the effects of) human exposure to chemicals in the general environment. The use of biological fluids/tissues for the assessment of human exposure, effect or susceptibility to chemicals in the workplace represents, together with the underlying data (e.g. personal exposure and biological monitoring measurements, media-specific residue measurements, product use and time-activity information), a critical component of the occupational risk assessment process, a rapidly advancing science (Manno et al., 2010). Au et al. (1998) advise putting more emphasis on monitoring populations known to be exposed to hazardous environmental contaminants and on providing reliable health risk evaluation. This information can also be used to support regulations on protection of the environment.
"year": 2012,
"sha1": "f920e7220576ce18fc9476b8ab8407e79890d92c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/37621",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f264d6446289f339967a5da1293ed42d1e350507",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Identification of prognostic and therapeutic value of CC chemokines in Urothelial bladder cancer: evidence from comprehensive bioinformatic analysis
Background Urothelial bladder cancer (BC) is one of the most prevalent malignancies, with high mortality and a high recurrence rate. Angiogenesis, tumor growth and metastasis of multiple cancers are partly modulated by CC chemokines. However, we know little about the function of distinct CC chemokines in BC. Methods ONCOMINE, Gene Expression Profiling Interactive Analysis (GEPIA), Kaplan–Meier plotter, cBioPortal, GeneMANIA, and TIMER were used to analyze differential expression, prognostic value, protein–protein interaction, genetic alteration and immune cell infiltration of CC chemokines in BC patients based on bioinformatics. Results The results showed that transcriptional levels of CCL2/3/4/5/14/19/21/23 in BC patients were significantly reduced. A significant relation was observed between the expression of CCL2/11/14/18/19/21/23/24/26 and the pathological stage of BC patients. BC patients with high expression levels of CCL1, CCL2, CCL3, CCL4, CCL5, CCL8, CCL13, CCL15, CCL17, CCL18, CCL19, CCL22, CCL25 or CCL27 had a significantly better prognosis. Moreover, we found that differentially expressed CC chemokines are primarily correlated with cytokine activity, chemokine receptor binding, chemotaxis and immune cell migration. Further, there were significant correlations between the expression of CC chemokines and the infiltration of several types of immune cells (B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils, and dendritic cells). Conclusions This study analyzes the potential value of CC chemokines as therapeutic targets and prognostic biomarkers in BC, giving novel insight into the relationship between CC chemokines and BC.
Cystectomy has not proved to increase recurrence-free survival (RFS) and overall survival (OS) to the extent expected, even with extended removal of lymph nodes [9].
Chemokines, constituting the largest family of cytokines, are chemotactic cytokines that mediate immune cell migration and lymphoid tissue growth [10]. Sequencing and gene expression studies have found that CC chemokines may play an important role in the tumorigenesis and progression of distinct tumors [11][12][13]. Previous studies have identified several CC chemokines associated with disease-specific survival [14] and with tumor growth and progression [12]. Studies have indicated that CC chemokines may affect the abundance, infiltration and accumulation of immune cells [15,16]. Thus, CC chemokines have multiple functions in tumor progression and invasion, and they serve as prognostic biomarkers for many types of tumors, including BC. However, the expression and prognostic value of CC chemokines in BC still remain unclear.
In this study, we performed a comprehensive analysis of CC chemokines to evaluate their potential value as therapeutic targets and prognostic biomarkers based on several large public databases, thus providing information to help clinicians select appropriate therapeutic drugs and estimate prognosis more accurately in BC patients.
ONCOMINE
The mRNA levels of distinct CC chemokines in diverse cancer types were analysed in ONCOMINE (www.oncomine.org), an online database providing powerful, genome-wide expression analysis with cancer microarray information [17]. In this study, a p-value < 0.05, a fold change of 2, and a gene rank in the top 10% were set as the significance thresholds. The mRNA expression of CC chemokines in clinical cancer specimens was compared with that in normal controls. Student's t-test was used to analyze the difference in the expression of CC chemokines in BC.
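As a simple illustration of how these thresholds act together as a combined filter, the sketch below screens a small hypothetical differential-expression table; the values and column names are assumptions and this is not ONCOMINE output.

```python
# Minimal sketch of applying the significance thresholds described above
# (p < 0.05, |fold change| >= 2, gene rank within the top 10%) to a
# hypothetical differential-expression table.
import pandas as pd

genes = pd.DataFrame({
    "gene":        ["CCL2", "CCL5", "CCL14", "CCL17"],
    "p_value":     [0.001, 0.03, 0.0004, 0.20],
    "fold_change": [-2.5, -1.8, -3.1, 1.2],      # tumour vs normal (hypothetical)
    "rank_pct":    [3, 12, 1, 40],               # gene rank percentile (hypothetical)
})

selected = genes[(genes.p_value < 0.05)
                 & (genes.fold_change.abs() >= 2)
                 & (genes.rank_pct <= 10)]
print(selected["gene"].tolist())  # ['CCL2', 'CCL14'] under these hypothetical values
```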
GEPIA
GEPIA (http://gepia.cancer-pku.cn/index.html) is a new analytical tool that uses a standard processing pipeline and contains data from thousands of tumor and normal tissue samples [18]. In this research, differential analysis of mRNA expression between tumor and normal tissues, pathological stage analysis, and prognostic analysis were performed through GEPIA. Student's t-test was used to generate p-values for the expression and pathological stage analyses.
Kaplan-Meier plotter
The prognostic value of CC chemokines in BC patients was also analysed using the Kaplan-Meier plotter (http://kmplot.com/analysis/) [19], an online tool for assessing the association of gene expression with patient survival. Data such as the number-at-risk cases, median values of mRNA expression levels, HRs, 95% CIs and p-values can be obtained from the Kaplan-Meier plotter webpage. A statistically significant difference was considered when the p-value was < 0.05. Patient samples were split into two groups by median expression (high versus low expression) and assessed by a Kaplan-Meier survival plot.
cBioPortal
cBioPortal (www.cbioportal.org) is a comprehensive web resource that can visualize and analyze multidimensional cancer genomics data [20]. Genetic alterations and co-expression of CC chemokines were obtained from cBioPortal, based on The Cancer Genome Atlas (TCGA) database.
STRING
STRING (https://string-db.org/) is a website that provides a comprehensive and objective global network of protein-protein interactions (PPI) [21]. A PPI network analysis was performed through STRING to collect and integrate the differentially expressed CC chemokines and their potential interactions.
GeneMANIA
GeneMANIA (http://www.genemania.org) is a website that provides gene information, analyses gene lists and prioritizes genes for functional assays [22]. The potential interactions between different CC chemokines were analysed with it.
TIMER
TIMER (https://cistrome.shinyapps.io/timer/) is a web interface that provides systematic evaluations of the infiltration of different immune cells and their clinical impact [23]. In this study, the "Gene module" was selected to evaluate the correlation between CC chemokine levels and the infiltration of immune cells. The "Survival module" was used to evaluate the correlation among clinical outcome, the infiltration of immune cells and CC chemokine expression.
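The sketch below illustrates, on simulated values, the kind of gene-infiltration correlation that TIMER reports; it uses a plain Spearman correlation and omits TIMER's tumour-purity adjustment, and all data and variable names are assumptions.

```python
# Minimal sketch of correlating chemokine expression with estimated immune-cell
# infiltration, using a plain Spearman correlation on hypothetical data.
# (TIMER additionally adjusts for tumour purity; that step is omitted here.)
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
ccl5_expression = rng.normal(5, 1, 100)                              # hypothetical log2 expression
cd8_infiltration = 0.1 * ccl5_expression + rng.normal(0, 0.2, 100)   # hypothetical infiltration estimates

rho, p = spearmanr(ccl5_expression, cd8_infiltration)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```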
Prognostic value of the mRNA expression of CC chemokines in BC patients
We explored the value of differentially expressed CC chemokines in the progression of BC patients. According to the data from GEPIA (not including CCL1), patients with higher levels of CCL14 (P = 0.0036) (Fig. 4) showed shorter overall survival (OS), whereas OS tended to be longer in patients with higher levels of CCL15 (P = 0.00069) (Fig. 4). The current results did not show a significant relation between overall survival (OS) or disease-free survival (DFS) and the other CC chemokines.
Besides, we also analysed the prognostic value of CC chemokines in BC patients using the Kaplan-Meier plotter (Fig. 5). Significantly increased OS and DFS were observed in patients with higher levels of CCL3, CCL4, CCL5, CCL13 or CCL27. Patients with higher levels of CCL1, CCL2, CCL8, CCL18, CCL19, CCL22, CCL24 or CCL25 showed increased DFS, and higher levels of CCL15 and CCL17 were correlated with longer OS. However, there was a significant negative correlation of CCL11, CCL24 and CCL26 with OS, and of CCL28 with DFS.
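A minimal sketch of the median-split Kaplan-Meier comparison with a log-rank test that underlies results of this kind is shown below; it uses the lifelines package on simulated data, and the column names and values are assumptions rather than Kaplan-Meier plotter output.

```python
# Minimal sketch of a median-split Kaplan-Meier comparison with a log-rank test,
# on hypothetical data (lifelines package); not the Kaplan-Meier plotter itself.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "expression": rng.normal(5, 1, 200),       # hypothetical chemokine expression
    "time_months": rng.exponential(40, 200),   # hypothetical follow-up times
    "event": rng.integers(0, 2, 200),          # 1 = death observed, 0 = censored
})
df["group"] = np.where(df["expression"] > df["expression"].median(), "high", "low")

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time_months"], sub["event"], label=name)
    print(f"{name}: median survival = {kmf.median_survival_time_:.1f} months")

high, low = df[df.group == "high"], df[df.group == "low"]
result = logrank_test(high["time_months"], low["time_months"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.3f}")
```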
Immune cell infiltration of CC chemokines in BC patients
To explore the relation between immune cell infiltration and cancer cells, an analysis was performed using the TIMER database. The results (Fig. 7) showed significant correlations between the expression of CC chemokines and the infiltration of B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils and dendritic cells.
Discussion
Bladder cancer is one of the most common causes of cancer-related deaths worldwide [24]. CC chemokines, which can be expressed by tumor cells and other cells, play an important role in immune cell trafficking to tumors [25][26][27], tumor metastasis [28] and apoptosis [29]. Accumulating evidence has revealed the potential value of CC chemokines in cancer immunotherapy. However, the prognostic and possible therapeutic value of CC chemokines in BC is not yet defined. Among the CC chemokines, CCL2 is the most studied in BC. Expression of CCL2 was higher in BC tissues and in human BC cell lines, and this trend became more obvious with increasing stage of BC [30]. Reduced expression of CCL2, downregulated by miR-1-3p, could inhibit the metastasis and proliferation of BC cells [30]. Besides, an animal experiment also showed increased CCL2 expression in a murine bladder cancer cell line [31]. Recent studies have revealed a negative relationship between CCL2 and prognosis and survival in BC patients who received chemotherapy [16,32], while gemcitabine-treated BC cells also induced more CCL2, which may recruit more monocytic myeloid-derived suppressor cells (M-MDSCs) and lead to poor prognosis [33]. Several studies demonstrated that overexpression of CCL2 in bladder cancer was correlated with tumor invasion, tumor progression [34] and lymphatic metastasis [35]. HSP47 [36], LNMAT1 [35] and ERβ [37] seem to be related to CCL2 directly or indirectly. In this study, however, the results indicated that the expression level of CCL2 was reduced in BC compared with normal samples. Moreover, low CCL2 expression was significantly correlated with poor DFS.
For the other CC chemokines, a previous study demonstrated that CCL1 can be up-regulated by estrogen receptor alpha and then enhance bladder cancer cell invasion [38]. The CCL1/CCR axis was found to be correlated with cancer-related inflammation and immune evasion [39]. Besides, GAS5 may inhibit bladder cancer cell proliferation by suppressing the expression of CCL1 [40]. However, our results did not reveal a significant difference in CCL1 expression between BC and normal patients. In vitro experiments found that upregulated CCL3 inhibits the immune response, which would favor tumor growth [31]. Interestingly, higher CCL3 expression seems to be correlated with better OS and DFS in our study. Steve et al. [41] found that CCL18 was significantly increased in voided urine of BC patients, but it seems not to be related to bladder cancer grade or stage [42]. Studies have shown that CCL18 may enhance migration and invasion by binding CCR8 in bladder cancer cells [43]. According to our study, there is no difference in CCL18 expression between normal individuals and BC patients; in contrast to previous studies, patients with a higher level of CCL18 were associated with better DFS in this study. Feng et al. [44] found that the CCL21/CCR7 axis promotes the migration and invasion capacity of urinary bladder cancer cells and induces lymphatic metastatic spread. In the present study, CCL21 did not show a significant influence on OS or DFS. We also found that the expression levels of CCL4/5/14/19/21/23 were lower in BC patients than in normal individuals, whereas in certain data sets CCL13 was increased significantly in BC patients compared with normal individuals. Various means of data collection in different studies may be the reason for the differential expression of CCL13. In addition, CCL2, CCL11, CCL14, CCL18, CCL19, CCL21, CCL23, CCL24 and CCL26 were markedly related to clinical stage in BC patients, and CC chemokines were related to cytokine activity, chemokine receptor binding, chemotaxis and immune cell migration. In this study, we found a significant correlation between the expression of CC chemokines and the infiltration of several immune cell types, indicating that CC chemokines may also play a significant role in immune activity.
One of the limitations of our study was that a detailed description or stratified analysis of the characteristics of the patients and the clinical subtypes of BC was missing. The data in our study were extracted from several online databases and different studies. In particular, part of the information on the clinical course of the bladder tumors and the patients' baseline characteristics is incomplete in these studies. Among these patients, stage 0is bladder urothelial carcinoma, superficial bladder cancer and infiltrating bladder urothelial carcinoma are all included; therefore, a detailed description or stratified analysis of these items related to diagnosis or therapy was not possible. Moreover, in Figs. 4 and 5, several of the survival curves cross each other, which limits the usefulness of the log-rank test for comparing survival outcomes in our study. Further analyses, such as parametric regression models, should be performed for these comparisons. Regrettably, we failed to establish univariate and multivariable Cox proportional-hazards models because some relevant information was missing. Furthermore, it is a limitation of our study that adjusted p-values were not employed to control the family-wise error rate.
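As one possible parametric alternative where survival curves cross, the sketch below fits a Weibull accelerated-failure-time model with the lifelines package on simulated data; this is an illustrative assumption of how such an analysis might look, not an analysis performed in this study.

```python
# Minimal sketch of a parametric alternative to the log-rank test: a Weibull
# accelerated-failure-time model (lifelines), fitted on hypothetical data.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "time_months": rng.exponential(40, 200),     # hypothetical follow-up times
    "event": rng.integers(0, 2, 200),            # 1 = death observed, 0 = censored
    "high_expression": rng.integers(0, 2, 200),  # 1 = above-median chemokine expression
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_months", event_col="event")
print(aft.summary[["coef", "p"]])  # effect of high_expression on survival time
```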
Conclusions
In this research, we analyzed the prognostic and therapeutic value of CC chemokines in BC. Our results indicate that CC chemokines might play an important role in BC oncogenesis and point to potential therapeutic targets in BC. We hope our results provide novel insights into the therapeutic targets of BC and help clinicians devise better personalized treatment plans. However, further studies are needed.
"year": 2021,
"sha1": "3eea5182ac3f20471c5a0732eb2949fb678ecd28",
"oa_license": "CCBY",
"oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/s12894-021-00938-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3eea5182ac3f20471c5a0732eb2949fb678ecd28",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Healthy Parent Carers programme: development and feasibility of a novel group-based health-promotion intervention
Background Parent carers of disabled children report poor physical health and mental wellbeing. They experience high levels of stress and barriers to engagement in health-related behaviours and with 'standard' preventive programmes (e.g. weight loss programmes). Interventions promoting strategies to improve health and wellbeing of parent carers are needed, tailored to their specific needs and circumstances. Methods We developed a group-based health promotion intervention for parent carers by following six steps of the established Intervention Mapping approach. Parent carers co-created the intervention programme and were involved in all stages of the development and testing. We conducted a study of the intervention with a group of parent carers to examine the feasibility and acceptability. Standardised questionnaires were used to assess health and wellbeing pre- and post-intervention and at 2-month follow-up. Participants provided feedback after each session and took part in a focus group after the end of the programme. Results The group-based Healthy Parent Carers programme was developed to improve health and wellbeing through engagement with eight achievable behaviours (CLANGERS – Connect, Learn, be Active, take Notice, Give, Eat well, Relax, Sleep), and by promoting empowerment and resilience. The manualised intervention was delivered by two peer facilitators to a group of seven parent carers. Feedback from participants and facilitators was strongly positive. The study was not powered or designed to test effectiveness, but changes in measures of participants' wellbeing and depression were in a positive direction both at the end of the intervention and 2 months later, which suggests there may be potential to achieve benefit. Conclusions The Healthy Parent Carers programme appears feasible and acceptable. It was valued by, and was perceived to have benefited, participants. The results will underpin future refinement of the intervention and plans for evaluation.
Multiple child, family and environmental factors can affect parent carers' health and wellbeing, and might contribute to their poor health (see 'needs assessment' below). Some factors are difficult to change, but others could be more easily modified and therefore targeted by interventions. The particular life circumstances of parent carers may both have adverse effects on their health and be a barrier to participation in health-promoting activities. These barriers may relate to difficulties with access, because of the demands on their time and energy, and to a feeling that activities may lack direct relevance to the complexities of their life experience. For this reason standard health promotion interventions may be inappropriate, but there is a paucity of interventions that target parent carers and are specifically tailored to their needs.
The aims of this research were: (1) to develop an intervention to promote health and wellbeing of parent carers, and (2) to test the feasibility and acceptability of the intervention. In this paper, we describe the development of the Healthy Parent Carers intervention, and report on an initial testing of the feasibility of delivering the intervention.
Stakeholders involvement
The research had a strong ethos of meaningful engagement and partnership with parent carers as the intended 'end users' of the intervention. A group of parents of children with neurodisability from the research unit's Family Faculty public involvement group worked closely with researchers to develop the intervention and the research plan. The working group met on 11 occasions and included 39 parent carers, of whom 21 attended at least one meeting during the development phase, and some contributed by phone or email.
Parent carers co-created the intervention by contributing to all stages of intervention development and testing. They proposed the idea for the project and helped to identify their specific needs through personal experiences. They advised on the design of the feasibility study, interpretation of its results, programme refinement, and future directions of this research. The details of the working group engagement, including meeting notes, are available online [13].
Other stakeholders consulted in this research included NHS health trainers, representatives from the local authority, and colleagues from the National Network of Parent Carer Forums and the Council for Disabled Children.
Intervention development
The Healthy Parent Carers (HPC) programme was developed based on the Intervention Mapping approach [14], which is a systematic approach to developing health promotion interventions. We used six steps: (1) needs assessment, (2) developing programme outcomes and change objectives, (3) selecting methods and practical applications, (4) designing programme components, (5) testing intervention feasibility and acceptability and incorporating feedback, and (6) planning intervention adoption, implementation and evaluation. Table 1 outlines the key tasks completed in each step of the HPC intervention development (with Step 6 currently ongoing).
Needs assessment (step 1)
In Step 1 we conducted a review of published research and consulted stakeholders to identify factors that affect parent carers' health and wellbeing, and considered which of these could potentially be modified. We also sought to identify and appraise existing interventions for parent carers.
Some of the strongest predictors of mental health of mothers of disabled children identified in our needs assessment include participation in health-promoting behaviours, such as recreation, healthy diet and exercise, and time spent alone or on managing one's health [15]. However, parent carers face specific challenges to engaging with health behaviours. These include constraints on their time and energy, insufficient breaks from their caring role or lack of qualified alternative caregivers [7]. Previous studies have also found that parent carers' health was associated with their self-efficacy [1], feelings of guilt [5], locus of control and coping styles [16], self-esteem and self-mastery [17], and self-perceptions [18], all of which could potentially be targeted and modified by health-promoting interventions.
Many existing interventions for parent carers target external influences, for example by focusing on promoting parenting skills [19] or effectively navigating the healthcare services [20]. Other interventions target individual factors, but are limited in scope, for example by focusing on treating stress [19] or providing emotional support [21] rather than actively promoting health and wellbeing. A systematic review of psychological therapies for parents of children with chronic illness suggested promising results in terms of improved parent mental health [22]. Problem solving therapy was found to be effective for improving parent mental health. No benefits were found for cognitive behavioural therapy or family therapy on parent outcomes. However, the quality of the evidence was low and few relevant trials were found. A systematic review of mindfulness interventions for parents of children with autism indicated potentially positive effects on parents' stress and psychological wellbeing with studies reporting good attendance and retention in 8-week programmes [23]. We found no interventions targeting important factors identified by our Working Group in the existing literature; that is interventions targeting both physical health and mental wellbeing, focused on parent carers' outcomes, and involving a range of behaviours that can be tailored to parents' needs, preferences and opportunities.
Programme outcomes and performance objectives (step 2)
Step 2 involved specifying (i) who and/or what will change as a result of the intervention (programme outcomes), (ii) what participants will need to do to achieve these outcomes (performance objectives), and (iii) factors associated with performance of behaviours (determinants of change).
In terms of behavioural outcomes, our parent carers' working group recommended that the programme promotes engagement with a wide range of small, everyday behaviours from which parents could choose, depending on their specific circumstances and needs. They suggested that this would be more empowering, acceptable and feasible to parent carers than promoting specific behaviours (e.g. a healthy diet). A set of health-promoting behaviours linked with health and wellbeing was identified. These behaviours have been promoted as evidence-based 'Five Ways to Wellbeing' [24,25] and 'CLANGERS' [26]. They include: (1) Connecting with people, (2) continuing Learning, (3) being Active, (4) taking Notice (or being mindful), (5) Giving, (6) Eating well, (7) Relaxing, and (8) maintaining Sleep hygiene. These behavioural targets were discussed with parent carers and perceived to be potentially more difficult for parent carers. Hence, in order to make this generic public health advice specific to parent carers' circumstances, it was tailored by including parent carer-specific examples of behaviours, barriers to health behaviours, and problem solving.
In addition, we identified psychological outcomes, such as increasing a sense of empowerment and resilience, necessary to engage with changing health-related behaviours. Empowerment is a sense of agency or internal locus of control, whereas resilience is an ability to cope with adversities, problems and barriers. Both are likely to be particularly important to parent carers who often face, and need to cope with, many factors related to their care-taking role (as discussed in our Working Group).
Programme outcomes were broken down into smaller, observable actions (i.e. performance objectives). As we had a number of outcomes, and given that parents' baseline levels and approaches to achieving them will vary, we formulated generic actions that can be taken to achieve them (i.e. 'steps to making lifestyle changes') ( Table 2).
Subsequently, we selected determinants of change. As the evidence on determinants specific to parent carers is scarce, determinants were selected [27,28] based on general evidence of associations with behaviour change, the needs assessment and consultation with the working group. These included knowledge, attitudes, self-efficacy, social support, and skills (i.e. skills for behaviour change, such as goal setting or problem solving, and for performance of behaviours, such as relaxation techniques).

(Table 1. Steps and tasks undertaken in the intervention development; the table lists the steps in Intervention Mapping alongside the main tasks in the development of the HPC intervention.)
Theoretical methods and practical applications (step 3)
In Step 3, drawing on a taxonomy of behaviour change techniques (BCTs) [29] and evidence showing associations of BCTs with effectiveness of health interventions [29][30][31][32], we selected evidence-based methods that were relevant to the outcomes and objectives of the HPC intervention. Through consultations with the working group, we selected modes of delivery and practical strategies to deliver intervention content with BCTs (Table 2). Two modes of delivery were selected: a printed participant booklet (the Guide for Parent Carers) and group sessions. An intervention logic model was also developed (Fig. 1).
Programme components (step 4)
In Step 4, we designed and produced programme materials for participants and facilitators. The Guide for Parent Carers was intended to be used between the sessions and after the programme ended. It included the same topics as covered in the group sessions divided into three parts: (1) understanding health and wellbeing (i.e. factors affecting health and wellbeing, health-promoting behaviours, resilience and empowerment, self-assessment); (2) taking steps to better health and wellbeing (i.e. CLANGERS, goal-setting and self-monitoring worksheets); and (3) planning for the long-term (i.e. building resilience and managing stress, self-assessment and reflecting on progress, setting long-term maintenance goals). In addition, we created a website for parent carers with additional resources relevant to the HPCs programme. A Facilitator Manual included detailed session outlines, instructions and timings for the activities, and materials to be used in group activities, such as URLs for videos and worksheets. The intervention was designed to be delivered sequentially following the Facilitator Manual, but some degree of flexibility within the sessions was possible.
The Guide for Parent Carers and activities included in the Facilitator Manual were discussed, pre-tested and refined with the parent carers in the working group. In addition, key recommendations on planning and reporting health interventions, education and training were consulted; these included the NICE Guideline on Behaviour Change [33], the Template for Intervention Description and Replication (TIDieR) [34], and the checklist for group-based behaviour-change interventions [35].
Feasibility study (step 5)
We conducted a feasibility study to test (i) the feasibility of delivery of the HPC programme and (ii) its acceptability to participants and peer facilitators (Intervention Mapping Step 5). Specifically, the study aimed to assess: strategies for recruitment and selection of participants, delivery of the programme and facilitation of group sessions, intervention content, and participants' session attendance and feedback. The University of Exeter Medical School ethics committee approved the study (REC 15/11/084) and all participants documented their informed consent to participate.
Feasibility study methods
The study was advertised online on the research group's website and social media of relevant local organisations for parent carers. Participants were also recruited through personal networks of parent carers involved in the working group. The recruitment was conducted between December 2015 and January 2016, and the six group sessions were delivered between the end of January and beginning of March 2016. We sought to recruit a minimum of six participants to constitute 'a group'.
Potential participants expressed interest by contacting the research unit. A researcher explained the study and conducted a preliminary screening by phone. Participants who could not attend six sessions were offered a one-off introductory session. A researcher and the group facilitator then met each potential participant to provide a more detailed explanation of the study, answer any questions, and to screen for inclusion.
Inclusion criteria included self-identification as a primary carer of a child or young person with additional needs and/or disabilities under 25 years (consistent with current UK Department of Health and Department of Education Special Educational Needs & Disability (SEND) legislation and The Children's Act). Potential participants had to be willing and able to attend the sessions on pre-scheduled dates, be able to communicate in English, not have participated in the intervention development, and have no symptoms of severe depression or suicidal ideation identified using the Patient Health Questionnaire (PHQ-9) [36,37]. A risk protocol was in place if any concern arose at the screening or during sessions. Volunteers who met the inclusion criteria were invited to participate in the study.
Intervention
The HPC programme was delivered in a small group setting. We aimed to include between 6 and 15 participants in the group; the actual group included seven participants. The two female peer facilitators who delivered the programme were involved in the development of the HPC programme from inception and co-designed the Facilitator Manual. They were also experienced in delivering training to parent carers and facilitating parent carers' groups. Due to their involvement in the programme development and relevant experience, no further training was seen as necessary, but on-going support and supervision were provided. The group sessions took place in a university seminar room, with tables arranged in a horseshoe shape facing the facilitators, a screen to view online videos, and a whiteboard on which discussions were noted in the form of mind-maps, photographed, and sent to participants.
The facilitators delivered the sessions following the Facilitator Manual. There were 6 weekly 3-h group sessions, with a 1-week break in the middle due to school holidays. Each session was structured in a similar way: starting with an introduction and ice-breaker, review of the week, introducing each topic through group brainstorming, followed by one or two activities to illustrate the topic, individual action planning, recap of the session and conclusion. Before each session there was an additional half an hour for arrival, tea and coffee and informal conversations, and at the end of each session there was another half an hour for lunch and more informal interaction. Beverages and lunch were provided at sessions and participants were offered reimbursement of travel costs; they were not paid for participating in the programme or offered other inducements. Session topics and exemplary activities are shown in Table 3. The sessions were interactive, based around group discussions and sub-group activities. Although the sessions and main discussions were structured and outlined in the Facilitator Manual, a degree of flexibility for tailoring group discussions was possible.
Measures
The main outcomes were feasibility of delivery and acceptability of the intervention to participants and facilitators, with pre-defined criteria for judging success (as listed in Table 5). To assess feasibility we collected information on recruitment (number of interested and eligible participants, recruitment channels), attendance (including reasons for missing any sessions) and retention in the programme. Acceptability was assessed through participants' feedback using questionnaires at the end of each session. We used rating scales (scored 1-5, where 1 indicated least and 5 most satisfied) to assess satisfaction with delivery, content, relevance, perceived helpfulness and likely impact (i.e. whether participants intended to make any changes as a result of the session and, if yes, what these would be). We also collected free-text comments on favoured elements and/or suggestions for improvements. A week after the final session, the researchers conducted an audio-recorded focus group with the participants. Feedback from facilitators was collected through de-briefing meetings with researchers at the end of each session. Fidelity of session delivery was assessed through de-brief meetings and session audio recordings.
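As a rough illustration of how the end-of-session ratings could be summarised against the pre-defined acceptability criterion (at least 80% of responding participants scoring a session 4 or 5), the short sketch below uses invented ratings; it is not the analysis code used in the study.

# Sketch: summarising 1-5 session ratings against an 80% acceptability criterion.
# The example ratings are hypothetical; None marks a feedback form not returned.

def session_acceptable(ratings, threshold=4, required_proportion=0.80):
    responders = [r for r in ratings if r is not None]
    if not responders:
        return 0.0, False
    satisfied = sum(1 for r in responders if r >= threshold)
    proportion = satisfied / len(responders)
    return proportion, proportion >= required_proportion

example_ratings = [5, 4, 4, 5, 3, 4, None]
proportion, criterion_met = session_acceptable(example_ratings)
print(f"{proportion:.0%} satisfied; criterion met: {criterion_met}")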
Additionally, we collected quantitative data on intervention outcomes in order to test assessment methods. We assessed 'health utility' using the EuroQol 5 Dimensions questionnaire (EQ-5D) [38], depression symptoms with the PHQ-9 [36,37], and wellbeing with the Warwick-Edinburgh Well-Being Scale (WEMWBS) [39]. Measures were taken on three occasions: before the intervention, at the end of the programme and 2 months after the programme was completed.
The EQ-5D is recommended by NICE and is commonly used in health economic evaluations to measure health utility. The version used had five questions, each with three response options. Health utility scores are calculated from self-reported health states and weighted according to preferences for health states from a UK reference population [40]. The PHQ-9 questionnaire is recommended by NICE to assess depression in adults [39], and its use is highlighted in NHS clinical pathways [41,42]. It has nine items with four response options; individual responses are scored 0 to 3 and then summed to produce a score from 0 to 27. Scores of 20 and above are considered indicative of severe depressive symptoms [43]. The WEMWBS was developed and validated as a measure of mental wellbeing in general populations and to evaluate interventions that aim to improve mental wellbeing [39]. It has a 14-item scale with five response categories, summed to provide a single score ranging from 14 to 70. The items measure both emotional and functional aspects of mental wellbeing.
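To make the scoring rules above concrete, a minimal sketch is given below; it assumes that item responses have already been coded numerically (PHQ-9 items 0-3, WEMWBS items 1-5) and uses invented example responses rather than study data.

# Sketch of the questionnaire scoring described in the text.

def phq9_total(items):
    # Nine items scored 0-3; totals range 0-27, with >= 20 indicating severe symptoms.
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    return sum(items)

def wemwbs_total(items):
    # Fourteen items scored 1-5; totals range 14-70 (higher scores = better wellbeing).
    assert len(items) == 14 and all(1 <= i <= 5 for i in items)
    return sum(items)

phq = phq9_total([1, 2, 1, 1, 0, 2, 1, 1, 0])   # hypothetical responses
wem = wemwbs_total([3] * 14)
print(phq, "severe" if phq >= 20 else "below the severe threshold")
print(wem)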
Analysis
Quantitative data from feedback forms and questionnaires were analysed using descriptive statistics. Qualitative data including comments from feedback forms, de-brief meetings and the focus group were analysed thematically by identifying opinions about the programme and the sessions, perceived impact, and suggestions for improvements.
Feasibility
We received 12 expressions of interest in participation. Telephone screening identified that one person had been part of the working group, two could not commit to attending the six sessions, one was not able to participate and one could not be contacted.
Participant characteristics
Seven parent carers, all white British women, met the inclusion criteria and signed up to participate in the programme (Table 4). The participants and their children were broadly of a similar age; the children's conditions were a mix of physical and intellectual disability. They lived in diverse circumstances, some in a city and others in villages. The Indices of Multiple Deprivation [44] in the areas where the participants lived were mixed; four lived in areas that are more deprived relative to England as a whole.
Engagement
Three participants each missed one of the sessions due to prior commitments, which were known in advance; each session was attended by at least five participants. All seven participants remained involved in the study throughout the programme (i.e. there was no attrition); six parents attended a focus group and completed follow-up questionnaires, and six participants and the two facilitators attended an informal, social catch-up meeting approximately 3 months after the end of the intervention, which several participants had requested when the programme and study finished.
Fidelity of delivery
Group sessions were delivered with fidelity in accordance with the Facilitator Manual; this was assessed by the researchers (AB and CM) through discussion at the de-brief meetings with facilitators (comparing delivery with session plans). A few modifications were made to the Facilitator Manual based on the facilitators' feedback provided in de-brief meetings (e.g. adapting some group activities), but these were made prior to the sessions and subsequently delivered as planned.
Overall, in comparison to our pre-defined criteria for judging the study as feasible, we had successful results in terms of attendance and retention, and the only aspect that did not meet our criteria was recruitment (Table 5).
Acceptability
All participants completed feedback forms at the end of each session they attended. Overall, the results met our pre-set criteria for judging the programme as acceptable (Table 5). At least 80% of responding participants were 'satisfied' or 'very satisfied' (i.e. scoring 4 or 5) with each session (Table 6). At the end of the programme, five out of six responding participants were satisfied with the programme and would recommend it to other parent carers.
Responses to open questions in the feedback forms indicated that the participants valued the programme for the group context (i.e. meeting and identifying with other parent carers, thus providing opportunities for sharing experiences and peer support in a positive group setting) and learning from the programme and others (i.e. becoming aware of doing CLANGERS, giving oneself permission to take care of, and prioritise, one's own health and wellbeing) (Table 7).
The focus group was attended by six out of seven participants. The themes identified, with exemplary quotes, are presented in Table 7. Overall, participants reported very positive experiences of the programme and the group sessions; they found them informative and enjoyable, and as having a generally positive impact on their health and wellbeing. The main benefits reported were developing confidence, realising the importance of taking care of themselves and their own health and wellbeing, becoming aware of CLANGERS, and peer support. The participants especially liked 'crafty' and practical activities, such as creating a box or going for a walk. However, some participants reported that they found setting and failing to meet weekly goals disheartening, and suggested more focus on constructive problem solving and learning to set achievable goals earlier in the programme.

(Table 5 excerpt — Attendance: criterion, a majority of participants attending at least 5 out of 6 sessions; result, all participants attended at least 5 out of 6 sessions. Retention: criterion, retaining at least 70% of participants at the 2-month follow-up; result, no participants dropped out of the programme; all remained until Session 6 and returned the post-intervention and follow-up questionnaires (98%). Table 6 footnote — mean scores on a 1 to 5 scale, where 4 indicated 'satisfied' and 5 'very satisfied'.)

Table 7 (excerpt) — favoured elements

Practical group activities
• Participants liked practical, 'crafty' activities, such as making a paper box, the compliment flower and colouring, which were perceived positively and as small achievements in the sessions (FF & FG)
• They also liked a group walk (FF & FG)
Ambivalent elements and main suggestions for improvements
Goal setting
• Some participants felt that they set unrealistic goals; not achieving their goals had a negative effect; e.g.: 'I found with my goals, looking back, they were probably unrealistic. So although I was doing the CLANGERS every week, I wasn't achieving my goal. So then I felt guilty and disheartened.' (FG)
• Some felt that setting and reviewing goals was helpful as it raised self-awareness and helped identify barriers; e.g.: 'Just becoming more aware. It brings up things like "why haven't I done that?" or "have I been doing that?" which has been really good. And for me just having that awareness is a good starting point because then long-term it will benefit me.' (FG)
• Others felt that there should be more focus on thinking about long-term goals; e.g.: 'I think it would have been good to think "these are all the things I want to do long-term", so setting long-term goals, but in that particular week all goes wrong and you don't even think about or worry about your goal setting, and you come in and you think "oh no, I've not done it".' (FG)
• Overall, participants agreed that there should be more focus at the beginning of the programme on discussing goal setting (e.g. why and how to set goals).
Contact time and time management in the sessions
• Participants reported that they would welcome more sessions (or on-going groups) that would provide more time to discuss issues related to CLANGERS and other issues that they wanted to share in the group and for peer support; 'It could have done with another week or two. Especially the second week when we did the Connect, that was quite a big issue for some and we could have done a lot more time on it, cause we had to park stuff but we never actually got back to the parked things cause we didn't have the time to.' (FG) • They also would like to have more time within the sessions for ice-breaking, goal setting, unstructured group discussions and filling in feedback forms.
Managing group interaction
• Although participants generally found the group positive and enjoyable, they also felt that sometimes the group dynamics were challenging as everyone wanted to talk about their experiences and issues in a limited time; 'Also, if [the facilitator] would start this side of the room doing feedback and said 'let's just have one', we'd start with one and by the time we got to the other end then we suddenly went onto everything.' (FG) 'There are going to be times where everybody is going to feel they have something else to say and they really want to expand on what they've said, and having that opportunity but it's just when everyone stops? And with some topics perhaps we could've done with a little bit more time to allow to just get that off, when we really needed to off load something.' (FG) • Participants suggested that it might be helpful to better manage how much time people take talking about their own experiences and views, mixing up where people sit and who they work with, and revisiting group ground rules more regularly.
Feedback forms and questionnaires
• Participants felt there should be more time to fill them in.
• They preferred feedback forms specific to each of the CLANGERS.
• They would like to be able to report any other circumstances (e.g. recent health issues) affecting their health and wellbeing, and longer-term follow up.
In relation to programme delivery, the participants valued having peer facilitators who were understanding and supportive. Six weekly sessions was an acceptable amount of contact time, but the participants would also welcome more sessions or an on-going group. The participants thought that the programme could also be delivered to existing parent carer groups, thus allowing for longer-term contact and support.
Indicators of intervention impact
At least half of the participants found each of the sessions helpful in improving their health and wellbeing, and a majority were willing to make some changes as a result of attending the sessions and the programme (Table 6). Only in Session 5, which focused on Relax and Sleep, did half of the participants report that they would not make any changes as a result of the session; their comments indicated that this was due to issues affecting their sleep that they needed to address first (e.g. current health problems, anxiety, child's sleep problems). At the end of the programme five out of six respondents found the programme helpful in improving their health and wellbeing. At 2-month follow-up participants continued to perceive a positive impact of the programme (mean satisfaction score 4.2 on a scale of 1 to 5). Four out of five respondents assessed the programme as very helpful (scored 4 or 5). These four participants also reported making lifestyle changes (e.g. walking and swimming, taking more notice, having a 'CLANGERS day'). One respondent reported not finding the programme helpful in improving her health and wellbeing (scored 2), not making any changes as a result of the programme (scored 2), and commented that she found the programme negative and that she had already followed some guidelines. Finally, four out of five respondents reported staying in touch with other group members through other support group meetings or social media. All participants completed the three health and wellbeing questionnaires at each of the three time-points. There was wide variability in individual scores and a trend for change in scores on all questionnaires. The baseline EQ-5D health utility scores indicated poor health, with a mean baseline score of 0.68 (s.d. 0.067) (Fig. 2a). The baseline PHQ-9 depression scores showed some indications of moderate depressive symptoms (Fig. 2b), with a mean baseline score of 9.3 (s.d. 4.2). Similarly, the WEMWBS scores suggested low wellbeing, with a mean baseline score of 39.0 (s.d. 6.8) (Fig. 2c).
Incorporating feedback and refining the intervention
The feedback and suggestions for programme improvements from participants and facilitators, and the lessons learned from this feasibility study were summarised and, where possible, incorporated in the revised intervention design. In particular, the Facilitator Manual was revised to include suggestions for delivering the programme and facilitating the groups, and some group activities were added or removed. The main participants' suggestions for intervention improvements are listed in Table 7. Issues relevant to study design and intervention implementation and adoption are being incorporated in the currently on-going Step 6 'Planning intervention adoption, implementation and evaluation'.
Discussion
The HPC programme was developed using a systematic, user-led approach to promote the health and wellbeing of parent carers. We followed the Intervention Mapping process for developing a health promotion intervention and co-created it with parent carers as the intended end users. The HPC programme was found to be feasible to deliver, acceptable to parent carers and peer facilitators, and has potential to improve health and wellbeing of parent carers. Thus, we consider it a pragmatic first proof of principle of the programme's feasibility and acceptability. One crucial aspect of the HPC programme, acknowledged by parent carers involved in the programme development and in the feasibility study, is giving oneself 'permission' to focus on one's own health and wellbeing.
The parent carers who participated in the study had indications of poor health and wellbeing with low health utility scores, similar to samples of people with chronic conditions [45]; baseline PHQ-9 scores suggesting a moderate risk of depression; and, for six of the seven participants, wellbeing scores considerably lower than population norms for the WEMWBS in the Health Survey for England data from 2011 [46]. These results confirm the findings of the needs assessment and indicate that it is possible to recruit the target population.
Although we had concerns about being able to reach participants from diverse socio-economic circumstances, our participants came from a range of backgrounds, including some living in relatively more deprived areas. However, one aspect of the study that did not meet our criteria for 'success' was the rate of recruitment. We began recruiting in December, as soon as ethics approval was confirmed. Although a difficult time to get parents' attention, we hoped that recruiting over Christmas and beginning the sessions in the New Year might be advantageous, as it is a time when many people formulate intentions to improve their health and/or wellbeing. However, recruitment was lower than expected. This might have been, in part, due to parent carers being busy with family responsibilities over the Christmas holidays. We also learned that some of the parent carer organisations, through which we hoped to advertise, had ceased sending out emails to their parent carer mailing lists. Thus, future studies should work more closely with stakeholder organisations that can help with reaching and recruiting parent carers, and/or recruit existing parent carer groups.
The time needed to attend 6 weekly, half-day sessions was not a barrier to participating in the programme. Indeed, participants stated in the focus group that they would welcome a longer programme. Although we offered a one-off, introductory group session, interest in this session was so low we decided not to proceed with it.
Although the programme focuses on promoting health and wellbeing on an individual level (i.e. individual-level psychological and behaviour change), we acknowledge the importance of other factors on inter-personal, community and societal levels that affect parent carers' health and wellbeing. For example, societal factors, such as access to services or negative public attitudes towards disability can have a huge impact on parent carers' wellbeing. Whilst the HPC programme may help with handling the consequences of these factors through increased empowerment and resilience, the programme does not aim to provide guidance on the practical strategies for obtaining rights or navigating the healthcare system. The programme included signposting to sources of advice in the UK, such as Cerebra's legal toolkit and advice [47] and Council for Disabled Children's 'Expert Parent Programme' [20].
This study has significant limitations. The sample was small, self-selecting and homogeneous in terms of gender and ethnicity. Thus, the sample is not representative of the population. The lack of ethnic diversity in South West England, where this study was conducted, limits the generalisation of our findings to different cultures and contexts. Compared to other areas of the United Kingdom, the South West has the highest proportion of people declaring themselves 'white British'. Ethnic and cultural factors may well influence the uptake and implementation of the intervention and merit further research.
The feasibility study (part of intervention development) did not include a comparison group, so no clear inferences can be made regarding effectiveness or generalisability. Offering group sessions only during school hours might have precluded recruitment of parent carers unavailable at these times (e.g. working parents). The group was delivered by two facilitators experienced in delivering training and support groups to parent carers, and who had been involved in the programme development; thus, they were skilled and knowledgeable about the ethos and content of the programme. Finally, we did not specifically assess whether participants engaged with CLANGERS or changed their behaviours as a result of the sessions/programme (although we asked them for intentions and examples in the session feedback forms and in the focus group). Future research should address these issues, for example, by adding a comparison group, offering group sessions on different days, times and places, assessing the fidelity of delivery and participants' perceptions when the programme is delivered by different facilitators, and assessing changes in behaviours and other intermediary factors hypothesised to affect health and wellbeing.
The HPC programme was developed systematically using an Intervention Mapping approach [14]. We found this methodology challenging as it requires considerable resources to complete tasks and relies on an existing evidence base specific to the population and context. As we had limited time and resources, found little high-quality research focusing on parent carers, and wanted to include psychological as well as behavioural outcomes, we had to adapt the methods. For example, we were unable to conduct a full-scale systematic review of health promotion interventions for parent carers (although we identified some helpful reviews) or to explore systematically (e.g. through a qualitative study) parent carers' views on health promotion. However, as we worked closely with parent carers and stakeholders throughout the study, we believe our methodology was robust.
Conclusions
The Healthy Parent Carers programme was co-created and tested with parent carers and appears to be a promising health promotion intervention for parent carers. This study has led to refinement of the intervention and the next stage of testing is being planned. The programme purposefully promotes relatively simple messages, and small, achievable steps, which have been tailored to the context of parent carers' lives. Actively promoting health and wellbeing is critical if we are to ensure a better quality of life for parent carers and their children and families.
Availability of data and materials
The datasets analysed during the current study are available from the corresponding author on reasonable request.
Authors' contributions AB and CM developed the intervention and programme materials, designed the study and wrote the protocol, analysed and interpreted the data. BM co-designed the facilitator manual and the group activities. BM and MF contributed to the intervention development, delivered the group sessions, and made suggestions for revising the facilitator manual. AB led drafting of the manuscript. CM, GB and SL contributed to the drafting of the manuscript. All authors read and approved the final version of the manuscript.
Ethics approval and consent to participate Ethical approval was granted by the University of Exeter Medical School ethics committee (REC 15/11/084). All participants in the study gave informed consent to participate.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2018-02-23T02:13:59.388Z | 2018-02-20T00:00:00.000 | {
"year": 2018,
"sha1": "9fd81c0a2eaf6a9e52c72b4f54c96cd4632c2fca",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-018-5168-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fd81c0a2eaf6a9e52c72b4f54c96cd4632c2fca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208145233 | pes2o/s2orc | v3-fos-license | Radiocarbon Production Events and their Potential Relationship with the Schwabe Cycle
Extreme cosmic radiation events occurred in the years 774/5 and 993/4 CE, as revealed by anomalies in the concentration of radiocarbon in known-age tree-rings. Most hypotheses point towards intense solar storms as the cause for these events, although little direct experimental support for this claim has thus far come to light. In this study, we perform very high-precision accelerator mass spectrometry (AMS) measurements on dendrochronological tree-rings spanning the years of the events of interest, as well as the Carrington Event of 1859 CE, which is recognized as an extreme solar storm even though it did not generate an anomalous radiocarbon signature. Our data, comprising 169 new and previously published measurements, appear to delineate the modulation of radiocarbon production due to the Schwabe (11-year) solar cycle. Moreover, they suggest that all three events occurred around the maximum of the solar cycle, adding experimental support for a common solar origin.
the initial soft x-ray/extreme ultra-violet flux on the ionosphere remains one of the largest ever reported for the mid-latitudes; and intensified aurorae, instigated by the geomagnetic disturbance, were observed as far south as the tropics 20 . Further consequences, such as malfunctions to telegraph systems, were also recorded world-wide 21 . Despite these dramatic impacts, the Carrington Event left no detectable imprint on the atmospheric radiocarbon record 22 . However, continuous and reliable observations of the solar cycle were already being made at the time of the Carrington Event. This was achieved by recording the number of sunspots (known as the International Sunspot Number, ISN). Hence, it was quickly established that the storm took place six months prior to the maximum activity of the solar cycle.
Historical solar activity may be elucidated by carrying out very high precision AMS measurements on series of annual tree-rings spanning the events in question. In our study, Event-775 and Event-994 are analyzed at levels of precision that allow us to claim evidence of the modulation of atmospheric radiocarbon due to the 11-year Schwabe cycle 11,23,24 . As a result, we are able to assign a possible timing for these events in relation to the solar cycle, an outcome that has several important implications. Firstly, if each event occurred around the maximum of the solar cycle, like the Carrington Event, this would provide additional support for a common solar origin. To elaborate, solar flares and coronal mass ejections (CMEs) are in general more frequent and more intense during times of highest solar activity. For example, the recorded ISN and the number of higher energy flares (M-class and X-class flares) are very well correlated (r = 0.95, r 2 = 0.90); although, significant flares can occur throughout the whole period 25 . Secondly, establishing whether the Carrington Event, Event-775 and Event-994 all occurred at the same point on the Schwabe cycle may suggest further shared characteristics, such as whether the mechanisms that drove Event-775 and Event-994 can be regarded as extreme versions of those that initiated Carrington.
Results
In this study, we make use of 169 radiocarbon measurements; of these, 76 are new and 93 come from previously published datasets. The suite of new results, averaged per year, is given in Table 1. The Supplementary Information (SI) contains the full set of newly obtained data (Table S1), the previously published datasets (Table S2) and details about how duplicate measurements were dealt with, including the outputs of chi-squared statistical comparisons (Table S3). The new results were obtained at an average precision of 1.71‰ (1σ) per year, with reported uncertainties encompassing counting statistics, normalization and sample preparation, calculated in accordance with standard data reduction procedures 26 .

(Table 1 caption: Results of the radiocarbon analysis over the Carrington Event, Event-775 and Event-994. *Years where pretreatments were repeated and duplicate measurements obtained and averaged; see S1, S2 and S3, SI.)
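By way of illustration only, replicate measurements on the same ring are commonly combined as an inverse-variance weighted mean, with a chi-squared statistic used to check their mutual consistency; this is an assumed, generic procedure rather than the exact averaging scheme detailed in the SI, and the values below are hypothetical.

import numpy as np

def weighted_mean(values, sigmas):
    # Inverse-variance weighted mean of replicate measurements with 1-sigma errors.
    values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
    weights = 1.0 / sigmas**2
    mean = np.sum(weights * values) / np.sum(weights)
    sigma_mean = np.sqrt(1.0 / np.sum(weights))
    chi2 = np.sum(((values - mean) / sigmas) ** 2)  # consistency of the replicates
    return mean, sigma_mean, chi2

mean, err, chi2 = weighted_mean([-18.2, -15.9], [1.7, 1.8])  # hypothetical duplicates (permil)
print(f"weighted mean = {mean:.2f} +/- {err:.2f} permil, chi-squared = {chi2:.2f}")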
Investigation of radiocarbon modulation due to the solar cycle has previously been conducted in different ways. Burchuladze et al. 27 directly compared the ISN with radiocarbon measurements on vintage wine samples 27 ; Stuiver & Braziunas 10 applied a cubic spline interpolation to annual tree-ring data and evaluated the residuals between the subsequent fit and a moving average 10 ; and Güttler et al. 11 used a band-pass filter to analyze the signal in order to extract the hidden periodicities from tree-ring data 11 . Of these three approaches, the first one can only be applied over the time since the ISN has been recorded; therefore, we make use of it only in the case of the Carrington Event. Of the other two, we concentrate on the band-pass filter, but we have also completed some analyses using the residuals from the spline interpolation, which can be found in S7 (SI). In relation to the filter, we used a Butterworth band-pass filter 28 which was designed to extract periodicities between 8 and 20 years (see Güttler et al. 11 ). It is important to note that applying a digital filter to a dataset will undesirably but unavoidably introduce an offset, as the output of the filter will be altered from the input signal 29 . This happens because using a filter with fixed cut-off frequencies and order will shift sinusoids of different frequencies by different amounts. In our case, the filter will introduce a variable offset of 3 years or less.
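As an illustration of this filtering step, the sketch below applies a Butterworth band-pass filter with an 8-20 year pass band to a synthetic annual Δ14C series; the filter order, the zero-phase implementation and the input series are assumptions made for demonstration and do not reproduce the exact filter design used here or by Güttler et al.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0                    # one sample per year
low, high = 1 / 20, 1 / 8   # pass band corresponding to periods of 8-20 years
b, a = butter(N=3, Wn=[low / (fs / 2), high / (fs / 2)], btype="bandpass")

years = np.arange(730, 776)
rng = np.random.default_rng(0)
# Synthetic stand-in: ~11-year cycle with ~5 permil peak-to-trough amplitude plus noise
d14c = 2.5 * np.sin(2 * np.pi * (years - 730) / 11) + rng.normal(0, 1.7, years.size)

# filtfilt applies the filter forward and backward (zero phase); a single-pass
# filter, as implied in the text, can shift features by up to a few years.
filtered = filtfilt(b, a, d14c)
print(np.round(filtered[:5], 2))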
Comparing Δ14C with ISN over the Carrington Event.
For this study, we made use of 21 radiocarbon measurements over 13 single-year tree-rings. Multiple measurements were performed on 7 of the samples, and the results were averaged (see Table S1a, SI). Due to the high precision of the radiocarbon measurements obtained, a broad modulation of the Δ14C data can be observed, with a peak-to-trough amplitude of about 5‰. This value is consistent with the effect of solar modulation pointed out in previous analyses of the Schwabe cycle 8,11,[30][31][32] . Moreover, when presented together with the ISN data (see Fig. 1), it is clear that the 14C data follow the same pattern (the ISN axis is inverted to account for the inverse relationship between the two parameters). As also expected, the two signals were not perfectly aligned, as the sunspot record reflects immediate solar behavior, while the radiocarbon record is delayed due to the so-called residence time of the isotope in the atmosphere. By measuring the cross-correlation between the normalized Δ14C and the normalized ISN data, one can estimate the delay time between the two signals. Here this residence time is estimated to be 3 ± 1 years, which is in good agreement with previous estimates 33,34 . As can be seen, our Δ14C data approximately follow the sinusoidal profile of the solar cycle, and hence mimic the variability of incoming solar radiation.
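The lag estimate can be sketched as follows: both series are normalized, the sunspot number is inverted to account for the anti-correlation, and the lag that maximizes the cross-correlation is taken as the delay. The sinusoidal stand-in series below are purely illustrative and are not the measured data.

import numpy as np

def zscore(x):
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

years = np.arange(1835, 1866)
isn = np.sin(2 * np.pi * (years - 1835) / 11)         # stand-in solar cycle
d14c = -np.sin(2 * np.pi * (years - 1835 - 3) / 11)   # inverted and delayed by ~3 yr

a = zscore(d14c)
b = zscore(-isn)                                      # invert ISN before comparison
corr = np.correlate(a, b, mode="full")
lags = np.arange(-len(a) + 1, len(a))
print("estimated delay:", lags[np.argmax(corr)], "years")  # ~3 for this example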
Event-775.
For this study, we made use of a total of 58 radiocarbon measurements over 32 single-year tree-rings. Of these, 15 are new measurements from sample B1, 27 are from sample B2, and 16 were previously produced by Miyake et al. 2 . Multiple measurements were performed on 11 of the 32 single-year tree-rings, and the results were averaged for each year (see Tables S1b and S1c, SI).
The radiocarbon results are in close agreement with previous measurements over the event 1,3 . The outcome of our analysis of the data is shown in Fig. 2. From the averaged Δ 14 C data leading up to the spike in the year 775 CE, a sinusoidal pattern is clearly evident. This is especially true for the decades immediately preceding the event, where we achieve the highest precisions. We observe approximately four cycles within this interval of 45 years, which have a peak-to-trough amplitude that is similar to the Carrington Event (~5‰), and a period of approximately 11 years in length. This pattern was accentuated by the residuals from the cubic interpolation (see S7, SI). Thus, we note that Event-775, like the Carrington Event, appears to occur when solar activity is at its maximum (Fig. 2).
Event-994.
For this study, we made use of a total of 90 radiocarbon measurements over 48 single-year tree-rings, although on this occasion, due to availability of tree rings, the radiation event is located towards the middle of the series. Of these, 13 are new measurements from sample C, including 2 replicated measurements (see Table S1d, SI). This data was complemented by 13 results from Damon et al. 35 and by other previously published datasets (Table S2, SI).

(Figure 1 caption fragment, right panel: normalized monthly ISN (red) and smoothed moving average (blue); the delay between the two signals (~3 years) is an approximation of the radiocarbon atmospheric residence time.)
Once more, the radiocarbon results of the newly measured data are in good agreement with previously published results 2 . Individual tree-rings prior to the Event-994 spike were sampled every other year, apart from the last data point (989-991 CE), which came from three tree-rings too narrow to separate. Our band-pass filter analysis of the data is shown in Fig. 3, while the spline residuals are presented in S7 (SI). From the averaged Δ14C data leading up to and after the spike, a sinusoidal pattern is again evident. They indicate that approximately 4.5 cycles occurred within this interval of 50 years. It should be noted that the band-pass filter was applied separately before and after the spike, in order to limit the disruption caused by the sudden increase in Δ14C. However, for at least a decade afterwards the output is still clearly influenced by the radiocarbon spike (Fig. 3).

It is also important to point out that this peak occurs in 993 CE, one year before the recognized date for Event-994, but this is consistent with several other studies (see Büntgen et al. 38 ). In this case, we also tentatively interpret the patterns in our data as evidence of the solar modulation of atmospheric radiocarbon production. Therefore, we note that Event-994 also appears to occur when solar activity is at its maximum (Fig. 3) 38 .

(Figure 2 caption: Results of the analysis over Event-775. In the top row, averaged data points with 1-sigma error bars (black) and timing of the radiation event (yellow). In the bottom row, the result of a Butterworth band-pass filter applied to the Δ14C data (green). The band-pass filter is only applied to the data prior to the spike, in order not to affect the periodicity.)
Monte Carlo resampling.
In order to establish whether the results of our numerical analyses were influenced by a possible low signal-to-noise ratio in our datasets, we performed Monte Carlo simulations in which we randomly resampled our datasets 1,000 times, assuming the data were normally distributed. Then, we applied the same Butterworth band-pass filter over the resampled datasets. In Fig. 4, we report the outcome of these analyses for both events in terms of Δ14C versus sample growth year (CE). For Event-775 and Event-994, the results are in accordance with the outputs shown in Figs. 2 and 3, albeit more distinct and robust. Hence, we conclude that the Carrington Event, Event-775 and Event-994 all appear to have occurred near the point of maximum activity of the solar cycle.
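The resampling scheme can be sketched as follows, with placeholder data, uncertainties and filter settings standing in for the measured series and the actual analysis configuration.

import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)
years = np.arange(730, 776)
measured = 2.5 * np.sin(2 * np.pi * (years - 730) / 11)  # stand-in Delta-14C series
sigma = np.full(years.size, 1.7)                         # per-year 1-sigma (permil)

b, a = butter(3, [1 / 20 / 0.5, 1 / 8 / 0.5], btype="bandpass")  # 8-20 yr pass band

# Redraw each annual value from its normal distribution, refilter, repeat 1,000 times
draws = np.array([filtfilt(b, a, rng.normal(measured, sigma)) for _ in range(1000)])
median = np.median(draws, axis=0)
lo, hi = np.percentile(draws, [16, 84], axis=0)          # ~1-sigma envelope
print(median[:3], lo[:3], hi[:3])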
Discussion
Our results present a coherent picture across all three solar events. We observed a moderate variation in radiocarbon over the Carrington Event of 1859 CE; our new and previously published data over Event-775 followed a sinusoidal pattern with peak-to-trough amplitude of about 5‰, and a periodicity of about 12 years; likewise, our Event-994 data varied with a 5‰ amplitude and period of around 11 years. Although it cannot be stated categorically, we believe that the most parsimonious explanation for the undulating pattern in all our datasets is the solar modulation of radiocarbon production. No other repeating process of this same magnitude and duration is easily conceivable. This finding is best exemplified by the Monte Carlo resampling, as it accounts for the inherent variability in estimates of radiocarbon concentration.
In all three cases, the radiation events occur in close proximity to the point of maximum activity of the 11-year solar cycle. Although it did not leave any radiocarbon signature, the Carrington Event of 1859 CE was already known to have occurred around the point of maximum activity of the solar cycle, due to contemporary accounts of the sunspot record. Direct comparison between our new Δ 14 C data over the Carrington Event and such sunspot counts also allowed us to make an estimation of 3 ± 1 years for the residence time of radiocarbon in the atmosphere at this time. In relation to Event-775 and Event-994, our data also show that the radiation events occurred when the sun was at its most active, and radiocarbon production exhibited a local minimum.
In summary, our finding strengthens the likelihood of a solar origin for Event-775 and Event-994, and provides valuable experimental evidence of a link between them and the Carrington Event.
Methods
Four different dendrochronologically dated wood samples were obtained and pretreated, each containing distinct annual growth rings spanning the years of the three different events. Sample A, covering the Carrington Event, was a piece of oak from southern England (see Fig. 5). Samples B1 and C, spanning Event-775 and Event-994, respectively, were two pieces of juniper, which grew in the Sierra Nevada Mountains, California, at an altitude of about 3000 m. These samples were obtained from the Oxford Dendrochronology Laboratory, England. A further oak sample which also traversed Event-775, B2, was obtained from the Cultural Heritage Agency of The Netherlands. The rings corresponding to the years of interest in sample B1 were quite thin, while the ones from Sample C were somewhat irregular in shape, which made separation challenging. Nonetheless, the possibility of material from adjacent rings being mixed together was considered minimal.
The photosynthetic uptake of 14CO2 is indistinguishable from that of its stable isotopic analogues, save a degree of readily correctable mass-dependent fractionation. Much of the carbon absorbed is immediately locked into the cellulosic structure of the growth-rings, and chemical exchange between rings is negligible 39,40 . Hence, the optimal extract for reconstructing past Δ14C levels is the fraction known as alpha-cellulose. The method used for such extractions at Groningen is described in detail by Dee et al. 41 , so only a brief summary is presented here. The wood samples first undergo a physical preparation procedure, including elimination of extraneous soil and particulates from the bulk material, cutting each growth ring from the main sample, and then slicing or crushing each into small fragments. The sample is then chemically pretreated with an intense Acid-Base-Acid procedure, namely HCl (5.47% w/vol (1.5 M), 80 °C, 20 min); NaOH (17.5% w/vol, 60 min, RT) with ultra-sonication under an N2 atmosphere; HCl (5.47% w/vol (1.5 M), 80 °C, 20 min), followed by a strong acidified oxidant (NaClO2, 1.5% w/vol in HCl (0.06 M), 80 °C, 20 hrs), in order to extract the alpha-cellulose fraction. Each step is separated by a thorough rinse to neutrality with deionized and decarbonized water. Next, the alpha-cellulose fraction is combusted, and the CO2 liberated is cryogenically trapped, reduced to graphite and pressed into Al cathodes 42,43 . The radiocarbon content of the graphite extracted from each sample is then determined by AMS (200 kV MICADAS, Ionplus). In order to achieve very high precision measurements, the samples are measured for longer periods than usual.

(Caption for Figs. 2 and 3: The yellow vertical band represents the year in which the event occurred, which was at least one year before the peak in Δ14C for Event-775 and Event-994. For both events, the intersection between the yellow vertical band and the data interpolation corresponds to maximum activity of the sun within the Schwabe cycle, or the lowest radiocarbon production rate.)
Data availability
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).

Figure 5. Left, Sample A, the oak piece analyzed for the Carrington Event. As is evident in the photo, the wood exhibits marked and distinct annual growth rings. Center, Samples (B1) (above) and (C) (below), juniper wood analyzed for the study of Event-775 and Event-994. The growth rings are once more evident but finer and at times more warped than in Sample (A). Right, Sample (B2), oak wood analyzed for Event-775. Growth rings are marked and distinct over most of the sample. | 2019-11-19T15:36:24.955Z | 2019-10-31T00:00:00.000 | {
"year": 2019,
"sha1": "332d445a82c1a76b2e13f567a59a8ebf8647eab5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-53296-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "332d445a82c1a76b2e13f567a59a8ebf8647eab5",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
44739524 | pes2o/s2orc | v3-fos-license | TC2 C776G polymorphism studies in patients with oral cancer in the Polish population
The first signs of oral cancer may resemble developing infections in the mucous membranes, with throat cancer symptoms being similar to those of upper respiratory tract infections. This greatly hinders rapid diagnosis and treatment. Better knowledge of the changes occurring in the metabolism of folic acid can help in understanding the carcinogenesis affecting DNA methylation and genome stability. Polymorphisms in genes encoding enzymes involved in this pathway may influence enzyme activity and thereby interfere with the concentrations of homocysteine and S-adenosylmethionine, which are important for DNA synthesis and cellular methylation reactions. The aim of the study was to determine the risk of oral cancer associated with the TC2 C776G polymorphism, which was genotyped in 119 patients. Genotypes were determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). The genotype distributions were found to conform to Hardy-Weinberg (HW) equilibrium (p > 0.05). In our population, G/G homozygosity of the TC2 C776G polymorphism increases the risk of oral cancer; OR (odds ratio): 4.3875; 95% CI (confidence interval): 2.0518-9.319; p = 0.001. The C/G genotype of the TC2 C776G polymorphism also increases the risk of developing this cancer; OR: 2.4146; 95% CI: 1.2803-4.5541; p = 0.01.
Introduction
Malignant tumors of the oral cavity constitute a diverse group of diseases which vary with regard to their location, structure and clinical course. The most common form of oral cancer is squamous cell carcinoma (OSCC), which accounts for more than 90% of all cases and develops predominantly within the mobile (front) part of the tongue and the floor of the mouth. Other kinds of carcinomas in this area are very rare. It is noted that adenocarcinomas of the oral cavity usually develop in the minor salivary glands distributed throughout the oral cavity. The Polish National Cancer Registry (PNCR) reported that oral cancers are quite rare, with the total number of cases diagnosed each year in Poland being slightly higher than 1000. Data from 2010 show that annual survival of lip cancer is 91.6% in men and 91.3% in women, and tongue cancer survival is 54.9% in men and 74.9% in women. Men are 2 to 4 times more likely to be diagnosed with oral cancer than women. The most significant causes of cancer of the oral cavity are harmful carcinogens contained in tobacco smoke, although other factors, such as consumption of strong alcohol and poor oral hygiene, also have an impact.
In recent years there has been a proliferation in the number of young patients (in their thirties) who have developed oral cancer, including women, who were neither smokers nor habitual drinkers of alcohol. In most European countries the number of men diagnosed with oral cancer is still greater than the number of women, in contrast to Asian countries, where the incidence of such tumors is the same in both groups [1,2]. According to medical literature and statistics, India is the country where most oral cancers occur, which is likely to be connected with overuse of tobacco products, especially chewing black tobacco. In the epidemiology of oral cancer, besides such risk factors as smoking cigarettes and alcohol abuse, inadequate oral hygiene, human papillomavirus (HPV) infection, and riboflavin and iron deficiency also have a strong influence. The occurrence of oral cancer is five times more likely in smokers than in non-smokers. In a study conducted in patients who abused alcohol, we recorded a convergence between the occurrence of cancer and the impact of this risk factor, consistent with the fact that alcoholic products contain carcinogens [5,6,7]. In addition, mechanical irritation of the oral mucosa by poorly fitting or damaged dentures also increases the risk of cancer of the oral cavity. In the last few years there has been a surge of interest in the relationship between genetic changes in enzymes responsible for folate transformation and metabolism. It has been confirmed that genetic polymorphisms of proteins involved in one-carbon metabolism, especially transcobalamin II (TC2), are implicated in some oral cavity disorders. TC2 is a β2-globulin which belongs to the group of peptides transporting vitamin B12 in the blood and allowing it to enter human cells. TC2 is a key factor essential for the proper activity of the methionine synthase enzyme, the function of which is the transformation of homocysteine into methionine. The most common polymorphism of the TC2 gene is the substitution of cytosine by guanine at position 776, which was first described by Namour et al. in 1998. The substitution causes a change of proline to arginine at position 259 of the peptide sequence of TC2 [8]. Folic acid is an essential nutrient that plays an important role in DNA synthesis and methylation [9,10,11]. Folate deficiency can reduce global DNA methylation, which is associated with genetic instability and the formation of tumors [12]. Low folate intake has been positively associated with the occurrence of colon [12], breast [17,18,19,20], lung [17,18,21,22], colorectal [17,18,21,22], and head and neck cancer [23]. The presence of the C776G polymorphism in the TC2 gene leads to substitution of the amino acid proline by arginine at codon 259 (P259R) [24,25]. Studies suggest that the presence of the C776G polymorphism in the TC2 gene may affect the binding affinity of transcobalamin to cobalamin (Cbl) and the ability to transport Cbl to tissues [25,26]. Although no studies have linked the C776G TC2 gene polymorphism with the occurrence of oral cancer, Biselli [27] reports a relationship between the polymorphism and the maternal risk of Down syndrome, in whose etiology abnormal folate metabolism is implicated. Afman et al.
[25] found no association between the presence of the C776G TC2 gene polymorphism and the risk of neural tube defects. Folate, a vitamin of the B group involved in one-carbon metabolism, plays an important role in DNA synthesis and methylation. Several polymorphisms in the genes involved in folate uptake and biotransformation have been shown to be associated with the risk of cancer and response to anticancer drugs. The aim of this study was to determine the relationship between the C776G polymorphism of the TC2 gene and the risk of oral cancer.
Material and methods
The study was conducted with the approval of the Local Ethics Committee of the Medical University of Lodz (RNN/142/09/KB). For the study group, DNA was isolated from peripheral blood lymphocytes obtained from a group of 119 unrelated patients, 48 women and 71 men (mean age 48 ± 13.50), with oral cavity cancer which had been confirmed histologically. All patients were diagnosed and treated in the Department of Head and Neck Neoplasms Surgery, Medical University of Lodz, between 2008 and 2014. For the control group, DNA was extracted from 102 unrelated healthy volunteers: 43 women and 59 men (mean age 48 ± 17.90). Samples of peripheral blood (5 ml in EDTA, ethylenediaminetetraacetic acid) were taken from the antecubital vein. Postoperative material consisted of cancerous tissues taken from the mouth. All patients and controls were matched for age and gender. The RefSNP accession ID (rs number) for the C776G polymorphism of the TC2 gene is rs1801198.
Polymorphism analysis: TC2 C776G genotyping by PCR-RFLP
DNA was extracted from peripheral blood lymphocytes using DNA Blood Mini Kits (A&A Biotechnology, Gdynia, Poland). Genotypes were determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). PCR was carried out in a volume of 10 µl. The reaction mixture consisted of 100 ng of genomic DNA, 0.5 µmol of each primer and 3 U of Taq polymerase. The PCR cycling conditions consisted of an initial denaturation step of 94°C for 10 minutes, followed by 34 cycles of 94°C for 1 minute, 56°C for 45 s and 72°C for 45 s, and a final extension at 72°C for 10 minutes.
A 10 µl aliquot of each specific PCR product was digested overnight with 1 µl of ScrFI (New England Biolabs, Beverly, MA) at 37°C, and the digested DNA fragments were resolved on a 3% agarose gel. Randomly selected DNA samples amplified by PCR for each genotype were cross-checked by DNA sequencing, and the results were found to be 100% concordant [27]. The primers, length of PCR products and restriction enzymes are summarized in Table I. Genotypes were designated as follows: 99 bp and 201 bp for C/C; 99, 201 and 300 bp for C/G; and 300 bp for G/G. Figure 1 presents a representative electropherogram obtained after digestion of the PCR products.
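The designation rule quoted above amounts to a simple mapping from the set of observed fragment lengths to a genotype, as sketched below with illustrative inputs.

# Sketch: calling the TC2 C776G genotype from the restriction fragment lengths
# observed after ScrFI digestion, following the designation rule in the text.

def call_tc2_genotype(bands):
    patterns = {
        frozenset({99, 201}): "C/C",        # both alleles cut (99 + 201 bp)
        frozenset({99, 201, 300}): "C/G",   # one cut and one uncut allele
        frozenset({300}): "G/G",            # neither allele cut (300 bp only)
    }
    return patterns.get(frozenset(bands), "undetermined")

for observed in [{99, 201}, {99, 201, 300}, {300}]:       # illustrative band sets
    print(sorted(observed), "->", call_tc2_genotype(observed))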
Statistical analysis
The frequency of the analyzed polymorphism among the oral cancer patients and controls was evaluated using the Hardy-Weinberg (HW) equilibrium test. The odds ratios (OR) and 95% confidence intervals (95% CI) were adjusted for age and gender. Means were compared using the t-test or analysis of variance. The distribution of the alleles was analyzed with Yates' χ2 test according to age, gender, tumor location, grading, nodal involvement, distant metastases and recurrence. P-values were calculated as two-sided. Probabilities were considered significant at p-values less than 0.05.
Results

The distribution of the TC2 C776G polymorphism in patients (χ2 = 0.4671; p = 0.4942) and in the control group (χ2 = 1.6034; p = 0.2054) was found to correspond to the Hardy-Weinberg prediction. The TC2 C776G genotype was found to be C/C in 25 (21.00%), C/G in 55 (46.20%) and G/G in 39 (33.00%) patients with cancer of the oral cavity, while in control subjects C/C was found in 45 (44.10%), C/G in 41 (40.20%) and G/G in 16 (15.70%) subjects. Two genotypes were associated with an increased risk of sporadic cancer compared with controls: C/G (OR: 2.4146; 95% CI: 1.2803-4.5541; p = 0.01) and G/G (OR: 4.3875; 95% CI: 2.0518-9.319; p = 0.001). A more detailed distribution of genotypes and alleles for the case and control groups is given in Table II.
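For readers who wish to check these figures, the genotype counts above are sufficient to reproduce the Hardy-Weinberg chi-square statistics and, approximately, the reported odds ratios. The Python sketch below performs a crude, unadjusted recalculation with Woolf-type confidence intervals; it is illustrative only and is not the analysis code used in the study, where the ORs are described as adjusted for age and gender.

```python
# Minimal re-calculation of the Hardy-Weinberg chi-square statistics and the
# crude odds ratios (with Woolf-type 95% CIs) from the genotype counts
# reported above.  Illustrative only; a crude OR is an approximation of the
# published, covariate-adjusted estimates.
import math

def hw_chi_square(n_cc, n_cg, n_gg):
    """Goodness-of-fit chi-square against Hardy-Weinberg expectations."""
    n = n_cc + n_cg + n_gg
    p = (2 * n_cc + n_cg) / (2 * n)          # frequency of the C allele
    q = 1.0 - p
    expected = (n * p * p, n * 2 * p * q, n * q * q)
    observed = (n_cc, n_cg, n_gg)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def odds_ratio_ci(case_exp, case_ref, ctrl_exp, ctrl_ref, z=1.96):
    """Crude odds ratio and Woolf 95% CI for a 2x2 genotype-vs-status table."""
    or_ = (case_exp * ctrl_ref) / (case_ref * ctrl_exp)
    se = math.sqrt(1 / case_exp + 1 / case_ref + 1 / ctrl_exp + 1 / ctrl_ref)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

cases = {"C/C": 25, "C/G": 55, "G/G": 39}       # oral cancer patients
controls = {"C/C": 45, "C/G": 41, "G/G": 16}    # healthy controls

print("HW chi2, cases:    %.4f" % hw_chi_square(*cases.values()))     # ~0.467
print("HW chi2, controls: %.4f" % hw_chi_square(*controls.values()))  # ~1.603

for g in ("C/G", "G/G"):
    o, lo, hi = odds_ratio_ci(cases[g], cases["C/C"], controls[g], controls["C/C"])
    print(f"{g} vs C/C: OR = {o:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```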
Table III contains data showing a lack of correlation between genotype, sex, smoking and alcohol use.
We analyzed the distribution of the TC2 C776G genotype in the largest groups of cancers, i.e. C00, C02 and C04. The TC2 C776G genotype was found to be C/C in 4 (20%), C/G in 9 (45%) and G/G in 7 (35%) patients with cancer of the lip (C00). In patients with tongue cancer (C02) the TC2 C776G genotype was found to be C/C in 6 (11.1%), C/G in 23 (42.6%) and G/G in 25 (46.3%). In patients with cancer of the floor of the mouth (C04) the TC2 C776G genotype was found to be C/C in 4 (13.3%), C/G in 9 (30%) and G/G in 17 (66.7%).
Discussion
Oral cavity cancer is characterized by rapid development, and clinical malignancy and lymph node metastases occur in about 40% of patients with oral cancer [28]. Eighty percent of cases of oral cancer are curable. Precancerous lesions should be removed and treated in smokers or those infected with HPV 16 and 18. Although one quarter of patients are too young to smoke and drink alcohol, the number of these patients is increasing rapidly. The risk of developing oral cancer is seven times higher in smokers and six times higher in alcohol abusers [29,30,31]. In those who give up smoking, the risk of cancer decreases with time, whereas the risk increases with the number of cigarettes smoked per day.
Dietary factors may play a role, together with genetic predisposition, because not all smokers or alcoholics get cancer, and not all patients with cancer have such habits [32,33]. Other factors are associated with the ability to metabolize carcinogens or repair DNA. Individual susceptibility to cancer may be associated with a particular genotype, which might in turn be associated with metabolic disorders, resulting in increased exposure to carcinogens [34], the most important being aromatic amines. To summarize, oral cancer is known to be associated with smoking and chewing tobacco, the consumption of alcoholic beverages, a diet low in fresh fruits and vegetables, poor oral hygiene and ill-fitting dentures, and, in the case of cancer of the lips, exposure to sunlight [35]. Water-soluble vitamins, such as the B vitamins (B1, B2, B3, B6, B12), folic acid and vitamin C, cannot be stored by the body and are rapidly excreted [26,35]. One essential nutrient is cobalamin, which plays an important role as a coenzyme in the conversion of L-methylmalonyl-CoA to succinyl-CoA and in the remethylation of homocysteine to methionine. HPV infection (most commonly HPV16) can also cause some forms of cancer, and patients with oral cavity cancer associated with HPV infection are younger and often do not use alcohol and cigarettes.
Although no connection has been found previously between the occurrence of the TC2 C776G polymorphism and the risk of cancer of the oral cavity, it has been found to be related to the occurrence of other disease entities. For example, the TC2 C776G polymorphism has been associated with a higher risk of developing colorectal adenoma [35]. In addition, the 776G allele has been associated with the presence of colorectal neoplasia [36].
Our results indicate that the C/G and G/G genotypes and the G allele increase the likelihood of oral cancer. However, further research on gene interactions in the metabolism of folic acid, and studies in different populations, are needed to examine these polymorphisms and the risk of cancer.
In conclusion, the present study suggests that the presence of the TC2 C776G polymorphism is associated with an increased risk of oral cancer. Genotypes C/G and G/G, and the 776G allele, increase the risk of cancer.
The authors declare no conflict of interest.
Table I .
Primers, length of PCR products and restriction enzymes
Table II .
Genotype and allele frequencies for TC2 C776G in oral cancer in patients and control subjects from the Polish population
Table III .
Odds ratio of head and neck cancer related to TC2 genotypes by gender, tobacco and alcohol consumption in patients TC2 C776G Reference: CC wild type genotype | 2018-04-03T05:51:12.306Z | 2016-11-25T00:00:00.000 | {
"year": 2016,
"sha1": "5404c3abbf3e8e91f969fd222e85753c8a73e419",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-55/pdf-28765-10?filename=TC2%20C776G.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5404c3abbf3e8e91f969fd222e85753c8a73e419",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15219062 | pes2o/s2orc | v3-fos-license | The effect of endometrial thickness and pattern measured by ultrasonography on pregnancy outcomes during IVF-ET cycles
Background To study the effect of endometrial thickness and pattern measured using ultrasound upon pregnancy outcomes in patients undergoing IVF-ET. Method One thousand nine hundred thirty-three women undergoing IVF treatment participated in the study. We assessed and recorded endometrial patterns and thickness on the day of human chorionic gonadotropin (hCG) administration. Receiver operating characteristic (ROC) curves were used to determine the predictive accuracy of endometrial thickness. Cycles were divided into 3 groups depending on the thickness (group 1: ≤ 7 mm; group 2: > 7 mm to ≤ 14 mm; group 3: > 14 mm). Each group was subdivided into three groups according to the endometrial pattern as follows: pattern A (a triple-line pattern consisting of a central hyperechoic line surrounded by two hypoechoic layers); pattern B (an intermediate isoechogenic pattern with the same reflectivity as the surrounding myometrium and a poorly defined central echogenic line); and pattern C (homogenous, hyperechogenic endometrium). Clinical outcomes such as implantation and clinical pregnancy rates were analyzed. Results The endometrial thickness predicts pregnancy outcome with high sensitivity and specificity. The cutoff value was 9 mm. The implantation rate and clinical pregnancy rate in group 3 were 39.1% and 63.5%, respectively, which were significantly higher than those in group 2 (33.8% and 52.1%, respectively) and group 1 (13% and 25.5%, respectively). Among those with Pattern A, the implantation rate and clinical pregnancy rate were 35.3% and 55.2%, respectively, which were significantly higher than among women with Pattern B (32.1% and 50.9%, respectively) and Pattern C (23.4% and 37.4%, respectively). In groups 1 and 3, clinical pregnancy and implantation rates did not show any significant differences between different endometrial patterns (P > 0.05), whereas in group 2, the clinical pregnancy rate and implantation rate in women with pattern A were significantly higher than those with pattern B or C (P < 0.05). Conclusions Endometrial thickness and pattern independently affect pregnancy outcomes. Combined endometrial thickness and pattern could not predict the outcome of IVF-ET when endometrial thickness was < 7 mm or >14 mm, while a triple-line pattern with a moderate endometrial thickness appeared to be associated with a good clinical outcome.
Background
The success of in vitro fertilization and embryo transfer (IVF-ET) cycles depends mainly on embryo quality and uterine receptivity [1]. With respect to uterine receptivity, evaluation of endometrial receptivity continues to be a challenge in assisted reproductive technology (ART). Ultrasonographic examination has been routinely performed for evaluation of the endometrium in ART treatment because it allows accurate and noninvasive evaluation.
Although many studies have implicated endometrial thickness and pattern as prognostic parameters for successful outcomes in IVF-ET, there is still no consensus on whether the endometrial ultrasound characteristics can predict the pregnancy outcome. Many studies have shown a correlation between endometrial thickness or a certain type of echogenic pattern and uterine receptivity [2][3][4][5][6][7][8][9][10]. Some studies have suggested a minimal thickness for a successful pregnancy to occur, while others have reported adverse effects of increased endometrial thickness above which pregnancy is unlikely to occur [6,11,12]. In contrast, others have failed to demonstrate a relationship between endometrial thickness, pattern, and pregnancy and implantation rates [13][14][15][16][17]. Furthermore, few studies have combined endometrial thickness and pattern to predict the outcome of IVF-ET. The aim of our study was to evaluate the endometrial characteristics on the day of hCG administration. In particular, we aimed to assess the correlation between endometrial thickness and pattern (individually and together) and IVF outcome.
Patient recruitment and counseling
The study was reviewed and approved by the Institutional Review Board and the Ethics Committee of Xiangya Hospital, Changsha, China. The study was conducted in accordance with the Declaration of Helsinki, as revised in 1983. We conducted a retrospective cohort study of 1933 consecutive infertile patients. Briefly, patients underwent fresh IVF-ET between January of 2009 and May of 2011 at the Reproductive Medicine Center of Xiangya Hospital Central South University (Changsha, China). Exclusion criteria included the following: the presence of a known endometrial polyp or uterine anomaly, an insemination method other than IVF, and cycles using donor oocytes or cryopreserved embryos. Patients underwent no therapeutic interventions except routine procedures.
Ovulation induction and IVF-ET procedures
The choice of stimulation protocol was individual and was based on the patient's age, diagnosis, reproductive history and ovarian response, and coexisting medical conditions. When the serum estradiol concentration (E2) level was ≤50 pg/ml, and the longest follicle diameter was <10 mm without ovarian cysts, controlled ovarian hyperstimulation (COH) was performed. COH was achieved with administration of gonadotrophin, including follicle stimulating hormone (FSH) and/or human menopausal gonadotrophin (hMG). The initial dosage of gonadotrophin ranged from 150 to 450 IU, depending on the basal FSH level, antral follicular count (AFC), and maternal age. When at least two follicles were ≥18 mm in diameter and when serum E2 level was within the acceptable range for the number of mature follicles present, 10000 IU of hCG was administered. Oocyte retrieval was performed 36 hours after the administration of hCG and followed by conventional IVF. Up to three embryos were transferred 72 hours after oocyte collection. The luteal phase was supported using a daily intramuscular injection of 80 mg of progesterone in oil. Biochemical pregnancies were considered as failure to conceive. Clinical pregnancy was defined as identification of a gestational sac 4-5 weeks after embryo transfer.
Ultrasound measurement
Measurement of endometrial thickness and pattern was performed 11-12 hours before the hCG injection by transvaginal 8 MHz ultrasonography with Doppler Ultrasound (Mindray DC-6 Expert, Shenzhen, China) after patients had rested for at least 15 minutes and completely emptied their bladders. Endometrial thickness was measured in a median longitudinal plane of the uterus as the maximum distance between the endometrial-myometrial interface of the anterior to the posterior wall of the uterus. All cycles were divided into the following three groups depending on the thickness: group 1: ≤ 7 mm; group 2: > 7 mm to ≤ 14 mm; group 3: > 14 mm. Endometrial pattern was classified as pattern A (a triple-line pattern consisting of a central hyperechoic line surrounded by two hypoechoic layers), pattern B (an intermediate isoechogenic pattern with the same reflectivity as the surrounding myometrium and a poorly defined central echogenic line), or pattern C (homogenous, hyperechogenic endometrium). Endometrial thickness groups were subdivided into three endometrial types.
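For clarity, the grouping rule just described can be expressed as a small helper. The Python sketch below simply encodes the three thickness bands; it is illustrative and not part of the study's software.

```python
# Illustrative helper encoding the endometrial thickness grouping used above
# (group 1: <=7 mm; group 2: >7 mm to <=14 mm; group 3: >14 mm).

def thickness_group(thickness_mm: float) -> int:
    """Return the study's thickness group (1, 2 or 3) for a measurement."""
    if thickness_mm <= 7:
        return 1
    if thickness_mm <= 14:
        return 2
    return 3

# Each cycle is then characterised by a (group, pattern) pair, e.g. a 9.5 mm,
# triple-line endometrium on the day of hCG would be recorded as (2, "A").
if __name__ == "__main__":
    for t in (4.8, 9.5, 19.7):
        print(f"{t} mm -> group {thickness_group(t)}")
```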
Statistical analysis
Continuous data are expressed as the mean ± SD values or as median and range according to the distribution and were analyzed with Student's t-test. Categorical data were presented as counts, and the statistical comparison of percentage was carried out with the chi-square test. Statistical analysis was performed with SPSS (Statistical Package for Social Science, SPSS Inc, Chicago, IL, USA) version 16.00. P < 0.05 was considered statistically significant.
Results
The clinical pregnancy rate was 52.3%, and the implantation rate was 33.2%. Patients ranged in age from 21 to 47 years, and endometrial thickness on the day of hCG administration ranged from 4.8 mm to 28.02 mm. Other demographic data, such as basal FSH, duration of infertility, and number of embryos transferred, are summarized in Table 1.
In women with Patterns A and B, the pregnancy rates were 55.2% and 50.9%, respectively, which were significantly higher than the rate of 37.4% in women with Pattern C (P < 0.05), while there was no difference between Patterns A and B (55.2% vs. 50.9%, respectively; P > 0.05). The implantation rates differed significantly between women with patterns A, B and C (35.3% vs. 32.1% vs. 23.4%, respectively; P < 0.05). Progesterone levels on the day of hCG administration among women with pattern C were significantly higher than those of women with Patterns A and B (Table 2).
Clinical pregnancy rates were 25.5% in group 1 (≤7 mm), 52.1% in group 2 (>7 mm to ≤14 mm) and 63.5% in group 3 (> 14 mm), and the difference between the groups was statistically significant (P < 0.05) (Table 2). The implantation rate in group 3 was significantly higher than that of groups 1 and 2, and there was no significant difference between groups 1 and 2. Endometrial thickness was further evaluated at threshold increments of 1 mm to assess its discriminatory ability for clinical pregnancy. Pregnancy rates ranged from 28.6% among patients with an endometrial thickness of ≤6 mm to 67.7% among patients with an endometrial thickness of >16 mm. Implantation rates also increased with increasing endometrial thickness (data not shown). An endometrial thickness threshold of 7 mm was observed below which pregnancy rates decreased rapidly.
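To illustrate the kind of threshold analysis described above, the sketch below evaluates sensitivity and specificity for clinical pregnancy at 1 mm increments of endometrial thickness and picks the cutoff that maximises Youden's index. The per-cycle records are invented placeholders, not study data, so the printed cutoff is for illustration only.

```python
# Sketch of a 1 mm threshold (ROC-style) analysis: sensitivity and specificity
# for clinical pregnancy are evaluated at each candidate cutoff and the cutoff
# maximising Youden's J (sensitivity + specificity - 1) is reported.
# The (thickness_mm, pregnant) records below are hypothetical placeholders.
records = [(6.0, 0), (6.5, 0), (7.2, 0), (8.1, 1), (8.4, 0),
           (9.0, 1), (9.8, 1), (10.5, 0), (11.2, 1), (12.0, 1),
           (13.5, 1), (14.2, 1), (15.0, 0), (16.1, 1), (18.0, 1)]

def sens_spec(data, cutoff):
    """Sensitivity and specificity when 'thickness >= cutoff' predicts pregnancy."""
    tp = sum(1 for t, y in data if y == 1 and t >= cutoff)
    fn = sum(1 for t, y in data if y == 1 and t < cutoff)
    tn = sum(1 for t, y in data if y == 0 and t < cutoff)
    fp = sum(1 for t, y in data if y == 0 and t >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

best = max(range(5, 17),                                   # candidate cutoffs, mm
           key=lambda c: sum(sens_spec(records, c)) - 1)   # Youden's J
sens, spec = sens_spec(records, best)
print(f"best cutoff ~{best} mm: sensitivity {sens:.2f}, specificity {spec:.2f}")
```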
For further analysis, the three endometrial thickness groups were subdivided into three endometrial pattern groups. In group 1, pregnancy rates and implantation rates showed no significant differences between those with patterns A, B and C (pregnancy rates: 27.8% vs. 20.8% vs. 40.0%, respectively; P >0.05; implantation rates: 15.8% vs. 9.6% vs. 20%, respectively; P > 0.05). Among group 2, the pregnancy rates and implantation rates were significantly different between groups A, B and C (pregnancy rates: 55.6% vs. 50.2% vs. 34.3%, respectively; P < 0.05; implantation rates: 35.7% vs. 31.9% vs. 22.1%, respectively; P < 0.05). In group 3, there was no difference in clinical pregnancy and implantation rates between women with the three patterns (pregnancy rates: 56.0% vs. 76.1% vs. 62.5%, respectively; implantation rates: 35.5% vs. 46.0% vs. 35.3%, respectively; P > 0.05). Clinical pregnancy and implantation rates increased significantly with increasing endometrial thickness only among those with pattern A, but showed no significant increase with endometrial thickness among those with patterns B and C. (Table 3).
Discussion
Some studies have reported a significant correlation between endometrial thickness and pregnancy rate [9,[18][19][20]. However, some do not support this view [1,13]. Our results agreed with previous studies that reported a correlation between endometrial thickness and clinical pregnancy. This clear relationship provided additional evidence to suggest that endometrial thickness is a useful indicator of endometrial receptivity.
Many studies have found a thin endometrium to be associated with a lower implantation rate, but no absolute cutoff for endometrial thickness exists; good pregnancy rates have been reported in cycles with endometrium <6 mm, and a successful pregnancy has been reported with an endometrial thickness of only 4 mm [17]. Noyes et al. [8] found that clinical pregnancy rate and live birth rate were significantly lower when endometrial thickness was less than 8 mm than when endometrial thickness was ≥9 mm. In the present study, the thinnest endometrial lining for a successful clinical pregnancy was 4.8 mm. The clinical pregnancy (25.5%) and implantation (13%) rates in group 1 were significantly lower than in groups 2 and 3. The relatively lower pregnancy rate observed in this group suggests that more attention needs to be given to embryos transferred to such patients.
Why does a thinner endometrium result in implantation failure? Casper RF [21] speculated that it may be related to oxygen tension. When the thickness measured by ultrasound is < 7 mm, the functional layer is thin or absent, and the implanting embryo would be much closer to the spiral arteries and the higher vascularity and oxygen concentrations of the basal endometrium. The high oxygen concentrations near the basal layer could be detrimental compared with the usual low oxygen tension of the surface endometrium.
Weissman et al. [22] showed that pregnancy rate was significantly lower above a maximum thickness of 14 mm, and they also suggested a possible increase in spontaneous abortion rates. Rashidi et al. [11] reported no pregnancies with an endometrial thickness >12 mm (n = 9). However, Richter et al. [4] and Ai-Ghamdi et al. [23] demonstrated a significant increase in the pregnancy rates as endometrial thickness increased, which was independent of the number and quality of the embryos transferred. In the present study, implantation and pregnancy rate increased with increasing endometrial thickness. Therefore, our findings support some previous studies in which increased endometrial thickness did not have a detrimental effect on clinical outcome. A case report [24] has described a successful twin IVF pregnancy in a woman with an endometrial stripe measuring 20 mm. In our study, the maximum endometrial thickness for a successful pregnancy was 19.7 mm.
Ultrasound measurement of endometrial pattern has been suggested to predict pregnancy outcome, but consensus has not been reached regarding the importance of either variable. Some studies [10,[25][26][27] believed that a trilaminar pattern of the endometrium was correlated with higher implantation and pregnancy rates, while other studies did not find a significant relationship between endometrial pattern and pregnancy rate [11,18,28,29].
Our analysis found that significantly decreased implantation and pregnancy rates were observed in women without a triple-line endometrial pattern on the day of hCG administration. Several studies have suggested that a premature secretory endometrial pattern is introduced by an advanced P rise, and this premature conversion has an adverse effect on pregnancy rates. In our study, higher P levels were found in women with patterns C and B compared to those with pattern A (0.79 vs. 0.65 vs. 0.58 ng/ml, respectively; P < 0.05). However, another team [30] found that progesterone receptor-B (PR-B) has stimulatory effects and that an increased PR-B expression induced by ovarian stimulation would lead to the persistence of a proliferative endometrium. The delayed endometrial maturation would thus be desynchronized with the stage of embryo development, leading to decreased implantation rates in ART cycles. The exact mechanism for this is not known, and a rational explanation for this phenomenon awaits further study. Despite a lower pregnancy rate and implantation rate when a homogeneous, hyperechoic pattern is noted, we disagree with some investigators who recommend embryo cryopreservation and subsequent ET in a frozen cycle. We agree with Friedler [31] that endometrial pattern offers important predictive information but should not be used as an absolute predictor of conception. Therefore, we believe that such patients should be adequately counseled and given the most adaptive advice.
When assessing the combined effect of endometrial thickness and pattern on clinical outcome, we found that the clinical pregnancy and implantation rates were not significantly different between women with patterns A, B and C in group 1 ( P > 0.05), which may indicate that a thinner endometrium represents poor receptivity of the endometrium regardless of endometrial pattern, while Chen et al. [18] found that a thinner endometrial thickness with a triple line pattern is associated with a higher clinical pregnancy rate compared to a thinner endometrium with no triple line pattern. There was also no difference between the patterns in group 3, and perhaps adequate endometrial thickness (>14 mm) mitigated the detrimental impact of not having a triple line pattern. There was significant difference in clinical pregnancy and implantation rates between women with the three patterns in group 2. These findings were not in accord with previous studies. Check et al. [26] found that no pregnancies occurred in patients with homogeneous hyperechoic endometrium, and Chen et al. [18] found that there were no differences in clinical pregnancy rate between patterns when endometrial thickness was ≥7 mm. Our results suggest that endometrial pattern has an effect on pregnancy rate when women have a moderate endometrial thickness (7-14 mm).
There are several possible explanations for these inconsistencies. Most studies assessed endometrial thickness and pattern on the day of or following hCG administration and on the day of oocyte retrieval, while other studies assessed the endometrium on the day of ET, and even fewer assessed it on both the days of hCG injection and ET. Therefore, the optimal timing of endometrial assessment remains unknown. Previous studies found that assessment on the day of hCG might be more useful as a prognostic test given the earlier timing and the absence of P exposure [32,33].
In addition, it is necessary to note that the correlation between endometrial thickness and pattern and pregnancy outcome shown in our study does not imply a causal relationship. The relationship may merely result from some other factors that are directly responsible for endometrial receptivity (such as blood flow or some other underlying physiological machinery responsible for cyclic endometrial development). Therefore, although some treatments may significantly improve endometrial thickness, such therapies may not necessarily have any clinical benefit in terms of pregnancy rate.
This study has some limitations, the most important of which is that it is retrospective in nature. However, we believe the results are of interest because similar studies have been published with conflicting results. A well-designed and adequately powered randomized clinical trial will be needed to confirm these results.
Conclusions
When endometrial thickness was ≤ 7 mm, other prognostic factors, such as embryo quality and age, should be taken into consideration. Because an endometrial thickness of ≤ 7 mm was observed in only 2.4% of cycles in our study, further study is needed to make a definitive conclusion regarding this group. Regardless of the endometrial pattern, a thicker endometrium (>14 mm) did not have an adverse effect on the clinical outcome. Endometrial pattern can be considered when women have a moderate endometrial thickness. | 2015-03-27T18:11:09.000Z | 2012-11-28T00:00:00.000 | {
"year": 2012,
"sha1": "0d59e398859661e96fc8070aa24ba4f347cd4aee",
"oa_license": "CCBY",
"oa_url": "https://rbej.biomedcentral.com/track/pdf/10.1186/1477-7827-10-100",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6444b7ae7538bc43abcfba41f3c16f42faf2314c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
79978345 | pes2o/s2orc | v3-fos-license | VACCINE FOR NEUROCYSTICERCOSIS : A PRESENT UPDATE
Cysticercosis is an important tropical parasitic worm infestation. This infection can result in cysts in any organ of the human body, including the brain. Brain involvement in cysticercosis, or neurocysticercosis, is an important neurological infection that can cause serious neurological problems. Good sanitation is the basic prevention for cysticercosis. Nevertheless, the hope is the use of vaccination. Here, the author briefly reviews and discusses the present situation of neurocysticercosis vaccines.
INTRODUCTION
Neurological infection is an important problem in neurology. There are several tropical neurological infections, and an important group is the parasitic neurological infections.
Several parasites can infect the nervous system and cause serious neurological problems. Of these diseases, cysticercosis is a common problem in tropical medicine. 1 In general, cysticercosis is an important tropical parasitic worm infestation caused by the pathogenic Taenia spp. cestode. 1 This infection can result in cysts in any organ of the human body, including the brain. Brain involvement in cysticercosis, or neurocysticercosis, is an important neurological infection that can cause serious neurological problems. 2 Although most cases may be asymptomatic silent infections, some cases present with severe neurological problems as well as death. 2,3 Good sanitation is the basic prevention for cysticercosis. 4 Nevertheless, the hope is the use of vaccination. Here, the author briefly reviews and discusses the present situation of neurocysticercosis vaccines.
Research on neurocysticercosis vaccines: the present situation
There are some reports regarding neurocysticercosis vaccines. Most reports are basic studies on antigenic properties and epitope finding that can be useful for further vaccine development. The use of bioinformatics techniques has become a new, useful tool for epitope searching in cysticercosis vaccine development. A good example is the report by Zimic et al. 5 Zimic et al. reported the "immunoinformatics prediction of linear epitopes from Taenia solium TSOL18." 5 An additional study by Guo et al. on "mapping of Taenia solium TSOL18 antigenic epitopes by phage display library" is also very interesting. 6 Guo et al. noted that "the antigenic epitope could be mapped through screening the phage-displayed peptide libraries with mAbs and a mimotope of TSOL18, which could provide an alternative approach for the diagnosis and development of a vaccine for T. solium." 6

The data from these reports are used for further study on epitope testing, and there are also some reports on testing in animal models. For example, Kyngdon et al. reported a pig model study on the "antibody responses and epitope specificities to the Taenia solium cysticercosis vaccines TSOL18 and TSOL45-1A." 7 Until now, the test results in pigs have been favorable, and one study concluded that "a control scenario involving vaccination plus oxfendazole treatment delivered at 4 monthly intervals was predicted to achieve the best outcome." 8

The interesting issue is the vaccine candidate for human beings. There are some reports on newly available vaccine candidates. A good example of a vaccine candidate is the "S3Pvac vaccine antigens," which is "constituted by three protective synthetic peptides: KETc1, KETc12 and GK." 9,10 This candidate was shown to be effective in pigs, and testing in human subjects is ongoing. 11 Apart from vaccine candidate finding, another interesting issue in tropical vaccinology for neurocysticercosis is the development technique for the vaccine. Based on advanced biomedical engineering, several new techniques have been proposed as useful methods for vaccine development. Recombinant protein technology is basically used. 12 Many reports confirm the antigenicity of the products resulting from recombinant protein technology. 13,14 Nevertheless, as already noted, most of the studies are in animal models and there is still no report of a trial in human subjects. The recent trial on human mononuclear cells by Díaz-Orea et al. is very interesting. 9 Díaz-Orea et al. reported that the "S3Pvac vaccine antigens" could be a useful adjuvant for treatment of patients with neurocysticercosis. 9 Hence, the new vaccine candidate can be a hope for further development of a therapeutic vaccine against neurocysticercosis.
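As background to the immunoinformatics work cited above, the sketch below illustrates one very simple family of linear-epitope screening methods: a sliding-window average of a residue propensity scale, with the most hydrophilic (and therefore typically surface-exposed) windows flagged as candidate antigenic regions. The Kyte-Doolittle scale and the toy sequence are placeholders chosen for illustration; this is not the algorithm used by Zimic et al. or Guo et al.

```python
# Illustrative sliding-window scan of the kind used by simple linear-epitope
# predictors: average a residue propensity scale over each window and flag the
# most hydrophilic stretches as candidate epitope regions.  The scale is
# Kyte-Doolittle hydropathy (lower = more hydrophilic); the sequence below is
# a made-up placeholder, not TSOL18.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def window_scores(seq, width=7):
    """Mean hydropathy for every window of the given width."""
    return [(i, sum(KD[aa] for aa in seq[i:i + width]) / width)
            for i in range(len(seq) - width + 1)]

def candidate_epitopes(seq, width=7, top=3):
    """Return the 'top' most hydrophilic windows as (start, score) pairs."""
    return sorted(window_scores(seq, width), key=lambda x: x[1])[:top]

if __name__ == "__main__":
    toy_seq = "MKTLLVAGDDERKNQSPEEKLHRAGWVILSTTNKDDPE"   # placeholder only
    for start, score in candidate_epitopes(toy_seq):
        print(f"window {start}-{start + 6}: {toy_seq[start:start + 7]} "
              f"(mean hydropathy {score:.2f})")
```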
CONCLUSION
There is ongoing research on new vaccines against cysticercosis, and this is a real hope for prevention and control of cysticercosis, including neurocysticercosis. Nevertheless, as cysticercosis is a neglected tropical disease, limited grants and funding for cysticercosis vaccine research and development can be expected, and this might delay the successful discovery of a new vaccine.
Figure 1. Brief concept for vaccine finding for neurocysticercosis | 2018-12-06T20:30:22.010Z | 2018-01-02T00:00:00.000 | {
"year": 2018,
"sha1": "d8c4a3b56c4ddfb17336157704eee5b836676ea0",
"oa_license": "CCBYNC",
"oa_url": "https://mnj.ub.ac.id/index.php/mnj/article/download/303/172",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d8c4a3b56c4ddfb17336157704eee5b836676ea0",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254620962 | pes2o/s2orc | v3-fos-license | Advancing Access to Healthcare through Telehealth: A Brownsville Community Assessment
(1) Background: This paper focuses on the development of a community assessment for telehealth using an interprofessional lens, which sits at the intersection of public health and urban planning using multistakeholder input. The paper analyzes the process of designing and implementing a telemedicine plan for the City of Brownsville and its surrounding metros. (2) Methods: We employed an interprofessional approach to CBPR which assumed all stakeholders as equal partners alongside the researchers to uncover the most relevant and useful knowledge to inform the development of telehealth community assessment. (3) Results: Key findings include that: physicians do not have the technology, financial means, or staff to provide a comprehensive system for telemedicine; and due to language and literacy barriers, many patients are not able to use a web-based system of telemedicine. We also found that all stakeholders believe that telehealth is a convenient tool that has the capacity to increase patient access and care. (4) Conclusions: Ultimately, the use of an interprofessional community-based participatory research (CBPR) design allowed our team to bring together local knowledge with that of trained experts to advance the research efforts.
Introduction
Telehealth is a broad term that describes the remote provision of health care using technology such as telephone, apps, or web-based platforms [1,2]. Telemedicine is a subset of telehealth that refers specifically to the provision of clinical health care services ranging from asynchronous transmission of information or synchronous, live conferencing, between patient and clinician. For this assessment, we used the definition of telemedicine as defined by The Center for Connected Health Policy (CCHPCA) which states that telemedicine is "a collection of means or methods for enhancing health care, public health and health education delivery and support using telecommunications technologies. Telemedicine encompasses a broad variety of technologies and tactics to deliver virtual medical, health, and education services [3]".
In a virtual visit, the patient and clinician are connected via a live, synchronous, interactive video system. Some researchers believe that making information, education, and management resources readily available to patients, telehealth allows individuals to become partners in their own health, thus empowering them to make decisions along-side their care providers [4,5]. Established standards for evaluating telehealth interventions recommend that a crucial time for evaluation of a telehealth plan is during the conceptualization and design phase [6]. Research shows, however, that many telehealth interventions are only evaluated at the end of the study period, once the intervention has been fully developed, tested, and implemented [4,7].
Although telehealth was originally used to access patients in remote locations, virtual visits have increasingly been accepted as a tool to provide real-time, convenient medical care.
Materials and Methods
Our main objective for the research was to create a plan for implementing telemedicine and connected health technologies broadly across the City of Brownsville and surrounding metropolitan areas. To complete this assessment, we assembled an interprofessional team of experts from the fields of urban planning, health technology policy, technology implementation, and telemedicine. This assessment took place between April and September of 2021 and was broken up into three phases: exploration, assessment, and utilization.
During the exploration phase, we identified the users of this information, the geography of the defined coverage area, the demographic profile, the current internet capacity, and a list of internet providers. We defined the assets of the region, noting the number and types of healthcare providers, universities, think-tanks, regional accolades, and unique capabilities. It was during the exploration phase that our disciplinary silos were bridged to design a research approach that encompassed both an urban planning and a health technology policy approach to better inform the project assessment needs. Lastly, in this exploration phase, we reviewed and collated all relevant laws which apply to telemedicine, privacy and security, and healthcare reimbursement.
During this assessment phase, we sought contemporary feedback from stakeholders. Specific stakeholders included clinicians, patients, and leaders in the community. We used a variety of methods which included surveys, focus groups, and one-on-one qualitative interviewing techniques. All methodologies were reviewed by the Institutional Review Board.
1. One-on-one meetings were held in person with hospital leaders, outpatient clinics, physician offices, and healthcare clinics.
2. Individual focus groups were conducted with patients, interested parties, and industries.
3. Follow-up calls were made to multiple individuals until conceptual saturation.
4. Anonymous surveys of clinicians, patients, and stakeholders were broadly distributed in English and Spanish through community networks and collected through a Qualtrics database.
For the in-person meetings, several provider organizations in the region gave us their time, expertise, experiences, and aspirations for telemedicine. The one-on-one meeting participants were identified through snowball sampling [25]. Participants for the one-on-one meetings were recruited through clinic and provider rosters provided by the local municipal partners. They were contacted via phone and email and were not compensated for their participation.
These organizations that participated in the meetings included:

We asked provider organizations to assess the potential for a telemedicine program to impact value specific to their revenue, health outcomes, and patient experience. We asked the following:
• Will the program require a significant expenditure for the provider?
• How will the program impact workflow?
• Does the provider organization have the technology to support an implementation?
• Will your patients be able to access and use the telemedicine services?
• What is on your wish list for implementing an ideal telemedicine program?
Meetings with providers were arranged by the project principal investigator one month in advance of our visit. Meetings were scheduled back-to-back over a three-day period on 14 June through 16 June 2021. The research team met with practitioners for one-hour sessions and used the questions above as a guide for the discussion. The purpose of these conversation was to gain insights into their experience with telehealth and to have them strategize on what defines an ideal telemedicine program for their needs. Conversations from the one-on-one meetings offer a standardized method for gathering information from multiple respondents, while allowing the flexibility to pursue interesting threads that may arise in conversations [20,26].
On Tuesday 28th June 2021, we held an in-person focus group at the Brownsville Chamber of Commerce. This focus group engaged with key community partners. Representatives from local non-profit organizations and both the public and private sectors from across Cameron County were invited. Participants for the in-person focus group were recruited from a listing of local community partners prepared by the City of Brownsville. Participants were invited via email and asked to confirm their attendance. The focus group was conducted at the local Chamber of Commerce as this was seen as a neutral location to discuss a regional planning strategy. The focus group was conducted in English by a member of our research team, assisted by four student volunteers from Texas Southmost College, a local community college. These students served as facilitators during our breakout sessions. The focus group began with an introductory presentation on the topic by our research team. The focus group questions were presented and then the participants were invited to divide into two smaller discussion groups to facilitate more candid discussions. Two students were assigned to each group; one student facilitated the discussion while the other took notes. The discussion sessions ran for an hour, and then the groups reported back to the larger group for a thirty-minute discussion.
Organizations and entities that attended included:

Two in-person focus groups were held on 15 and 16 July 2021 targeting patients. The focus groups were held at the Bob Clark Social Service Center and Proyecto Juan Diego, both locations for community resources in low-income areas of Brownsville. Sessions were held in both English and Spanish. Participants were recruited from patients currently participating in a parallel study by the University of Texas Health Science Center at Houston. The focus groups centered on gauging patient buy-in with utilizing telehealth. Patient focus groups were conducted via Zoom. They were conducted in Spanish and facilitated by a researcher from the University of Texas Health Science Center at Houston who has a working relationship with the participants through other ongoing studies. The patient focus groups were supported by a Texas Southmost College student volunteer who served as support for the moderator and notetaker. Each session lasted one hour.
Audio recordings of the focus group were transcribed verbatim. All Spanish-language transcripts were translated into English by a researcher fluent in both Spanish and English. Using the flexible coding method [27], the text was divided into larger sections with broader structural codes; these sections were then further parsed using more granular, conceptual and thematic codes. This approach allows for a more focused analysis of subsections of the data, which is particularly effective for a data set that will be used for multiple research groups [20]. The team used a qualitative descriptive approach to data analysis, identifying themes inductively and thematically. Qualitative description (QD) is often used in health research to inform the development of interventions or policies that can improve health outcomes for various populations [28]. On the basis of exploring "the who, what, and where of events and experiences", QD provides a straight description based on participants' responses, making use of participants' own language to support the themes that emerge [4,29,30].
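To make the two-pass coding workflow concrete, the sketch below shows one possible way to organise the coded data: excerpts first receive broad structural codes, and a second pass attaches more granular thematic codes that can then be queried. The code names and excerpts are invented for illustration and are not taken from the study transcripts.

```python
# Illustrative data structure for the two-pass "flexible coding" workflow:
# broad structural codes are applied to whole excerpts first, then granular
# thematic codes are attached within each structural code.  The code names
# and excerpts are invented placeholders, not study data.
from collections import defaultdict

excerpts = {
    1: "I don't have internet at home, only my phone.",
    2: "The video visit saved me a two-hour bus ride.",
    3: "My doctor never told me telehealth was an option.",
}

# Pass 1: structural codes (broad sections of the interview guide).
structural = {1: "access_barriers", 2: "perceived_benefits", 3: "awareness"}

# Pass 2: granular thematic codes, grouped under their structural code.
thematic = defaultdict(list)
thematic["access_barriers"].append((1, "limited_broadband"))
thematic["perceived_benefits"].append((2, "transportation"))
thematic["awareness"].append((3, "provider_communication"))

def excerpts_for(structural_code, theme):
    """Return all excerpt texts tagged with a given granular theme."""
    ids = [i for i, t in thematic[structural_code] if t == theme]
    return [excerpts[i] for i in ids]

if __name__ == "__main__":
    print(structural[1], "->", excerpts_for("access_barriers", "limited_broadband"))
```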
To overcome barriers to improving health outcomes, it is important that researchers utilize practices that consider the social and cultural aspects of the population they intend to study [24]. Our diverse research team consisted of researchers native to South Texas, and to overcome the challenge of researchers being viewed as outsiders, local community college students were hired to facilitate discussions at all focus groups.
Once data were gathered through exploration and assessment, we proceeded to the final stage, utilization, in which we made recommendations. These include a list of health technology priorities based upon the study findings. A list of technology recommendations was established, and from these we created a set of use cases. In implementation science, a "use case" is created to show how technology may be used in a variety of scenarios to provide a representation of a future state. We provided suggestions of potential pilots that can be tested in smaller areas to establish the feasibility of a broader implementation.
Results
The following section provides a summary of the results of our telehealth assessment for the region. Overall, both the infrastructure assessment, which includes an analysis of the provider landscape and intellectual resources, and the provider landscape assessment focus on findings at the regional scale. The third portion of the assessment, the focus groups for community partners and participants, is focused mostly on the City of Brownsville.
Infrastructure Assessment
The study took place in the Rio Grande Valley (RGV). The RGV is the southernmost region of Texas, consisting of Cameron, Hidalgo, Starr, and Willacy Counties. Cameron County is the southernmost county in Texas. The 2019 United States Census estimated a population of 423,163 people, of whom an estimated 23% are foreign-born [31]. The majority of the population in Cameron County consists of people who identify as Hispanic (90%) [32]. Based on age, 29.9% of the population in Cameron County are younger than 18 years of age, and 13.8% are 65 years of age or older. Only 17.3% of the population 25 years and older have a bachelor's degree or higher. According to the 2019 Census, it is estimated that more than one quarter of the population (25.5%) live in poverty [31].
Brownsville is the largest metropolitan city within Cameron County and located in the Rio Grande Valley. Known as the southernmost point of Texas, Brownsville sits adjacent to Matamoros, Mexico and has a growing population of 182,781 people [33]. The City of Brownsville is also home to the rural Cameron Park area, known locally as a colonia. The Spanish term colonias is used to describe unincorporated settlements, neighborhoods, or communities along the U.S. border with Mexico. These areas typically lack multiple elements of infrastructure commonly found in developed neighborhood such as paved roads, sewer systems, electricity, gas, and potable water [34].
Brownsville is one of the most impoverished metropolitan areas in the United States, where 25% of the population and 48% of children live in poverty, 30% of the population is uninsured, 80% of the population is obese or overweight, and 30% have diabetes, with 50% of them unaware of it [35]. South Texas represents about 18% of the state's entire population, of which more than two-thirds are Hispanic. The population has a low post-secondary education rate of only about 18%. Only about half the population has access to broadband internet. South Texas residents are confronted with poor health outcomes and health gaps compared to the state of Texas as a whole; these include tuberculosis, chlamydia, cancer, birth defects, diabetes, obesity, and lead poisoning.
Obesity and diabetes are endemic to South Texas, with incidence rates much higher than state and national levels. Obesity is a causal risk factor for diabetes and is directly linked to lifestyle behaviors, physical behaviors, and eating habits. Lower-income individuals and patients who do not have health insurance have a significantly higher likelihood of having undiagnosed diabetes, and the resulting costs in both economic and human terms can be devastating [36].
One overarching consideration here is that the Rio Grande Valley, being a border area, has a large number of undocumented residents who are more likely to live in poverty, have no health insurance and little education, and are reluctant to participate in US Census surveys due to fear of deportation. In recent years, the U.S./Mexico border has seen an increase of homelessness, domestic violence, and an increased number of individuals with substance use disorder in many areas in the region [37]. These factors all contribute to a population that lives in the shadows, is unable to receive government help with healthcare, and because of fears of the government and deportation, often do not get the healthcare assistance they need.
As noted above, about 30% of the population is uninsured. Of the 70% that are insured, 29.1% are on an employee health care plan, 27.4% are on Medicaid, 7.51% on Medicare, 7.17% on non-group plans, and 1.23% are on military Veterans Affairs plans [38]. There is still a large gap between the insured and uninsured, contributing to unfavorable health outcomes. The need to improve health care coverage in this area has been an ongoing challenge. According to the Texas Medical Association, the uninsured are a diverse group; some cannot afford private insurance, while others can afford it but choose not to purchase it.
Current health care coverage options in the RGV are private insurance, government health care coverage such as Medicaid, children's health insurance plan (CHIP), and Medicare. Governmental health care coverage programs require individuals to meet requirements to receive coverage, as well as frequent re-enrollments. This process can be tedious and difficult for individuals to understand, leading to eligible individuals not being enrolled.
For the uninsured, there are different ways in which they might seek healthcare. In Cameron County, several Federally Qualified Health Centers (FQHCs), community-based health care providers that receive funds from the HRSA Health Center Program, provide low-cost to no-cost primary care services to qualifying individuals. Cameron County Public Health has a clinic in Harlingen, San Benito, Brownsville, and Port Isabel. Cameron County Public Health also offers an Indigent Health Program for county residents at or below 21% of the federal poverty line, with resources less than $2000, who do not qualify for other state or federal healthcare programs such as Medicaid. Indigent Health Care provides medical screenings, annual physical examinations, inpatient and outpatient hospital visits, and laboratory and radiology services [39]. These disparities are exacerbated in residents living in the most rural areas, where they have a greater risk of disease and substance abuse. The secondary effect of living without clean water and the overall lack of infrastructure for these residents puts them at a higher risk of asthma and environmental allergies [40].
Provider Landscape
The Health Resources and Services Administration (HRSA) declared Cameron County to be a Health Professional Shortage Area (HPSA) and a Medically Underserved Area (MUA). MUAs are geographic areas and populations with a lack of access to primary care services. HPSAs are designations that indicate health care provider shortages in primary care, dental health, or mental health. Cameron County has a Local Health Department, Cameron County Public Health (CCPH), and the City of Brownsville (COB) also has a health program to assist in addressing and serving community health needs. In addition to local government resources, other institutions in the community provide healthcare and assist in educating patients about healthy lifestyle choices. The Rio Grande State Center in Harlingen is funded by the State of Texas and offers both in-house adult psychiatry services and outpatient services, including primary care, women's health, and prescription assistance. The non-profit Proyecto Juan Diego targets low-income families through their educational programs and family activities with the aim to create community members who are self-sufficient and prioritize preventive health services. The project provides educational programs, family activities, advocacy, and preventative health services. During the focus group, a University of Texas Health Science Center researcher stated, "the people that we work with, they're not used to the healthcare system working for them." Figure 1 shows the locations of healthcare providers in Brownsville.
Mexico offers affordable and easy access to health care services such as prescription medications and services from dentists and doctors. Health Care Utilization, a study conducted on border counties, revealed that individuals who went to Mexico for health care needs had similar characteristics: the majority were Hispanic and spoke Spanish, and were already familiar with health care services in Mexico; about half were low income or at the poverty level, 47% were uninsured, and 10% expressed dissatisfaction with health care services in the U.S. side [41]. For many, healthcare in Mexico is not preventative but rather is obtained when illness strikes; as a result, this option does little to prevent or mitigate underlying risk factors of chronic diseases such as diabetes and hypertension.
Intellectual Resources
In Cameron County, there are nine postsecondary academic institutions; seven are private and three are public schools. The University of Texas Rio Grande Valley (UTRGV) is a four-year, public university which offers 293 academic programs. UTRGV is the largest postsecondary institution in the RGV, with undergraduate and graduate programs.
UTHealth Rio Grande Valley School of Medicine began enrollment of students after accreditation in 2015, and now currently enrolls over 200 medical students along with over 200 medical residents in 16 accredited residency programs including family medicine, internal medicine, obstetrics and gynecology, and psychiatry.
The University of Texas Health Science Center (UT Health) at Houston has a satellite campus in Brownsville for the School of Public Health and the School of Biomedical Informatics, offering graduate and doctoral level programs to dozens of students annually.
Texas Southmost College (TSC) in Brownsville is a public junior college that offers the first two years of education towards a bachelor's degree, as well as certificate programs, associate degrees, and technical education. Texas State Technical College (TSTC) in Harlingen is a public college that offers 171 programs.
Provider Assessment
The provider survey had 33 respondents, including clinicians and executives, with 26 coming from physician practices. Figure 2 shows a breakdown of the distinct roles of the 33 respondents. Key findings of the survey include:
1. The primary use for telemedicine at this time is primary care, followed by e-prescribing.
2. Physicians do not have the technology, financial means, or staff to provide a comprehensive system for telemedicine.
3. Due to language and literacy barriers, many patients are not able to use a web-based system of telemedicine.
4. Many patients do not have a computer and their broadband is limited.
5. Patients prefer to see their provider in person but would use the telephone to communicate healthcare issues with their physician.
6. For both the provider and the patient, the use of a phone without video provides the most reliable tool to access remote care, but its utility for telemedicine is very limited.
Physical Barriers to Telemedicine
The key barriers include a lack of knowledge of how to implement an effective telemedicine program. Other concerns include a lack of technology to support telemedicine, as well as staffing and workflow concerns. A lack of funding and of a clear understanding of the reimbursement policies for telemedicine are also listed as barriers.
All the providers believe that a significant barrier to a successful telemedicine program is that patients do not have adequate resources, both financial and technical, to use telemedicine services. Figure 3 shows a breakdown of the technology-based barriers to telemedicine based on the provider survey. This was confirmed through patient focus groups. Although providers felt that they had an adequate system to educate patients about their telemedicine offerings, few of the patients we spoke to knew their providers had this service.
to telemedicine based on the provider survey. This was confirmed through patient focus groups. Although providers felt that they had an adequate system to educate patients about telemedicine offering, few of the patients we spoke to know their providers had this service.
Other concerns include a lack of knowledge of the appropriate billing codes for different types of telemedicine visits, which is needed so that visits are reimbursed appropriately and clear guidelines and standards are met.
Physical Drivers of Telemedicine
When asked if telemedicine can help patients manage their health, 47% of the providers strongly agreed with this statement, particularly as they believe that the key reason patients had difficulty accessing care is lack of transportation. Providers also mentioned that their patients desired after-hours medical care, which can be provided by telemedicine.
For many providers, telemedicine was not utilized prior to the pandemic. There were concerns about how to implement the technology, how it would impact workflow, and how their patients would be able to utilize the system. One physician stated, "Telemedicine would streamline the process, and physicians could see more patients." There was consensus among all those we met that using telemedicine minimizes COVID-19 risk to healthcare workers and patients. The providers can screen patients remotely rather than having them visit the practice or hospital and deliver care for those who do not need medical intervention or can receive care at home. Providers are also able to proactively communicate with their patients and use telemedicine for after-hours access. Another physician stated, "Telemedicine is fabulous for a screening tool to determine if a clinic visit is needed." Other benefits include having access to additional providers, including specialty providers. Su Clinica has only 3 physicians on staff but can access other providers remotely to increase access to care and provide additional services such as specialty care, behavioral health, patient education, and pediatric care while improving work efficiency and helping to meet clinical outcomes.
Community Partners' Experience with Telehealth
Key takeaways from the discussion were that while telehealth is an option for employees in most of the represented companies, workflows are ambiguous for patient buy-in, staffing is limited, and there are questions about funding moving forward.
"We recently had our annual health fair, and our insurance company contracted someone from San Antonio at another clinic . . . we set up the computers, we set up everything for them [employees], we told them just sit here and wait until the nurse connects with you. But otherwise, if we would have asked, just connect here, they [employees] wouldn't have done it." -Industry, CEO Of the organizations present, the following provide health care that includes access to telehealth for their employees: City of Brownsville, University of Texas School Health Science Center at Houston, Brownsville Independent School District, and a large local Industry. A representative of the Brownsville Fire Department stated that even though they have the technology, the city lacks both connectivity and the platform to run telehealth.
While those present are community leaders in the area, a prevailing theme across the focus group participants is that most of their employees are not using the telehealth services that they indeed have access to. In the case of the large Industry employer, a private sector company, they have been offering telehealth to their employees for several years, but most did not start to use it until they were forced to during the COVID-19 pandemic. The University of Texas Health Science Center at Houston transitioned to a virtual telehealth model during the COVID-19 pandemic to consult with over five thousand patients in their Chronic Disease Management Program. Of the nine thousand, nearly 25% have no access to the internet at home or on a smartphone/device. Those that do (75%) needed assistance to connect to the internet.
Community Partners' Perception of the Relevance of Telehealth
Key takeaways on the relevance of telehealth are that telehealth is convenient; it can increase capacity for patient access, and could increase access to specialty care, specifically mental health services.
"Our students, most of the kids, I would say at least 70% or 80%, the closest to healthcare they have is contact with the nurses on campus. So that's what telehealth could help with."
-Brownsville Independent School District (BISD), Administrator
A recurring theme across the focus group participants is the need to understand the health care context of Brownsville and the South Texas region. As a region with high rates of uninsured residents, access to healthcare is limited for many patients.
"The biggest benefit is having the right care for the right patient, at the right time."
-Brownsville EMS, Chief
The community health clinics, which see a high volume of uninsured residents, have long wait lists and lack the capacity to serve all those in need. If a telehealth model were introduced in Brownsville, it would allow health care providers to increase capacity and reach a wider pool of patients. Increased access and more frequent doctor visits could have potential long-term benefits to decrease health disparities in the region.
Community Partners' Perception of Obstacles for Telehealth
Key obstacles for telehealth in Brownsville noted by the focus group participants were the language barrier for Spanish speaking population, trust issues which might limit buy-in for patients, the need to educate users on the value of the service, cultural stigmas, and the lack of resources to implement the technology.
To increase access to telehealth it is important that any service adapts to the needs of the local population. Starting with language, Spanish-only households face an increased burden in navigating virtual platforms available in English only. In addition to that, there is the obstacle of limited technological skills among the older population, and thus there is a need to address how to provide basic computer skills to this group through some type of telehealth educational plan. Among the focus group participants, a common theme was that potential users were skeptical of the value of telehealth. For example, for the large Industry employer, even though the company offers their employees an incentive of reduced health insurance premiums for using their insurance telehealth plan, employees remain hesitant.
"Even though it's a privilege to have all these things [telehealth], people do not give them the importance that it has." -Industry, CEO A common thread among the focus group participants was the possible attribution of cultural stigmas of healthcare in general. To address this limitation, the Brownsville Wellness Coalition suggested the use of peer support to increase buy-in. This could be achieved through community leaders or local ambassadors willing to champion a campaign to increase buy-in.
"If we find some leaders of the community itself and if that person can relate to that person too . . . when it comes from your employer or your boss, it kind of feels like it's direct, you know, it's forced. Pero si la comadre te invita, y que vamos . . . it's always that way. That's also a kind of incentive."
-Brownsville Wellness Coalition, Director
From the public sector side, entities such as Cameron County and the City of Brownsville expressed concerns about limited internet capacity. Government employees lack bandwidth and often struggle to find the resources to provide telehealth services to a wider segment of the population. In 2019, Brownsville EMS began a federal pilot program called ET3 (emergency triage, treat, and transport). The primary purpose of this program is to reduce the number of people that use the emergency room as their primary source of health care. One alternative that the program outlines is to provide an alternative destination such as a 24 h urgent care site, which Brownsville currently lacks. A second option is telehealth, but there is no plan currently in place to implement this for the city.
Within the private sector, a shift to telehealth might see pushback from hospitals and private practices as this might imply a loss of revenue due to the standard billing practices for in-person consultations versus virtual. A challenge will be to increase doctor buy-in from the private sector for the hesitant.
Patient Drivers to Telemedicine
Of the 13 patient focus group participants, 3 had graduated high school, 2 spoke both English and Spanish, 8 had no insurance, and 11 had smartphones. The patient focus groups provided a wide range of responses to telemedicine, from a patient not knowing what telemedicine is to another patient having a great experience. Most of their telemedicine experiences occurred during the pandemic, and the clinical sessions were conducted over the phone, not on a computer using video.
For patients with limited access to transportation, telemedicine provides an opportunity for them to connect with their providers by overcoming a key barrier to their accessing healthcare. Other patients would use the option of a telemedicine session, but ultimately prefer to see their provider in person.
Other comments from the patient focus groups include:
• Younger patients are more tech-savvy than older patients and thus more likely to utilize and benefit from telemedicine.
• Everyone has a smartphone, but they often use only the most basic features.
• Telemedicine can provide greater access to specialists out-of-town.
• Diabetic patients like the possibility of providing their blood sugar levels to their provider remotely.
• Many patients would be willing to try telemedicine.
• Patients are happy with the program overall at Proyecto Juan Diego.
• The patients were more enthusiastic about a hybrid model, providing a balance between telemedicine and in-person visits.
"It is a great option for follow up as it saves time just to be able to let the doctor know your child is doing better." -Community patient
"Telemedicine is probably ok for pediatrics, but it is a lot more important to go in person or go for more serious occasions." -Community patient
Patient Barriers to Telemedicine
Some patients cannot use telemedicine as they are cash only and cannot be billed to insurance for the provider services. Figure 4 displays the most common responses provided as to why patients have difficulty accessing care.
The technical barriers include the fact that many of the focus group participants had no computer or tablet, either lacked internet or, if they did have it, found it very slow, and did not feel confident learning about technology. Poor cellphone coverage resulting in dropped calls was a challenge for some, and having more than one person online is difficult with limited broadband. Some of the patients preferred using a phone over video as they felt video invaded their privacy. Other barriers include illiteracy in both English and Spanish, so that for some, navigating a website would be difficult if not impossible.
"The signal in the Valley is not very good, so you lose connection, if you want a better cell connection, you will have to pay way more and it is not always beneficial." -Community patient
Discussion: Lessons from an Interprofessional Approach to Developing a Community Assessment for Telehealth
Brownsville, a community with high health risk indicators, is a place that could benefit from the long-term implications of a regional telehealth strategy. Through the interprofessional study, we were able to glean insights into the nuances of implementing such a strategy, both from the perspective of providers and of the community at large. The infrastructure assessment demonstrated that there is a diversity of health care resources across the region, but there is a need to further connect these through community partnerships. Studies show that characteristics of the built environment can be determinants of the level of adoption of telehealth, since living in certain types of environments may favor performing activities from a distance [42][43][44]. Links between the built environment and telehealth might in turn influence how cities are built or adapted, and whether and how residents travel to access healthcare resources [44][45][46]. Overall, our research findings reinforce those of previous studies which illustrate that telehealth has the potential to increase access to continuous care in rural areas and to increase access for patients who lack transportation to health care facilities [20,47]. Studies show that individuals living in rural areas, racial and ethnic minority groups, and the elderly face higher rates of transportation barriers to care, leading to poorer health outcomes and worsening of chronic conditions [48].
Nevertheless, while interventions involving telehealth technology show promise in promoting health care engagement in communities lacking health infrastructure [21], a key finding across our various stakeholder groups was the need to address context-specific barriers to buy-in. Similar to other studies, assistance with technology gaps would be key to a successful deployment of a regional telehealth plan [49]. The study shows that both patients and providers see a complex interplay of patient-level barriers to access, such as individual interest and technology access, in addition to macro-level barriers to access, such as software access, funding, and personnel [49,50].
The community assessment gave providers an opportunity to strategize and outline priorities in developing a regional telehealth strategy. Providers were therefore asked to list their wish list for the plan. Common responses included:
• Additional support and education for diabetic patients.
• Flexible work shifts: several mentioned using a Hybrid Model (e.g., one visit in person followed by a telemedicine visit).
• Provide resources to address literacy and basic computer skills.
• Add behavioral health to telemedicine due to low numbers of behavioral health providers.
• Ability to take vitals, labs through telemedicine.
"Telemedicine provides seamless communication with providers regardless of where they are." -physician The focus group participants [see Figure 5] also highlighted several key considerations when developing a regional telehealth plan: there is a need to integrate measures of accountability, regional approaches must engage with rural colonias, a hybrid option would make the system more robust, and citizen representation is essential for equitable engagement. Research suggests that culturally tailored interventions can lead to enhanced treatment engagement and improved treatment effectiveness [51,52]. What some might acceptable is contextualized and interlinked with prevailing social and cultural norms, therefore understanding and designing for such norms would therefore be critical to a successful plan implementation [53] Healthcare 2022, 10, x 14 of 18 therefore understanding and designing for such norms would therefore be critical to a successful plan implementation [53] "Our healthcare system is so uncoordinated that it [a regional telehealth plan] could help coordinate our healthcare system and people could have their information consistent." -University of Texas Health Science Center in Houston, Researcher "A regional plan is a good idea, but if it's being set up to benefit the many . . . We just have to make sure that regional really means 'region that benefits everyone' . . . make sure we have the same goal . . . I think anytime you don't want to regionalize, we're hurting ourselves because the region itself is a powerful voice . . . so we need to make sure that our voice is loud enough to sit on those tables and those conversations, city, county and so forth and saying, 'Hey, we really need mental health'. That [our] voice is always a part of it." -City of Brownsville Public Health Department, Director Generally, all participants agreed that developing a regional telehealth strategy or plan is a good idea. A key benefit outlined was that a regional plan could increase access to healthcare to a wider range of people across the RGV. There is an opportunity to increase outreach into areas of the valley with some of the most marginalized groups such as rural residents in colonias. The City of Brownsville could collaborate with other municipalities and regional governing bodies to define what scale of regionalism is appropriate in defining a regional telehealth plan. To ensure that a regional telehealth plan is successful, recognizing the need for a hybrid approach is critical. Rural residents need access to general consultations but identifying strategies to integrate lab visits and physical exams would be necessary. The role of the promotoras, the healthcare workforce that serve as a bridge between provider and patient, could be further leveraged as a conduit between the patients and new technology. Training the existing workforce on telemedicine utilization may also motivate the population to try new technologies. Indeed, promotoras have been the catalysts in many public health initiatives where new technologies were successfully implemented in technology-naïve population [54,55].
Participants emphasized the need to ensure accountability for any third-party agency or company that might operate a regional telehealth plan. Some expressed concerns about a doctor-provider-driven plan, which might marginalize representation from the public sector or the community. To address this concern, two recommendations were made. First, contractors must be held accountable to a minimum threshold of quality measures. Second, these measures would need to be defined by a local governing board. A governing board could be configured with citizen representation. Two successful models currently operating in Brownsville that the city could replicate are the boards for the Brownsville Housing Authority and the Proyecto Juan Diego.
Conclusions
This study applied an interprofessional lens to explore the development of a telehealth plan for Brownsville, Texas. The collaboration between experts in the fields of urban planning, health technology policy, technology implementation, and telemedicine allowed for a more holistic approach to the research design. This study directly informed the design of a regional telehealth plan. We distilled and highlighted the internal and external forces that impact the community and outlined potential health information technology project implementation strategies. These strategies applied a tiered approach to model implementation based on infrastructure and human capacity. The intersection of needs would not have been so readily identifiable had it not been for the interprofessional approach to the research. Furthermore, these approaches would go on to inform policy recommendations for a pilot study to implement a telehealth plan in the region.
Our project aimed at addressing concerns in the literature which suggest that a crucial time for the evaluation of a telehealth plan is during the conceptualization and design phase, not after implementation [56]. To address this concern, our team was able to leverage the capacity and expertise of the project partners by allowing each researcher to apply research methodologies that their fields deemed appropriate for the evaluation of the various components of the study. By doing so, we captured the needs of the region through the infrastructure assessment, and were then able to triangulate those findings with those extracted from the provider surveys and the in-depth discussions held with the key stakeholder and patient focus groups. The interprofessional community-based participatory research (CBPR) design allowed our team to bring together local knowledge with that of trained experts to advance the research efforts [23]. Further engagement with the community would be needed to make the process more robust; this would require additional touchpoints or feedback loops to continue to engage the community in the expansion or development of any city-driven plans.
Institutional Review Board Statement: Ethical review and approval were waived for this study due to the nature of the study and the populations we were engaging.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable. | 2022-12-14T16:13:21.924Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "6574facf56537a73daf2c9700da6b3bda2d65496",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/10/12/2509/pdf?version=1670759801",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01c1f7045d3d9502e2b4a63a55b187631306a90d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261695985 | pes2o/s2orc | v3-fos-license | Structural Modification of the Natural Product Valerenic Acid Tunes RXR Homodimer Agonism
Retinoid X receptors (RXR) are ligand‐sensing transcription factors with a unique role in nuclear receptor signaling as universal heterodimer partners. RXR modulation holds potential in cancer, neurodegeneration and metabolic diseases but adverse effects of RXR activation and lack of selective modulators prevent further exploration as therapeutic target. The natural product valerenic acid has been discovered as RXR agonist with unprecedented preference for RXR subtype and homodimer activation. To capture structural determinants of this activity profile and identify potential for optimization, we have studied effects of structural modification of the natural product on RXR modulation and identified an analogue with enhanced RXR homodimer agonism.
Introduction
Ligand-activated transcription factors, termed nuclear receptors (NRs), act as sensors for multiple endogenous metabolites and signaling molecules and regulate gene expression in response to these ligand stimuli. [1] NRs hence enable pharmacological control of gene expression rendering them as attractive drug targets. [1] Among the 48 human NRs, the three highly conserved retinoid X receptors (RXRs, NR2B1-3) [2,3] have particular importance as they represent the universal heterodimer partners for other NRs. [4,5] Therefore, RXRs participate in multiple NR dependent regulatory systems and are involved in a vast number of physiological processes. [11] The natural product valerenic acid (2) has emerged from a virtual screening campaign as a new type of RXR modulator with pronounced preference for RXR homodimer (EC50 = 7 μM) and RXRβ activation (RXRα: EC50 = 27 μM, 9-fold activation; RXRβ: EC50 = 5.2 μM, 69-fold activation; RXRγ: EC50 = 43 μM, 4-fold activation), demonstrating that functionally selective and subtype-preferential RXR ligands can be obtained [12] as a potential avenue to pharmacological RXR modulation with reduced adverse effects. [13] Here we evaluated the effects of structural modifications on the hexahydroindene motif of 2 on RXR agonism. We observed a steep SAR in terms of RXRβ-preference but identified a valerenic acid derivative (7) with enhanced agonism on the RXR homodimer.
Results and Discussion
The RXR ligand binding site constitutes an L-shaped hydrophobic tunnel that narrows towards its polar end, which is defined by an arginine residue (Arg387 in RXRβ) forming a strong ionic contact with most RXR agonists as typically exclusive polar interaction. [3,6] Docking of valerenic acid (2) interestingly suggested binding close to the activation function at the hydrophobic end of the RXR ligand binding sites in all RXR subtypes (Figure 1). This result was obtained with AutoDock Vina, [14] which we recently found well-suitable for RXR ligand docking, [13] and reproduced by the Molecular Operating Environment (MOE) [15] docking algorithm with rigid receptor and induced fit. The predicted binding mode indicated that mainly the hydrophobic hexahydroindene motif mediated binding of 2 to RXR and RXR activation. Therefore, we evaluated the impact of modifications in this two-ring scaffold on RXR
agonism to obtain preliminary insights into the structureactivity relationship of valerenic acid (2) as RXR agonist.
In addition to valerenic acid (2), Valeriana officinalis contains the close analogues acetoxy- (3) and hydroxyvalerenic acid (4). In vitro profiling of these natural products in reporter gene assays (Table 1) revealed no effect on RXR activity up to 200 μM concentration, suggesting that modifications in 1-position of the indane skeleton of 2 were detrimental. This observation, however, aligned with the predicted binding mode of 2 in which the 1-position of the indane is buried in a hydrophobic cavity with no space available to accommodate additional substituents.
As 3 and 4 failed to modulate RXR, we next centered our attention on modifications on the opposite side of the indane scaffold. As no previous SAR knowledge was available for 2 as RXR ligand, we took (economic) synthetic accessibility into consideration for analogue design and focused in this study on hydroxylated derivatives of 2, which were accessible via the synthesis strategy developed by Ramharter and Mulzer [16,17] as intermediates or by using alternative starting materials.
The valerenic acid derivatives 5-10 were prepared according to Scheme 1 following the published route. [16,17] As a first step, 2-bromoprop-1-ene (11) or bromoethene (12) was treated with n-BuLi and subsequently reacted with cyclopent-2-en-1-one (13) to obtain the cyclopentenols 14 and 15 after workup with TFA. In the interest of an economic synthesis for rapid SAR exploration we skipped the enantiomeric resolution of 14 and 15 but directly treated the dienes with the dienophile 16 in presence of MgBr2 · Et2O to obtain the key lactones 17 and 18. In vitro profiling in Gal4-hybrid reporter gene assays (Table 2) revealed reduced RXR agonism of 5-10 on all RXR subtypes compared to 2. Compound 5, with a hydroxy substituent replacing the 3-methyl group of 2 and a 7,8-double bond, retained weak RXR agonism, while 6, additionally lacking the 7-methyl group, was inactive, thus indicating importance of the 7-methyl motif for interaction with RXR. Interestingly, enhanced RXR agonism was detected for 7, comprising the 3-hydroxy and 7-methyl groups but lacking the side chain methyl substituent. The saturated analogue 8 of 5 exhibited similarly weak RXR agonism as 5, and the 3-oxo derivatives with α-methyl acrylic acid (9) or propanoic acid (10) side chain revealed no detectable activity on RXR.
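As a brief aside not drawn from the original work: potency and efficacy values of the kind quoted throughout (EC50 and maximal fold-activation) are typically extracted by fitting reporter-gene dose-response data to a Hill-type model. The minimal sketch below illustrates such a fit; the concentration and fold-activation values are invented placeholders, not data from this study.

```python
# Illustrative sketch only (placeholder numbers, not data from this study):
# fit a Hill-type dose-response model to reporter-gene fold-activation data
# to extract EC50 and maximal fold-activation.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    # Four-parameter logistic model: response as a function of concentration c.
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # concentrations in uM (placeholders)
fold = np.array([1.0, 1.1, 1.6, 3.2, 6.5, 9.0, 9.8])       # fold-activation (placeholders)

popt, pcov = curve_fit(hill, conc, fold, p0=[1.0, 10.0, 5.0, 1.0])
bottom, top, ec50, n = popt
print(f"EC50 = {ec50:.1f} uM, max fold-activation = {top:.1f}, Hill slope = {n:.2f}")
```

The fitted top parameter corresponds to the maximal fold-activation, while ec50 gives the half-maximal effective concentration.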
RXRs can act as various dimeric forms with other nuclear receptors, mediating their widespread roles in health and disease. While the Gal4-RXR hybrid assays are very useful to reveal activity on the different RXR subtypes, this system cannot capture the potentially different effects on RXR dimers. Hence, we determined the activity of the natural product 2 and the descendants 5-10 on the human full length RXR homodimer and heterodimers with retinoic acid receptor (RAR), liver X receptor (LXR) and farnesoid X receptor (FXR). Valerenic acid (2) activated the RXR homodimer with low activation efficacy but with preference over all studied heterodimers (Table 2, Figure 2a). The analogues 6, 9 and 10 showing no activity on the Gal4-RXR subtypes were also inactive on the homodimer, while the active derivatives 5, 7 and 8 exhibited consistently higher potency on the homodimer than on the hybrid receptors. Among them, 7 emerged with similar low micromolar potency as 2 but significantly increased RXR homodimer activation efficacy.
Isothermal titration calorimetry (ITC) orthogonally confirmed binding of 7 to all RXR subtypes with low micromolar affinity (Table 3, Supporting Information Figure 1). This observation of consistent affinity and potency for homodimer activation but lower potency on the Gal4-hybrid receptors may suggest different molecular determinants for activation of RXR as homodimer or other mono-/oligomeric forms, [18] but this hypothesis requires further structural evaluation.
Overall, our preliminary observations on the SAR of 2 as RXR ligand indicated that larger modifications on the hexahydroindene scaffold or introduction of more polar hydroxy substituents were not favored for activity on the RXR subtypes. Additionally, comparison of 5-8 suggested importance of the 7-methyl group for RXR activation by this scaffold, as it is contained in all active derivatives 5, 7 and 8 but lacking in the inactive analogue 6.
Preference for the RXR homodimer and selectivity over related lipid-activated nuclear receptors (Figure 2) distinguish 7 from the widely used RXR reference agonist bexarotene (1). 1 is a potent activator of the RXR homodimer and the RXR:RAR heterodimer with even higher efficacy on the heterodimer (Figure 2). The clinical anticancer effect of bexarotene (1) has been ascribed to RXR-mediated apoptosis induction, but whether the molecular mechanism of this activity involves RXR homodimer or RXR:RAR heterodimer activation is debated. [19,20] When we compared the effects of 7 and 1 on cancer cell proliferation (Figure 2d), we detected no effect of 7 on proliferation of colorectal (HT-29) and breast (MCF7) cancer cells even at high concentrations, suggesting that RXR homodimer activation is insufficient for the antiproliferative effects of rexinoids like 1, which thus rather require heterodimer activation. Selective RXR homodimer activation may have unprecedented biological effects and open new therapeutic opportunities of RXR modulation. The homodimer preference of 7 further highlights the potential of the valerenic acid scaffold for the development of a novel type of RXR modulators.
Conclusions
The natural product valerenic acid (2) exhibits an appealing selective RXR modulator profile but has limited potency and weak homodimer activation efficacy, disqualifying it as a tool. Structural modification of 2 revealed a steep structure-activity relationship and differences for Gal4-RXR and RXR homodimer activation.
Figure 1. (a) Chemical structures of RXR agonists bexarotene (1) and valerenic acid (2). (b) Binding of 2 to RXR. 2 (blue) was predicted to bind to the hydrophobic region of the RXR ligand binding site close to the activation function in helix 12 with no contact to Arg387 but forming a polar interaction to Asn377. RXRβ (PDB ID 7a78) [3] is shown as example.
Table 1. Activities of the natural products 3 and 4 on RXR; 2 for comparison. EC50 [μM] (max. fold activation)[a]
Table 3. Binding affinities of 7 to the RXR LBDs determined by ITC.
The valerenic acid derivative 7 emerged as improved selective RXR homodimer agonist and may serve as an early tool for in vitro studies. Our results highlight the potential of valerenic acid (2) for further optimization towards selective RXR modulators to open new therapeutic opportunities via fine-tuned RXR activation. | 2023-09-13T06:17:07.497Z | 2023-09-12T00:00:00.000 | {
"year": 2023,
"sha1": "6bec33a6d391bc657b865b5d1c8fea4321530a74",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cmdc.202300404",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "ede9927782221bcaa1c8e264a9ea6aa5d4d91824",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234090739 | pes2o/s2orc | v3-fos-license | Generalized uncertainty principle and its implications on geometric phases in quantum mechanics
We study the implications of the generalized uncertainty principle (GUP) with a minimal measurable length on some quantum mechanical interferometry phenomena, such as the Aharonov–Bohm, Aharonov–Casher, COW and Sagnac effects. By resorting to a modified Schrödinger equation, we evaluate the lowest-order correction to the phase shift of the interference pattern within two different GUP frameworks: the first one is characterized by the redefinition of the physical momentum only, and the other is a Lorentz covariant GUP which also predicts non-commutativity of spacetime. The obtained results allow us to fix upper bounds on the GUP deformation parameters which may be tested through future high-precision interferometry experiments.
Introduction
Phase factors play a significant role in quantum mechanics (QM). Broadly speaking, they can be classified into different classes depending on their physical origin and features [1]. Among these, dynamic and geometric phases are certainly the most common examples one faces when working in the realm of quantum theory. As suggested by their names, while the former take into account the time evolution of a system, the latter (which are the focus of the present analysis) are influenced by the change of the n-tuple of parameters R(t) = (R_1(t), . . . , R_n(t)) appearing in the Hamiltonian. In the space spanned by R, as t grows the system traces a path, and the geometric phase only depends on this path, regardless of how long the system takes to go from the starting to the arrival point [1]. This result was firstly achieved by Berry in the context of adiabatic transformations [2] and then generalized by Aharonov and Anandan [3] to the case of cyclic quantum evolutions.
One of the most eloquent manifestations of geometric phases occurs in the Aharonov-Bohm (AB) effect [4], which predicts that an electrically charged particle is affected by an electromagnetic potential, despite being confined to a region in which both the magnetic and electric fields are vanishing. The first evidence of such a phenomenon was found one year after its theoretical prediction [5] and confirmed with a higher degree of precision in subsequent laboratory tests [6]. As a matter of fact, we mention that a dual effect was also discovered for neutral particles with a non-vanishing magnetic moment (Aharonov-Casher (AC) effect [7]), for the case of a gravitational field instead of the electromagnetic one (Colella, Overhauser and Werner (COW) effect) [8,9] and in the presence of two pulses of light sent in opposite directions around a rotating ring interferometer (Sagnac effect [10]).
In the standard analysis of AB, AC and Sagnac phenomena, gravity effects are usually neglected, as they give contributions below current experimental sensitivity. In spite of these technical aspects, their study may be non-trivial at the theoretical level, since it allows for a direct investigation of the influence of gravity on quantum mechanical systems. In the absence of a consistent theory which describes the quantum and gravity worlds on the same footing, this represents an important step toward the understanding of how such a unified framework should appear when applied to well-known QM phenomena.
As usually done in the literature, a natural way to embed gravitational effects in QM is by generalizing the Heisenberg uncertainty relation so as to account for the emergence of a minimal uncertainty in position at Planck scale [11], which thus appears as a gravity-induced UV correction 1. In the seminal papers on the generalized uncertainty principle (GUP), deformations of the uncertainty relations stem from the attempt of explaining the divergences appearing in quantum field theory (QFT) without invoking an ad hoc cut-off in the momentum space [12,13]. In this context, considerations from string theory [14-17] and gedanken experiments on micro-black holes [18] have converged to the following proposal for the (one-dimensional) non-relativistic GUP:
$$\sigma_X\,\sigma_{P_x}\;\geq\;\frac{\hbar}{2}\left[1+\beta\,\frac{\ell_p^2}{\hbar^2}\,f\!\left(\sigma_{P_x}^2\right)\right],\qquad(1)$$
where σ_X, σ_{P_x} are the uncertainties on position and momentum operators, respectively, β is the dimensionless deformation parameter (which is usually assumed to be of order one in the most common quantum gravity formulations) and ℓ_p denotes the Planck length. For various choices of f(σ²_{P_x}), Eq. (1) finds applications in a number of contexts, ranging from black-hole physics [18-28], to non-commutative geometry [29-31] and QFT [32-39] (for an overview, see Ref. [40]). Clearly, in all of these scenarios, the standard QM results are recovered for β ℓ_p² f(σ²_{P_x})/ħ² ≪ 1. Starting from the outlined picture, in this work we analyze the effects induced by modifications of the commutation relations on the Aharonov-Bohm, Aharonov-Casher, COW and Sagnac phase shifts within two different GUP frameworks. The first one was used in Ref. [11] to reveal the universality of the quantum gravity influence on almost any system with a well-defined Hamiltonian and is characterized by the redefinition of the physical (high-energy) momentum only. On the other hand, the second model arises from a relativistic covariant generalization of Eq. (1) and also predicts the non-commutativity of spacetime coordinates [41,42]. Apart from understanding how UV gravity effects manifest themselves in well-established QM interferometry phenomena, the obtained GUP-corrected expressions allow us to impose bounds on the deformation parameters that might be tested experimentally in the future.
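As an illustrative aside not contained in the original text: for the simplest choice f(σ²_{P_x}) = σ²_{P_x}, the saturated form of the relation above implies a minimal position uncertainty of order √β ℓ_p, which can be checked symbolically.

```python
# Symbolic check (not from the paper): with f(sigma_P^2) = sigma_P^2, the
# saturated relation sigma_X = hbar/(2 sigma_P) + beta*l_p^2*sigma_P/(2 hbar)
# is minimized at sigma_P = hbar/(sqrt(beta)*l_p), giving sigma_X_min = sqrt(beta)*l_p.
import sympy as sp

hbar, lp, beta, sP = sp.symbols("hbar ell_p beta sigma_P", positive=True)
sX = hbar / (2 * sP) + beta * lp**2 * sP / (2 * hbar)
sP_min = sp.solve(sp.diff(sX, sP), sP)[0]       # momentum uncertainty at the minimum
sX_min = sp.simplify(sX.subs(sP, sP_min))       # minimal position uncertainty
print(sP_min, sX_min)                           # hbar/(sqrt(beta)*ell_p), sqrt(beta)*ell_p
```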
The paper is organized as follows: in Sect. 2, we set the stage to discuss GUP effects on QM geometric phases. Section 3 is devoted to the study of the Aharonov-Bohm experiment; in particular, we review the standard derivation of the AB phase shift and generalize the outcome to the GUP framework. The same analysis is performed for the Aharonov-Casher, COW and Sagnac effects in Sect. 4. A summary of the results and a discussion about future perspectives are presented in Sect. 5.
Geometric phases in quantum mechanics with a minimal length
In what follows, we consider two different generalizations of the Heisenberg uncertainty principle. In order to figure out how GUP corrections affect QM geometric phases, we first write down the modified Schrödinger equation and then apply it to a set of interferometry experiments.
First framework
Let us start by showing how to deal with the perturbation induced by a weak external potential V to the Schrödinger equation in the presence of a minimal position uncertainty [43,44]. In this regard, we note that, for mirror symmetric states (i.e., states with ⟨P̂⟩ = 0), the uncertainty relation (1) with f(σ²_{P_x}) = σ²_{P_x} follows directly from the deformed commutator
$$[\hat{X},\hat{P}]=i\hbar\left(1+\beta\,\frac{\ell_p^2}{\hbar^2}\,\hat{P}^2\right).\qquad(2)$$
For our purposes, we need to generalize the above relation to three dimensions. Assuming rotational isotropy, the most general deformation reads [46,47]
$$[\hat{X}_j,\hat{P}_k]=i\hbar\left[\delta_{jk}+\beta\,\frac{\ell_p^2}{\hbar^2}\,\hat{P}^2\,\delta_{jk}+\beta'\,\frac{\ell_p^2}{\hbar^2}\,\hat{P}_j\hat{P}_k\right],\qquad(3)$$
to the lowest order in the positive dimensionless parameters β and β' and with j, k = {1, 2, 3}. We shall consider the particular case β' = 2β: this is a preferred choice, since it does not affect the usual hypothesis of commutativity of coordinates [11,46,47]. A possible way to realize the above algebra is to define the physical (high-energy) operators X̂ and P̂ as
$$\hat{X}_j=\hat{x}_j\,,\qquad \hat{P}_j=\hat{p}_j\left(1+\beta\,\frac{\ell_p^2}{\hbar^2}\,\hat{p}^2\right),\qquad(4)$$
where p̂² = Σ_{k=1}^{3} p̂_k p̂_k and we have denoted by x̂ and p̂ the auxiliary (low-energy) position and momentum operators satisfying the canonical commutator [x̂_j, p̂_k] = iħ δ_jk. Clearly, since p̂_k has the standard representation p̂_k = −iħ ∂/∂x_k, we have
$$\hat{P}_k=-i\hbar\left(1-\beta\,\ell_p^2\,\nabla^2\right)\frac{\partial}{\partial x_k}\,.\qquad(5)$$
Now, by use of Eq. (4), the Schrödinger equation for a particle of mass m is modified with the addition of a fourth-order derivative term, namely [48]
$$-\frac{\hbar^2}{2m}\nabla^2\psi+\frac{\beta\,\ell_p^2\,\hbar^2}{m}\nabla^4\psi+V\psi=i\hbar\,\frac{\partial\psi}{\partial t}\,.\qquad(6)$$
As usual, for a free particle (i.e., V = 0), we can speculate the solution to be of the form
$$\psi(t,\mathbf{x})=\exp\!\left[\frac{i}{\hbar}\left(\mathbf{p}_0\cdot\mathbf{x}-E_0\,t\right)\right],\qquad(7)$$
with E_0 and p_0 being the free energy and momentum, respectively. Substitution of Eq. (7) into (6) leads to the following modified dispersion relation:
$$E_0=E^{(QM)}+E_\beta\,,\qquad(8)$$
where the quantum mechanical kinetic energy E^(QM) and the GUP-induced correction E_β are defined as
$$E^{(QM)}=\frac{p_0^2}{2m}\,,\qquad E_\beta=\frac{\beta\,\ell_p^2\,p_0^4}{m\,\hbar^2}\,,\qquad(9)$$
and we have used the notation p_0 ≡ |p_0|. Clearly, in the limit √β ℓ_p p_0/ħ → 0, the standard energy-momentum relation for a free particle is recovered.
Now, since we want to study the case of a stationary phase shifter, we suppose that the perturbation induced by the external potential V modifies the solution (7) in such a way that the energy spectrum remains unaffected, i.e., E = E_0, but the wave function changes according to 2 [43,44,49]
$$\psi(t,\mathbf{x})=e^{\,i\,\xi(\mathbf{x})}\exp\!\left[\frac{i}{\hbar}\left(\mathbf{p}_0\cdot\mathbf{x}-E_0\,t\right)\right],\qquad(10)$$
with ξ to be determined. Following Refs. [43,44], we require ψ(t, x) to satisfy the semiclassical condition (11), so that higher-order derivatives of the function ξ can be safely neglected. By using this approximation and plugging Eq. (10) into (6), we are led to
$$E=E^{(QM)}+E_\beta+\frac{\hbar}{m}\,\mathbf{p}_0\cdot\nabla\xi\left(1+\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^2}\right)+V.\qquad(12)$$
However, since E = E_0, the above equation can be rearranged as
$$\frac{\hbar}{m}\,\mathbf{p}_0\cdot\nabla\xi\left(1+\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^2}\right)=-V,\qquad(13)$$
where we have used the dispersion relation (8). If we denote the distance measured along the direction of p_0 by s, we have
$$\frac{d\xi}{ds}=-\frac{m\,V}{\hbar\,p_0}\left(1-\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^2}\right).\qquad(14)$$
Therefore, by observing that p_0/m = ds/dt and separating out the correction induced by the GUP, we obtain
$$\Delta g=\Delta g^{(QM)}+\Delta g_\beta\,,\qquad(15)$$
where
$$\Delta g^{(QM)}=-\frac{1}{\hbar}\oint V\,dt\,,\qquad \Delta g_\beta=\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^3}\oint V\,dt\,,\qquad(16)$$
and the integration has to be performed on a closed path.
At this point, it is more convenient to cast Eq. (15) in terms of the variation of the momentum δp due to the external potential. As remarked above, a stationary phase shifter is represented by a potential V which changes the momentum due to the energy conservation [43,44], i.e.,
$$E_0=\frac{p^2}{2m}+\frac{\beta\,\ell_p^2\,p^4}{m\,\hbar^2}+V\,,\qquad \mathbf{p}=\mathbf{p}_0+\delta\mathbf{p}\,.\qquad(17)$$
Thus, to the leading order in δp, we have
$$V=-\frac{\mathbf{p}_0\cdot\delta\mathbf{p}}{m}\left(1+\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^2}\right),\qquad(18)$$
which implies 3
$$V\,dt=-\left(1+\frac{4\beta\,\ell_p^2\,p_0^2}{\hbar^2}\right)\delta\mathbf{p}\cdot d\mathbf{s}\,.\qquad(19)$$
Now, by replacing Eq. (19) into (16) and keeping up to O(β), we obtain
$$\Delta g=\Delta g^{(QM)}+O(\beta^2)\,,\qquad(20)$$
where 4
$$\Delta g^{(QM)}=\frac{1}{\hbar}\oint\delta\mathbf{p}\cdot d\mathbf{s}\qquad(21)$$
is the quantum mechanical phase shift of the interference pattern. From Eq. (20), it follows that the deformation (3) with β' = 2β has no effect at all on interferometry phenomena, unless the QM phase shift explicitly depends on the momentum of the test particle. In this case, indeed, the GUP enters the result via the redefinition (4) of the physical momentum. We stress that the same GUP independence of the QM phase shift is peripherally discussed in Ref. [11] for the Aharonov-Bohm effect only. Here, such a result has been derived via a different approach and for a wider class of interferometry examples.
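As a quick cross-check, not contained in the original text and based on the leading-order factors reconstructed above, one can verify symbolically that the O(β) contributions cancel between Eqs. (16) and (19); with x ≡ (ℓ_p p_0/ħ)², the product of the two correction factors has no linear term in β.

```python
# Symbolic cross-check (based on the factors reconstructed above, not taken
# from the original text): the correction factors multiply to 1 + O(beta^2),
# so no O(beta) shift survives in the closed-path phase.
import sympy as sp

beta, x = sp.symbols("beta x", positive=True)
product = sp.expand((1 - 4 * beta * x) * (1 + 4 * beta * x))
print(product)                          # 1 - 16*beta**2*x**2
print(sp.series(product, beta, 0, 2))   # 1 + O(beta**2)
```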
Second framework
For the following analysis, we are mainly inspired by Refs. [41,42], where a relativistic covariant generalization of the usual GUP is formulated. In particular, by using the Minkowski metric with the mostly positive signature η_μν = {−, +, +, +} (μ, ν = {0, 1, 2, 3}) and denoting by m_p = ħ/(c ℓ_p) the Planck mass, we consider the deformed commutator of Eq. (22), where ε, α and χ are positive dimensionless parameters. Notice that, by restricting to the spatial components and expressing the Planck mass in terms of the Planck length, the above commutator can be recast in the form of Eq. (23), where we have explicitly written the product P̂^ρ P̂_ρ in terms of the energy and three-momentum operators, respectively. It is now straightforward to show that, in the non-relativistic limit c → ∞, the obtained GUP exactly mimics Eq. (3), provided that we identify β = ε − α and β' = χ + 2ε.
Following Ref. [42], we shall henceforth assume that χ = 0 (in order not to break the isotropy of spacetime) and ε = α (which leads to an unmodified Poincaré algebra). It is worth observing that, with such a setting, it is no longer feasible to map the first GUP framework into the second one even in the non-relativistic limit, 5 since for the latter we now have β = 0 and β' = 2ε. To corroborate this, one can show that the deformed algebra (22) leads to a non-commutative spacetime, [X̂^μ, X̂^ν] ≠ 0, contrary to the first GUP scenario. Consequently, both the physical position and momentum operators must now be rewritten according to Eq. (24) [42]. Next, to compare the two GUP frameworks introduced above, let us consider the deformed Klein-Gordon (KG) equation (25) and perform the standard expansion according to which the rest mass is the dominant contribution. In so doing, an easy way to embed the GUP corrections is to solve Eq. (25) with respect to the low-energy momentum p̂_μ. To the leading order in α, we get Eq. (26) [42], from which it arises that GUP effects amount to an effective reparameterization of the mass, as encoded in Eq. (27). As discussed in Ref. [42], by using this procedure one discards two solutions of the fourth-order KG equation, which, however, would introduce very small corrections and can thus be neglected.
In order to derive the Schrödinger equation with GUP corrections, let us now expand Eq. (26) to the fourth order in p̂, obtaining Eq. (28), which consists of the rest mass, the relativistic kinetic energy and GUP corrections. Here, ∇ must be intended as a derivative acting on the auxiliary variable x. Therefore, although the reparameterization of the mass does not modify the KG equation, in this case GUP corrections affect the kinetic energy, as well as the related relativistic term [42]. In the next section, we will show that such corrections give rise to potentially measurable effects in interferometry experiments.
At this stage, it should be noted that even though Eq. (28) admits a solution formally similar to Eqs. (20)-(21), the effects of the GUP (22) significantly differ from the ones induced by the deformation (3). Indeed, due to Eq. (24), whenever the resulting phase shift depends on the mass and/or the physical length of the system, we have to implement the mass reparameterization (27) along with the substitution of Eq. (29) [42], which follows from the fact that Eq. (28) is written in terms of the low-energy position operator rather than the high-energy one (here, L is the effective physical dimension of the system, as opposed to the auxiliary one). Clearly, this prescription is absent within the first GUP framework (see Eq. (4)).
Before turning to the discussion of concrete applications, we observe that the different predictions of the two GUP models presented above are not attributable to the relativistic character of Eq. (22). Indeed, for the interferometry phenomena we shall consider below, one can show that the GUP relativistic corrections in Eq. (28) do not enter at all the calculation of the phase shift. Alternatively, one can make a more straightforward comparison by considering the non-relativistic limit c → ∞ of Eq. (28) and omitting the rest energy term. To prove this, we start from rewriting Eq. (28) in the form of Eq. (30). Here, we have used Eq. (27) and we have considered only the leading term in α m²/m_p². Clearly, in the absence of an external potential (i.e., V = 0), the free energy reads as in Eq. (31), where all the information about the relativistic GUP is encoded in E_α, as also argued in Ref. [42]. Now, starting from Eqs. (30) and (31) and retracing all the steps performed for the stationary phase shifter in the first GUP framework, one arrives at Eq. (32), where we have used the same notation of Eq. (10). By requiring energy conservation as in Eq. (17), one can show that the same relations as in Eqs. (20) and (21) are obtained. Therefore, it follows that the different predictions between the first and the second GUP frameworks are due to the spatial non-commutativity arising in the latter, which in turn is a consequence of the particular choice for β and β'.
Fig. 1 Here, we are assuming to look at the experimental setup from above. Charged particles are emitted from the source on the left side of the apparatus. After being split into two components, the beam coherently recombines on the screen on the right side, giving rise to an interference pattern (blue line) which is shifted by the factor Δg^(QM) when the magnetic field B is turned on (red line).
Aharonov-Bohm effect
The Aharonov-Bohm effect shows how a quantum system made up of charged particles can be affected by the electromagnetic potentials even when both the electric and magnetic fields are vanishing in the region where particles propagate [4]. For the sake of simplicity, here we will deal only with the magnetic AB effect. We will perform the full reasoning for this example only, since for the other phenomena the considerations turn out to be very similar.
Let us consider a beam of charged particles with mass m and charge q separated into two components by a beam splitter (see Fig. 1). An infinitely long solenoid of radius d is located between the two beams, which travel along different paths and coherently recombine on a screen, giving rise to the interference pattern. Clearly, although this setup is characterized by a non-vanishing value of the magnetic field B only inside the solenoid (r < d), the magnetic vector potential A is nonzero even for r > d. By resorting to the Coulomb gauge ∇ · A = 0, its expression outside the solenoid reads
$$\mathbf{A}=\frac{B\,d^2}{2r}\,\hat{\varphi}\,,\qquad r\geq d\,,\qquad(33)$$
where B = ∇ × A, B = |B| and φ̂ is the azimuthal vector related to the angle ϕ (see figure). Without loss of generality, we assume that the apparatus lies in the equatorial plane (i.e., the polar angle θ is chosen to be θ = π/2).
By performing the minimal coupling procedure of the low-energy momentum p with the potential A, the Schrödinger equation in the above framework takes the well-known form of Eq. (34). 6 In the language of Sect. 2, it is clear that the shift induced by the potential A on the momentum of the particles is nothing but δp = −qA. Therefore, by using Eq. (21), one can readily derive the QM phase shift acquired by the two beams when recombining on the screen, which is
$$\Delta g^{(QM)}=g_1-g_2=-\frac{q}{\hbar}\oint\mathbf{A}\cdot d\mathbf{s}=-\frac{q\,B\,\pi d^2}{\hbar}\,,\qquad(35)$$
where
$$g_I=-\frac{q}{\hbar}\int_I\mathbf{A}\cdot d\mathbf{s}\qquad(36)$$
and the index I = 1, 2 refers to the two paths in Fig. 1. Now, since the QM phase shift does not depend on the momentum of the test particle, it follows that the first GUP framework does not affect the standard Aharonov-Bohm prediction (see the discussion at the end of Sect. 2.1), consistently with the result of Ref. [11].
Vice versa, in order to derive the corrections induced by the GUP (22), let us observe that Eq. (35) depends on the area enclosed by the paths of the two beams. Following the prescription of Sect. 2.2, we then employ Eq. (29) to read off the physical dimension D of the radius of the solenoid, obtaining the corrected phase shift of Eq. (37), where we have denoted by Δg_α the GUP correction. Therefore, gravity effects in the guise of the deformed commutator (22) contribute to further shift the AB interference pattern, as shown in Fig. 2. Clearly, as long as the quantity α m²/m_p² is negligible (that is, for masses far away from the Planck scale), the additional term goes to zero and the standard formula for AB interferometry measurements is recovered [43,44]. Now, the obtained α-dependent expression of Δg allows us to infer an upper bound on the GUP parameter on the basis of simple experimental considerations. To the best of our knowledge, indeed, the AB phase shift is measured in current interferometry experiments with an error of about 11% [6,54]. Since GUP gravity effects are well below current experimental sensitivity, we can obtain a bound on the parameter α by fitting the ratio Δg_α/Δg^(QM) into the accuracy bound of the experiments used to test the AB effect. In so doing, we obtain the constraint of Eq. (38). If we consider electrons as test particles, we get the value reported in Eq. (39), which is very close to the bound derived in Ref. [42] within different frameworks. To explore the possibility of finding more stringent constraints on the GUP parameters, let us now extend the previous analysis to other well-understood QM interferometry effects, such as the Aharonov-Casher, COW and Sagnac effects.
Aharonov-Casher effect
The Aharonov-Casher effect [7] is the analogue of the AB effect for neutral particles. Specifically, it arises whenever a neutral particle with non-vanishing magnetic moment μ travels around a charged wire. In this framework, the geometric phase turns out to be non-vanishing, since the electric field E generated by the wire induces a variation of the particle momentum according to Eq. (40) [7,54], from which we derive δp = μ × E. If the wire is sufficiently long, the electric field in cylindrical coordinates can be approximately written in the radial form of Eq. (41), where λ is the linear charge density of the wire, whereas the particle is displaced in such a way that μ = μ ẑ. By resorting to Eq. (21), one can show that the phase shift for a test particle moving around the line charge is given by Eq. (42). Clearly, similarly to the Aharonov-Bohm effect, Δg^(QM) will be insensitive to the first GUP model, since it does not depend on the momentum of the particle.
Conversely, the corrected phase shift within the second GUP framework takes the form of Eq. (43), where we have used Eq. (29) to make the dependence of the linear charge density on the physical length of the wire explicit. Following the same reasoning as in Sect. 3, we can now extract a constraint on the deformation parameter in relation to the precision with which laboratory tests of the AC effect are carried out. According to the available data [54,55], the most accurate measurements of the AC phase shift are characterized by an error of about 24%. If we consider neutrons as test particles, we get the bound of Eq. (44), which improves by several orders of magnitude on the previous bound.
COW effect
In 1974, Overhauser and Colella proposed to detect the QM phase shift caused by the interaction of particles with the classical Earth's Newtonian potential V = mφ by devising a specific laboratory test [8]. The idea was to perform neutron interferometry between coherently split beams traveling at different heights and, thus, with different velocities. A year after its theoretical prediction, such an effect was experimentally verified [9], providing one of the first examples of how gravity appears in the realm of quantum theory. With reference to the setting depicted above, it is possible to derive the COW phase shift as a function of the surface S enclosed by the arms of the interferometer and the gap δp = p_0 − p_u between the momenta of the lower and upper beams. On the basis of straightforward considerations on the energy conservation, we obtain the expression of Eq. (45) [56], which explicitly depends on the momentum p_0 of the test particle. As a consequence, in this case both the GUPs (3) and (22) will induce non-trivial corrections on the QM phase shift. In particular, for the first deformation, GUP effects can be estimated by expressing the low-energy momentum p_0 in terms of the physical one P through Eq. (4). To the leading order in β, we obtain the corrected phase shift of Eq. (46). Now, since the COW experiment deals with neutrons, the available data are the same as the ones considered for the AC effect, the only difference being the estimated error on the measurements of the phase shift, which is of the order of 1% [54]. Then, for typical velocity v_n ≈ 10³ m/s of neutrons involved in interferometry tests, by estimating the bound as seen in Eq. (38) we get the following constraint on β:
$$\beta\lesssim10^{47}\,,\qquad(47)$$
which is of the same order as other constraints derived from both gravitational and condensed matter experiments [30,57].
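The order of magnitude of this constraint can be reproduced with a back-of-the-envelope estimate not contained in the original text: requiring the relative GUP correction β (ℓ_p p_0/ħ)² to the COW phase to stay below the quoted ~1% accuracy, for neutrons with v ≈ 10³ m/s, gives β ≲ 10^47, up to numerical factors of order unity.

```python
# Back-of-the-envelope estimate (not from the paper): impose
# beta * (l_p * p0 / hbar)^2 < ~1% for neutrons with v ~ 1e3 m/s.
hbar = 1.055e-34   # J s
l_p = 1.616e-35    # m (Planck length)
m_n = 1.675e-27    # kg (neutron mass)
v_n = 1.0e3        # m/s (typical neutron velocity in interferometry tests)
accuracy = 0.01    # ~1% relative error on the COW phase shift

p0 = m_n * v_n
print(f"beta <~ {accuracy / (l_p * p0 / hbar) ** 2:.1e}")   # ~1e47
```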
Let us now turn our attention to the second GUP framework. By taking into account Eq. (24) and the redefinitions (27)-(29) of the physical mass and spatial dimensions of the system, we are led to where Σ is the rescaled surface enclosed by the interferometer. From this equation, it follows that which is the most stringent bound on this parameter to the best of our knowledge.
Sagnac effect
Soon after the discovery by Colella, Overhauser and Werner, Page observed that the rotation of the Earth could induce corrections to the phase shift elicited by the Earth's Newtonian potential of the same order as the COW term [10]. Since a similar effect had previously been analyzed by Sagnac in the context of the interferometry between light signals around a rotating ring, this phenomenon is commonly regarded as the counterpart of the Sagnac effect for matter waves. Following Refs. [43,44,54], it can be shown that the variation of the momentum between two beams propagating in opposite directions along the arms of a rotating interferometer is where ω is the angular velocity of the apparatus. In a setup in which the angular motion traces a circle, we can set ω = ωẑ in cylindrical coordinates. Accordingly, the result of the integral in Eq. (21) gives with S being the surface enclosed by the path. By implementing the prescriptions (27)–(29), straightforward calculations lead to the following GUP-corrected phase shift: where we have denoted by Σ the rescaled area of the surface bounded by the interferometer, as before. In Ref. [58], it is possible to find the technical specifications of an experiment aimed at detecting the Sagnac effect by use of an electron biprism interferometer rotating on a turntable. By observing that the experimental error for this test is about 30% [54], we infer for α the bound α ≲ 10^44.
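For reference, the uncorrected matter-wave Sagnac phase underlying this discussion is commonly quoted as (standard result, supplied here only because the corresponding display equation was lost in extraction)

\[
\Delta\phi_{\rm Sagnac} \;=\; \frac{2m}{\hbar}\,\boldsymbol{\omega}\cdot\mathbf{S},
\]

with S the area vector of the surface enclosed by the interferometer paths.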
In the next section, we summarize the constraints on the GUP parameters derived in the previous examples and discuss how they can in principle be improved.
Conclusions and discussion
We have analyzed the effects of two GUPs with a minimal uncertainty in position on some well-known interferometry phenomena, such as the Aharonov-Bohm, Aharonov-Casher, COW and Sagnac effects. By using a properly modified Schrödinger equation, we have computed the lowest-order correction to the phase shift of the interference pattern. For the first GUP model, which predicts a redefinition of the high-energy momentum alone, we have shown that a non-trivial correction only appears in the COW experiment, as the characteristic QM phase shift explicitly depends on the momentum of the test particle. Conversely, within the second GUP framework, we have found that the non-commutativity of spacetime coordinates along with the reparameterization of both the mass and momentum result in GUP-corrected phase shifts for all the considered phenomena.
As remarked in Ref. [11], these GUP corrections can be interpreted in two complementary ways: from the phenomenological point of view, one can say that they are far too small and, thus, out of reach of current experiments. Notwithstanding this, at the theoretical level their role may be highly non-trivial, since they could pave the way to analyzing how gravity influences quantum mechanical systems when approaching the Planck scale. From the latter perspective, the above analysis provides us with the possibility to predict upper bounds on the GUP deformation parameters. The obtained values are summarized in the following table: We note that although the bound derived on β does not improve the constraints already existing in the literature [30], the one on α is several orders of magnitude smaller than the values found in Ref. [42] in different physical scenarios. Furthermore, the advantage of our approach is that it is based on interferometry measurements, which have become more and more refined in recent years in various contexts and by use of varied techniques [59][60][61][62][63]. This may allow for a direct test of our predictions via future high-precision interferometry experiments. Hopefully, more accurate measurements of the phase shifts should either be able to test these predictions or further improve the above constraints.
Clearly, in order to better understand the scale at which quantum gravity effects should start to become relevant, the GUP framework must be further analyzed to infer the exact value of the deformation parameters. Given that GUP physics is largely heuristic, one should keep any scenario open, including the possibility that these parameters are dynamical functions (rather than constants), as proposed in Ref. [64] to preserve the black hole complementarity principle. Waiting for definitive answers from experiments, more work is inevitably required at the theoretical level in order to search for viable frameworks where distinctive signatures of quantum gravity effects do arise. Interferometry, for instance, may be one of these.
Funding Open Access funding provided by Universitá degli Studi di Salerno.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-05-10T00:03:50.506Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "a7f4ac79fa089e0fb802c57bf7409b6d423e15b9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjp/s13360-021-01161-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "123a19ba9e9c0e652b5809e3bae7c3f389ea634e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218919983 | pes2o/s2orc | v3-fos-license | Specific aspects of the state regulation of the national market inclusive tourism
The article is devoted to the topical issue of inclusive tourism development in Ukraine. According to various sources, the number of tourists with special needs in developed countries averages 20%. Trends in the tourism industry indicate an increase in this type of tourism; therefore, inclusive tourism has significant financial and economic potential. World tourism trends indicate that inclusive tourism shows the highest growth rates. The concept of "tourism for all" has a global character and is declared in the provisions of the international documents of the United Nations. A significant contribution to the development and promotion of inclusive tourism is made by the World Tourism Organization. The article analyzes the experience of Ukraine in the implementation of inclusion projects. The main regulatory documents, governing norms and standards to ensure barrier-free access, are defined. Based on this analysis, proposals for improving the effectiveness of state regulation of inclusive tourism development in Ukraine have been developed, drawing on the best international practices and considering the problematic aspects and prospects of introducing inclusion in Ukraine.
Articulation of issue
The tourism industry is actively developing. The number of outbound tourists exceeds one billion people every year. In 2018, the tourism industry reached a record increase in tourist flows of 12% over the previous year. Today, most countries consider the prospects for the development of tourism not only as a separate sector of the national economy but also as a strategic vector for the development of the national economy as a whole. One of the current concepts of tourism development at the state level is to ensure the inclusiveness of the tourism space, i.e., the availability of tourist services for all. Among the features of the tourism industry, which are typical for the service sector in general, is the need to ensure the safety and comfort of tourists in receiving tourist services, while maintaining the economic balance and the availability of such services for different segments of the population. Therefore, the problem of finding ways to implement the concept of "tourism accessible to all" is relevant and requires further consideration.
The basic tenets of inclusive tourism development at the state level are incorporated in the provisions of the United Nations Development Program; in particular, "tourism accessibility for all" is defined in the Bali Declaration [1,2], where inclusion is defined as one of the priorities of the strategic development of mankind, considering the need to provide mass tourism, including its accessibility for inclusive categories of the population. One of the organizations that actively promotes the barrier-free concept is the World Health Organization, which defines the priorities of infrastructure development to ensure comfort for people with disabilities [3]. The study results of the UN Department of Social and Economic Affairs, which projected the number of potential tourists of social categories up to 2050, are interesting from the point of view of determining the prospects for the development of inclusive tourism across countries [4]. The trend indicates a steady increase in the number of social categories of tourists. The basic tenets of the rights of people with disabilities are combined, within the framework of globalization processes, into a single concept which, among other things, determines the need to ensure the availability of tourism for all categories of the population [1].
Analysis of the latest researches and publications
The works of many scientists are devoted to the issues of ensuring the inclusiveness of infrastructure and a barrier-free environment. In particular, Laurie Dyer and Simon Darcy conducted a study on the profitability of inclusive tourism projects. The accessible tourism concepts were identified as financially attractive because they generate ever-increasing volumes of state and private sector profitability due to the popularization of this type of tourism. The authors have defined the main postulates of financial and economic support for inclusive tourism and the problems hindering its further development [5]. Dimitrios Buhalis, Simon Darcy and Ivor Ambrose have identified promising areas for inclusive tourism development and the world's best practices for creating a barrier-free environment [6]. The most inclusive-oriented are the tourism areas of the United States and Western European countries. Other regions are less equipped for this type of tourism. Shakespeare T. has explored the problematic aspects and prospects of inclusive tourism development and has identified the main aspects of inclusiveness development for developing and underdeveloped countries [7]. Walker N.K.G. and Chen Y. have analyzed the rights of people with disabilities and the problematic aspects within which these rights may be limited, e.g., insufficient legislative support or an insufficient technical or technological component of inclusiveness development in a particular country, region or even at a separate local level [8]. Despite the significant attention of the authors to the problems of developing an inclusive tourist space, the rapid development of tourism and innovative technologies determines the relevance of continuing research on improving inclusive tourism effectiveness.
Purposes and objectives
The paper aims to identify the ways of inclusive tourism development in Ukraine, taking into account the world experience in the implementation of inclusion projects. According to the aim, there are a number of the following tasks: -to analyze trends of the inclusive tourism industry development; -to identify categories of people who may require the creation of an inclusive space; -to analyze the international experience of inclusive tourism development; -to identify the specific aspects of the normative and legal support for the national inclusive tourism development;
Statement of basic materials
Tourist trip involves many aspects related to possible inaccessibility for certain tourists. Tourist "inaccessibility" can be associated with the technical characteristics of accommodation, with the implementation of the transportation of tourists, objects of the restaurant industry, information limitations and many other factors. The World Tourism Organization (WTO) deals with the issues of ensuring the availability of tourist services, and which has an appropriate profile and ensures the development of barrier-free at the global level. For the first time, the issue of the availability of tourist services was considered at the General Assembly of the WTO in 1991 [3]. The growing demand for tourism services has led to the differentiation of persons who use them by age, gender, social and other levels. Due to the increase in the number of travellers, the requirements for ensuring the availability of tourist services began to grow in accordance with the needs of consumers. In 2007, the Convention on the Rights of Persons with Disabilities, the main postulate of which was the determination of equal rights for people, regardless of the presence or absence of a person's disability, became the initial impetus to the development of an inclusive tourist space and ensuring barrier-free access in all sectors of the service provision. The concept of "socially responsible tourism" emerged as a result of the implementation of the Convention on the Rights of Persons with Disabilities and defined the objectives of providing opportunities for comfortable travel for all tourists without exception [1,2,4].
Among the main aspects of the provision of inclusive tourism services are the following priorities [1,3]: -the inclusive infrastructure development; -providing service processes with specially trained personnel who have not only sufficient qualifications but also relevant skills to work with people with limited mobility; -ensuring transport accessibility for people with special needs; -ensuring the availability of inclusive information; -inclusive marketing development.
These key elements of ensuring the inclusiveness of the tourist services provision are important for the further tourism industry development because the trends in the tourist market indicate the urgency of finding ways to develop inclusiveness.
According to statistics [3], more than 1 billion people worldwide (approximately 15% of the population) have a disability. There are also population ageing trends, which depend significantly on the geographical factor of a particular region. While in 2000 the world's population over 60 years of age exceeded 580 million people, this figure has since grown by more than 20%. According to forecasts, by 2050 the share of the population over 60 years old will be 20%, one-fifth of whom will be over 80 years old. This trend is most visible in the countries of North America (USA, Canada) and Western Europe, and to a lesser extent in Asia. Population ageing is associated with the development of the health care system, which increases human longevity, and with changes in the worldview of the urbanized world, in which young people increasingly prefer to have at most two children at a later age, giving priority to career building [9]. These indicators show that inclusive tourism will have sustainable growth in both the near and the strategic perspective. Consequently, the provision of an inclusive tourist space is not simply an ethical norm of modern society but also determines the prospects for increasing profitability from this type of business. Statistics on the number of tourists with disabilities who travel to different countries are presented in Fig. 1. These statistics indicate that the issue of inclusion is relevant and requires the reorientation of the world tourism industry to the "accessibility for all" format in order to realize the social function of tourism and the opportunities for further growth of incomes in the industry [8][9].
It is advisable to identify categories of persons who may need an inclusive infrastructure. These categories include people with disabilities who may not be able to move freely, to hear information or visualize it. The category of inclusiveness includes social tourists, as well as persons of retirement age. Also, an inclusive infrastructure may be needed by pregnant women and children. In the case when a person visits another country and, for example, does not speak the local language, he may be in a situation of limited access to tourist services due to the inability to perceive written information, unlike the cases where infographics are used to explain and provide tourist information [10].
World experience in the development of inclusive tourism shows the following trends. One of the countries with the highest income from inclusive tourism and the largest number of tourists belonging to limited-mobility categories of the population is the United States. The volume of income from inclusive tourism in this country is more than 17 billion dollars. Additionally, it is necessary to take into account the income from the provision of tourist services to the persons who accompany an inclusive tourist. The most popular form of recreation among inclusive tourists in the United States is cruise tourism, which is regarded as one of the safest in the world. This is due to the high technological and technical equipment of tourism facilities. More than 30 electronic resources in the United States are dedicated to the provision of information that can be useful for tourists with special needs [11].
The UK ranks second in terms of revenue from inclusive tourism, with total revenue of 12 billion dollars. The concept of inclusive tourism in this country is integrated into a comprehensive strategy for tourism development. Inclusion is recognized as one of the priorities for the development of the tourism business at the state level. The number of online information resources on inclusive infrastructure in the UK is more than 60, which provides tourists with the maximum amount of information on the provision of inclusive services [5][6].
Australia ranks third in terms of revenue from inclusive tourism. Average revenue is $ 8 billion a year. At the same time, the percentage of people with limited mobility in the structure of the country's population is quite high, which is 20%. At the same time, more than 85% of the Australian population, which has inclusive needs, actively travel without barriers due to the provision of comfortable travel conditions and inclusive space [12].
According to statistics of the tourism development in the European Union countries, the volume of tourist flows of inclusive tourism is growing rapidly. The total number of people who require the creation of conditions for inclusion in the EU is more than 140 million, and revenues from inclusive tourism are more than 780 billion euros annually [6][7].
The specific features of inclusive tourism in the European Union include the following [13][14][15]: -expenses of tourists with special needs are greater than the average expenditures of tourists in this region; -the development of inclusive tourism smooths the seasonality factor, because tourists with special needs prefer recreation not in the "peak season", thereby allowing travel companies to smooth out their own profitability during the year and minimize the risks of a tourist business; -persons with special needs mainly travel within the country, that is, they increase the volume of domestic tourists, which allows keeping the revenues in own budget of the country in which favourable and comfortable conditions for the development of inclusive tourism are created.
One of the peculiarities of the inclusive tourism development in the EU countries is that the barrier-free space is mainly offered in accommodation facilities, represented mainly by large chain hotels, which follow international standards for ensuring the availability of services. Issues related to the development of inclusive infrastructure outside hotels, transport infrastructure and transport accessibility remain problematic. Information support for an inclusive space is provided by the website of the European Network for Accessible Tourism (ENAT), which combined all the necessary information for tourists with special needs, and which can be used when planning a trip or during it. Currently, only 10% of tourism companies offer inclusive tours to their clients, while the demand for them in EU countries, on average, is more than 20% [16]. Tourist inclusive infrastructure is more developed in Western European countries, while in Eastern Europe, the problem is more complicated. Scandinavian countries pay special attention to both inclusive tourism infrastructure and providing opportunities for the development of social tourism in the country.
The development of inclusive tourism in Ukraine is in the initial stages. Although some elements of inclusion are beginning to be actively implemented, however, there are still no comprehensive approaches to systematization mechanisms for the provision of inclusive tourism. This situation is associated with a number of factors, among which are the following [16]: -the limited financial capacity for creation of an inclusive infrastructure, limited government investment in the tourism industry and the lack of government programs to increase the investment attractiveness of the industry, which needs modernization, reconstruction and renewal of the existing tourism infrastructure to ensure its inclusiveness; -information vacuum caused by the lack of inclusive information, which must accompany the tourist, including information that should be provided by state, regional or local authorities, as well as the practical absence of national online and offline resources for inclusive tourism (except for information resources created by international organizations in Ukraine), lack of information on tourist offers of inclusive tourism by travel agents and tour operators; -the need to develop and implement state projects of infrastructure renewal and modernization due to the advent of new technological solutions and innovative barrier-free facilities; -the lack of a regulatory framework at the state level that clearly regulates the provision of limited mobility categories of tourists, as well as appropriate conditions for the availability of tourist services and related services; -the lack of qualified personnel capable of providing competent assistance and services to persons with limited mobility and the absence of state programs to facilitate the training of such specialists.
These problems significantly impede the development of inclusive tourism in Ukraine. However, among the positive developments, it is advisable to note that the Ministry of Regional Development, Building and Housing and Communal Services of Ukraine has introduced new State Construction Standards, amended to the development of an inclusive infrastructure. A big advantage of these standards is their development in conjunction with representatives of public organizations of people with disabilities and the involvement of scientists who can justify the feasibility of introducing these building codes to ensure inclusiveness. Within the "Illusion" festival, which is held in Kharkiv (Ukraine), the conditions for the availability of museum premises for people with disabilities are created. The number of such tourist facilities of an inclusive nature is constantly growing.
To ensure the inclusive space of tourist infrastructure facilities, the authors have developed proposals for assessing the facility according to accessibility characteristics, based on the State Building Standards, and it is proposed to consider the following indicators: -surface analysis of sidewalks (steps, slippery, roughness, etc.); -the presence of ramps; -installation of curb height in accordance with state building codes; -lowering the curb at the exit points at pedestrian crossings, departures from parking lots, etc.; -the arrangement of parking spaces for people with disabilities; -equipping lifting devices for people with disabilities; -the convenient location of buttons in elevators, door handles for accessibility for wheelchair users and small children; -the availability of space for turning wheelchairs in vestibules, toilet rooms; -the setting of automatic door closing system on the time required for the passage of persons with disabilities; -the presence of shockproof glass; -the availability of tactile information system; -the presence of visual information system; -the availability of interior items; -the presence of light, tactile and graphic symbols; -special conditions for the design and equipment of toilet and shower rooms; -the availability of information that may be needed during consumption of tourist services (such as Info-graphic representation of information, its audible and tactile duplication).
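If these indicators were to be operationalized, one simple way is a weighted checklist score per facility. The sketch below is purely illustrative: the indicator names and weights are hypothetical placeholders, not the official criteria of the State Building Standards or of the authors' proposal.

```python
# Hypothetical, illustrative scoring of a facility against accessibility indicators
# of the kind proposed above. Indicator names and weights are placeholders.
ACCESSIBILITY_INDICATORS = {
    "ramps_present": 1.0,
    "curb_height_compliant": 1.0,
    "disabled_parking_spaces": 1.0,
    "lifting_devices": 1.0,
    "wheelchair_turning_space": 1.0,
    "tactile_information_system": 1.0,
    "visual_information_system": 1.0,
    "accessible_sanitary_rooms": 1.0,
}

def accessibility_score(facility_checklist: dict) -> float:
    """Return the share of weighted indicators satisfied by a facility (0..1)."""
    total = sum(ACCESSIBILITY_INDICATORS.values())
    achieved = sum(w for name, w in ACCESSIBILITY_INDICATORS.items()
                   if facility_checklist.get(name, False))
    return achieved / total

# Example: a museum that satisfies five of the eight illustrative indicators.
museum = {"ramps_present": True, "curb_height_compliant": True,
          "disabled_parking_spaces": True, "tactile_information_system": True,
          "visual_information_system": True}
print(f"accessibility score: {accessibility_score(museum):.2f}")
```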
Thus, after analyzing the main trends in the development of inclusive tourism, it can be determined that this development has social, financial and economic aspects. World practice shows that inclusive tourists increasingly travel and spend more money than the average tourist. So, inclusive tourism has significant prospects for further growth, and with state support for the development of tourism infrastructure, it can grow by more than 35% in the next five years. As a result of the study, the article proposes to define a list of indicators, which can serve as a basis for developing criteria for analyzing the provision degree of the inclusive infrastructure of tourist facilities at the state level.
Conclusions
Inclusive tourism is becoming increasingly popular in the world. In developed countries, the share of inclusive tourists is on average 20%. The development of inclusive tourism shows a steady upward trend. The categories of inclusive tourists include persons with disabilities, social categories of tourists, and those tourists who have become inclusive due to a lack of information or the technical features of tourist infrastructure facilities. In Ukraine, the issue of inclusive tourism is still under development and lacks a complex and systemic character. However, an essential step towards increasing the effectiveness of an inclusive infrastructure is the development of new state building codes, which are based on international practices of inclusion. In order to develop proposals for improving the efficiency of ensuring inclusiveness in the field of tourism in Ukraine, indicators have been developed which can form the basis of criteria for evaluating tourist infrastructure facilities in terms of ensuring barrier-free access for various categories of the population. | 2020-05-07T09:16:05.055Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "66e5e9486ac9ed333eb1c31220d48b8d16e6d32d",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/24/e3sconf_tpacee2020_07003.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "352678adb7f7819f4bd2c8d4a8ed032ba2381c18",
"s2fieldsofstudy": [
"Business",
"Law"
],
"extfieldsofstudy": [
"Business"
]
} |
237731824 | pes2o/s2orc | v3-fos-license | Biobased aliphatic polyesters from a spirocyclic dicarboxylate monomer derived from levulinic acid
Ethyl levulinate is readily ketalized with biobased pentaerythritol to form a spirocyclic diester monomer with low GHG emissions to produce a series of fully aliphatic processable polyesters.
Figure S2: The formation of transketalized (Product A), semi-transketalized (Product B) and monoketal ester (Product C) side products and their respective structural analogues produced during the polycondensation of Monomer L and NPG.
Figure S4. 1 H NMR spectrum of Monomer L recorded in CDCl3.
Figure S5. 1 H NMR spectrum of Monomer L recorded in DMSO-d6.
Figure S10. 1 H NMR spectra of Monomer L (as-obtained after isolation) after stirring at different temperatures during 5 h.
Figure S11. 1 H NMR spectra of Monomer L (pre-dried under vacuum) after stirring at different temperatures during 5 h.
Figure S12. 1 H NMR spectrum of crude product from reaction of Monomer L and neopentyl alcohol showing formation of corresponding transesterification product.
Figure S13. 1 H NMR spectrum of crude product from reaction of Monomer L and 1-butanol showing formation of corresponding transesterification product.
Figure S14. Photographs of polyester samples after precipitation.
Figures S15-S16. SEC traces of polyesters in THF recorded by a differential refractive index (dRI) detector (Figure S16: polyesters synthesized by the conventional melt polycondensation method).
Figure S41. 1 H NMR spectra of PCycL sample before and after heating at 200 °C.
Figure S42. Photographs of a solution-cast film of polyester PNeoL, showing the transparency and flexibility.
Figure S43. Photographs of hot-pressed samples of PCycL used in DMA and rheology.
Figure S44. 1 H NMR spectrum of PNeoL sample after time sweep rheology measurement.
Figure S45. 1 H NMR spectrum of PCycL sample after time sweep rheology measurement. | 2021-08-27T16:46:47.818Z | 2021-08-02T00:00:00.000 | {
"year": 2021,
"sha1": "9a7125d664ab4aafaee65a741d228da5c3a1e12e",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/gc/d1gc00724f",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3209d7f7ce717cbf7d29e7c8090e579cd3fc565e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
10928460 | pes2o/s2orc | v3-fos-license | Entanglement dynamics for the double Tavis-Cummings model
A double Tavis-Cummings model (DTCM) is developed to simulate the entanglement dynamics of realistic quantum information processing where two entangled atom-pairs $AB$ and $CD$ are distributed in such a way that atoms $AC$ are embedded in a cavity $a$ while $BD$ are located in another remote cavity $b$. The evolutions of different types of initially shared entanglement of atoms are studied under various initial states of cavity fields. The results obtained in the DTCM are compared with that obtained in the double Jaynes-Cummings model (DJCM) [J. Phys. B \textbf{40}, S45 (2007)] and an interaction strength theory is proposed to explain the parameter domain in which the so-called entanglement sudden death occurs for both the DTCM and DJCM.
Introduction
Entanglement is not only a key concept to distinguish between the quantum and the classical worlds, but has also been viewed as an indispensable resource to perform various intriguing global tasks in quantum computing and quantum information processing [1]. However, a notable characteristic of entanglement is its fragility in practical applications due to unavoidable interaction with the environment. It is therefore of increasing importance to understand entanglement from its dynamical behaviors in realistic systems. As a rule for a global task, entanglement should be shared between different remote parties who participate in the task.
There are cases like teleportation [2], remote state preparation [3], etc., in which each particle of a multipartite entangled state is distributed to a separate location. There are also cases in which the entangled particles should be distributed so that each location contains several particles. For example, in the quantum secret communication protocol between Alice and Bob [4], an ordered set of N Einstein-Podolsky-Rosen (EPR) pairs is to be shared in such a way that Alice and Bob each hold one half of the pairs. That is, at Alice's location there are N particles which interact with one environment while the other N partner-particles at Bob's location collectively interact with another environment. This scenario results in two independent local environments but each of them is common for one half of the N EPR pairs. A natural question arises as to how such particle-environment interactions degrade the originally prepared global entanglement. This question is of fundamental interest because any quantum protocol depends essentially on the quality of the shared entanglement. As a first step to the problem, in this paper, we consider the case of N = 2 with two pairs of entangled two-level atoms AB and CD prepared in one of the two types of Bell-like states, namely, |ψ(0) IJ = cos(α)|10 IJ + sin(α)|01 IJ , and |ϕ(0) IJ = cos(α)|11 IJ + sin(α)|00 IJ , where IJ ∈ {AB, CD} and |0 (|1 ) is the atomic ground (excited) state.
For the simplest case of N = 1, i.e., either state (1) or state (2) is concerned for the initial state of a single atom-pair, the so-called double Jaynes-Cummings model (DJCM) [5][6][7][8][9][10][11][12] has been extensively adopted to study this problem because it yields exact analytical results. In the DJCM, each of two entangled atoms is embedded in an independent cavity and locally interacts with it. The results obtained within the DJCM for the initial empty cavities are that for any value of α state (1) loses its entanglement only at discrete time moments t l = (l + 1/2)π/g with l = 0, 1, 2, ... and g the atom-cavity coupling constant, but for a certain domain of α state (2) may become separable at times smaller than t l and remains unentangled for some duration of time [6]. The latter phenomenon is referred to in the current literatures as entanglement sudden death (ESD) [13], which has been experimentally observed in [14,15]. An entangled state with ESD in evolution is less robust than states without it, since ESD puts a limitation on the application time of entanglement.
Therefore, studying ESD, especially conditions and parameter domains for its occurrence, is important from both theoretical and practical points of view. In Ref. [10] the DJCM is considered again and it is found that if the cavity fields are initially in Fock states with nonzero photon numbers then both atomic states |ψ(0) and |ϕ(0) would suffer from ESD for all values of α. The DJCM was also investigated from other perspectives and it was shown that the entanglement evolution of atoms is closely related to their energy variation [9] and there is a natural entanglement invariant demonstrating the entanglement transfer among all the system's degrees of freedom [7].
For the case of N = 2 involving two pairs of entangled atoms, the situation would become more complex than that of N = 1, because in each local environment there are two atoms simultaneously interacting with it. When there are many atoms interacting resonantly with a single-mode quantized radiation field of one and the same cavity, the exact solution can be obtained by means of the so-called Tavis-Cummings model (TCM) [16]. Such a single TCM was used in Refs. [17] and [18], with the Hamiltonians of the two local atoms-cavity subsystems given by Eqs. (4) and (5), where ω 0 (ω) is the frequency of the atom (cavity field mode), a (a + ) is the annihilation (creation) operator of the field in cavity a, b (b + ) is the annihilation (creation) operator of the field in cavity b, the corresponding rising (lowering) operator describes the transition of atom i, and g is the atom-cavity field coupling constant. Here, we are interested in the resonant case with ω 0 = ω [16]. The initial cavity fields are assumed to be either in the vacuum state, the Fock state with a non-zero photon number or the thermal state. The general thermal field with mean photon number n is a weighted mixture of Fock states whose density operator ρ F can be represented as in Eq. (6), with |n the Fock state of n photons and P n the corresponding photon-number weight. By virtue of the general thermal field defined above, through setting P n = δ nl in Eq. (6), we can also study the vacuum state (l = 0) as well as any Fock states (l > 0) of the fields.
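A small numerical sketch of the cavity-field preparation may be useful here. It assumes the standard thermal (Bose-Einstein) photon-number distribution P_n = n̄^n/(1+n̄)^(n+1), which is the usual form of the weight appearing in Eq. (6); setting P_n = δ_nl instead reproduces the Fock-state cases, as stated in the text.

```python
import numpy as np

def thermal_weights(nbar: float, n_max: int) -> np.ndarray:
    """Truncated photon-number distribution of a thermal field with mean photon
    number nbar: P_n = nbar**n / (1 + nbar)**(n + 1) (standard form, assumed here)."""
    n = np.arange(n_max + 1)
    return nbar**n / (1.0 + nbar)**(n + 1)

def fock_weights(l: int, n_max: int) -> np.ndarray:
    """P_n = delta_{n,l}: recovers the vacuum (l = 0) or a Fock state (l > 0)."""
    p = np.zeros(n_max + 1)
    p[l] = 1.0
    return p

p_th = thermal_weights(nbar=1.0, n_max=30)
print(p_th[:4], "truncated sum =", p_th.sum())  # weights 0.5, 0.25, 0.125, ... for nbar = 1
```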
As for the initial states of atom-pairs AB and CD, we assume both of them to be either in state (1) or state (2). At t = 0 the total state involving the four atoms and two cavities reads where (2). The evolution operator for the local interaction of atoms AC (BD) with cavity a (b) was derived exactly in Ref. [17]. At any time t > 0 the state ρ(0) evolves into which can be represented as Using the analytical expression of U ACa(BDb) (t) in [17] we have for U ACa |ik, m ACa (similarly where the functions X ik,pq (m, τ ) with τ = gt are given in Appendix A for various possible i, k, p, q. These functions satisfy the normalization condition for any i, k, m and τ.
The reduced density matrix ρ ABCD (t) of the atomic subsystem can be obtained by tracing out ρ(t) over the cavity fields, i.e.
The explicit expressions of E c XY (|ik XY XY jl|) are given in Appendix B for various possible i, k, j, l.
Atomic entanglement dynamics
With the formulae derived in the previous section we are now in the position to analyze the entanglement dynamics of any atom-pair. By using Eq. (12) we can readily get the reduced density matrix of any pair of atoms by tracing out ρ ABCD (t) over the degrees of freedom of the remaining atoms. In two-qubit domains, there exist a number of good measures of entanglement such as concurrence [19] and negativity [20]. Although the various entanglement measures may be somewhat different quantitatively [6], they are qualitatively equivalent to each other in the sense that all of them are equal to zero for unentangled states. Here we adopt Wootters' concurrence [19] because of its convenience in definition, normalization and calculation. The concurrence C for any (reduced) density matrix ρ of two qubits is defined as where λ i (λ 1 ≥ λ 2 ≥ λ 3 ≥ λ 4 ) are the eigenvalues of the matrix ζ = ρ(σ y ⊗ σ y )ρ * (σ y ⊗ σ y ), with σ y a Pauli matrix and ρ * the complex conjugation of ρ in the standard basis. For separate states C(ρ) = 0, whereas for maximally entangled states C(ρ) = 1. In particular, if ρ is of the X-form [21], where ̺ IJ kk are real positive and ̺ IJ kl = ̺ IJ lk * are generally complex, then the concurrence (14) simplifies to Since both states (1) and (2) of the atoms take on and preserve the X-form in their evolution, Eq. (16) is very useful throughout this work.
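For readers who want to reproduce the entanglement curves numerically, a direct implementation of Wootters' concurrence is straightforward. The sketch below uses the standard formulation C = max{0, √λ1 − √λ2 − √λ3 − √λ4}, with λi the decreasingly ordered eigenvalues of ζ = ρ(σy⊗σy)ρ*(σy⊗σy); whether the paper's Eq. (14) absorbs the square roots into the λi cannot be verified from the extracted text, so the standard convention is assumed here.

```python
import numpy as np

SIGMA_Y = np.array([[0, -1j], [1j, 0]])

def concurrence(rho: np.ndarray) -> float:
    """Wootters concurrence of a two-qubit density matrix (standard convention assumed)."""
    yy = np.kron(SIGMA_Y, SIGMA_Y)
    zeta = rho @ yy @ rho.conj() @ yy                      # rho (sy x sy) rho^* (sy x sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(zeta).real))    # square roots of eigenvalues
    lam = np.sort(lam)[::-1]                               # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: the Bell-like state cos(a)|10> + sin(a)|01> has concurrence |sin(2a)|.
a = np.pi / 8
psi = np.zeros(4, dtype=complex)
psi[2], psi[1] = np.cos(a), np.sin(a)                      # basis order |00>,|01>,|10>,|11>
rho = np.outer(psi, psi.conj())
print(concurrence(rho), abs(np.sin(2 * a)))                # both ~0.7071
```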
|ψ(0) type initial state for atom-pairs AB and CD
We first consider the case when both the atom-pairs AB and CD are initially prepared in state (1). In accordance with Eq. (12), the reduced density matrix of the atomic subsystem at any time t is ρ ABCD (t), which can be evaluated straightforwardly via the map (13). The reduced density matrices of the atom-pairs then have the X-form, so the corresponding concurrences are determined by Eq. (16). In the following we study the time dependence of these concurrences for the fields in cavities a and b being initially in the vacuum state, the Fock state with a non-zero photon number or the general thermal state, respectively.
Since for initially empty cavities only the atoms that may be populated in state |1 can exchange excitations with the fields, the system-environment interaction can be classified into two regimes, "strong" and "weak" interaction regimes, depending on the relative magnitudes of P ≥ and P < , where P ≥ (P < ) is the probability that N |1 ≥ N c (N |1 < N c ), with N |1 the number of atoms populated in state |1 and N c the number of cavities. In the DTCM considered here and the DJCM considered in [6,7] it is clear that N c = 2. We define the following convention: the strong interaction regime corresponds to P ≥ > P < , while P ≥ ≤ P < implies the weak interaction regime. In the DJCM the total system state of two atoms A, B and two cavities a, b at t = 0 reads as in Eq. (18), whereas in the DTCM the total system state of four atoms A, B, C, D and two cavities a, b at t = 0 reads |ψ(0) AB |ψ(0) CD |00 ab = cos 2 α |110 ACa |000 BDb + cos α sin α |100 ACa |010 BDb + sin α cos α |010 ACa |100 BDb + sin 2 α |000 ACa |110 BDb .
From Eq. (18) it follows that there is always only one atom (namely, either atom A in the first term or atom B in the second term) being in state |1 regardless of the value of α. That is, P < = 1 and P ≥ = 0, resulting in the weak interaction regime in the DJCM for the whole range of α. However, what follows from Eq. (19) is that for any value of α there are always two atoms (namely, either atoms A and C in the first term or atoms A and D in the second term or atoms C and B in the third term or atoms B and D in the fourth term) being in state |1 . That is, P ≥ = 1 and P < = 0, resulting in the strong interaction regime in the DTCM regardless of the value of α. Therefore, it can be said that, when the cavities are initially prepared in the vacuum state, the |ψ(0) type initial state of atoms exhibits ESD in the strong interaction regime (i.e., in the DTCM) but it does not in the weak interaction regime (i.e., in the DJCM), independent of the parameter α. Note that the two atoms located in the same cavity have no direct interactions between them during the entire course of evolution, in accordance with the problem Hamiltonians (4) and (5). However, an effective (indirect) atom-atom interaction is induced for t > 0 thanks to the coupling of both atoms with a common environment.
Such an effective atom-atom interaction could nontrivially affect their global behaviors. In fact, as investigated in Ref. [17], if the initial atoms are prepared either in state |01 or |10 (|11 ), then they always get entangled with each other (remain unentangled) regardless of the nature of the cavity fields. But, if the atomic initial state is |00 , then the field in the vacuum state leaves the atoms unentangled and the field in a Fock state with a nonzero photon number or thermal state can entangle them. Here, in the DTCM, at variance with the situation considered in Ref. [17], at t = 0 the atoms in a cavity, though being independent of each other, are entangled with other atoms in another cavity. That is, we have at t = 0 in cavity a (b) a mixed state ρ AC I (0) =Tr BD ρ ABCD instead of a pure state as in Ref. [17]. Figure 6 plots the concurrence C BD I as functions of gt and α with the initial fields in both cavities containing just one photon. This figure shows that the entanglement dynamics of the atoms is sensitive to α, as it should be. For example, in the region of α ∈ [0, 0.29π] atoms B and D can get entangled, but for α around π/2 no entanglement is generated through the whole evolution. These results are in full agreement with those reported in Ref. [17] where α = 0 (i.e., ρ BD I (0) = |00 BDBD 00|) and α = π/2 (i.e., ρ BD I (0) = |11 BDBD 11|) are concerned. To get more insight into the effect of α on atomic entanglement generation we show in FIG. 7 a 2D plot of C BD I as a function of gt with the initial cavity fields in the Fock states |1, 1 ab for various values of α. When α = 0 (i.e., ρ BD I (0) = |00 BDBD 00|), the entanglement of B and D emerges immediately from t = 0. Nevertheless, when α > 0 the atoms remain unentangled for some initial period of time and suddenly become entangled at some later time. The larger the value of α the longer the delay time of entanglement generation. Such phenomena of delayed entanglement during the time evolution can be called "entanglement sudden birth" (ESB) [22]. The effect of thermal fields on inducing entanglement between atoms B and D is drawn in FIG. 8 with the cavity mean photon numbers m = n = 1, which agrees well with the result in Ref. [17] for α = 0. Since the thermal state is a weighted mixture of Fock states (see Eq. (6)), it is a chaotic state with minimum information and so its effect is generally irregular. In comparison with the case of "corresponding" Fock states |1, 1 ab one sees that the region of α allowing entanglement of atoms is much shrunk and the amount of generated entanglement is very small. The plots of C AC I can be obtained from those of C BD I by making a change α → α + π/2.
|ϕ(0) type initial state for atom-pairs AB and CD
We next consider the case when both atom-pairs AB and CD are initially prepared in state (2). In accordance with Eq. (12), the reduced density matrix of the atomic subsystem at any time t is ρ ABCD (t). In FIG. 9 we plot C AB II (the same for C BD II due to symmetry) versus gt and α for the initially empty cavity fields. It is evident from this figure that ESD occurs, but not in the whole range of α, in clear contrast with the case shown in FIG. 2 when both the atom-pairs AB and CD are initially prepared in state (1). To derive the constraint on α that triggers ESD, let us look at the total system state at t = 0 : |ϕ(0) AB |ϕ(0) CD |00 ab = cos 2 α |110 ACa |110 BDb + cos α sin α |100 ACa |100 BDb + sin α cos α |010 ACa |010 BDb + sin 2 α |000 ACa |000 BDb .
Obviously, the probability that all the four atoms are in state |1 is cos 4 α, the probability that only two atoms (namely, either atoms A and B or atoms C and D) are in state |1 is 2 cos 2 α sin 2 α and the probability that none of the atoms are in state |1 (i.e., all the atoms are in state |0 ) is sin 4 α. That is, P ≥ = cos 4 α + 2 cos 2 α sin 2 α and P < = sin 4 α. As mentioned in the previous subsection, the condition for the occurrence of ESD is that the interaction regime is strong, i.e., P ≥ > P < . So, the values of α for which ESD occurs should satisfy the constraint (22), which reduces to sin 2 α < 1/ √ 2.
As follows from Eq. (23), the probability that the two atoms are in state |1 is cos 2 α and the probability that none of the atoms are in state |1 is sin 2 α. That is, P ≥ = cos 2 α, P < = sin 2 α, and thus the values of α for which the system-environment interaction regime is strong (i.e., ESD occurs) in the DJCM satisfy the constraint (24), i.e., sin 2 α < 1/2. The constraints (22) and (24) imply that the α-parameter domain in which the atoms suffer from ESD is wider in the DTCM than in the DJCM.
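Since the two display equations for these constraints did not survive extraction, it may help to spell out the algebra. Using the probabilities stated above, the strong-interaction condition P ≥ > P < gives

\[
\cos^4\alpha + 2\cos^2\alpha\,\sin^2\alpha > \sin^4\alpha
\;\Longleftrightarrow\; 1 - \sin^4\alpha > \sin^4\alpha
\;\Longleftrightarrow\; \sin^2\alpha < \tfrac{1}{\sqrt{2}}
\]

for the DTCM, and

\[
\cos^2\alpha > \sin^2\alpha \;\Longleftrightarrow\; \sin^2\alpha < \tfrac{1}{2}
\]

for the DJCM, which are precisely the constraints (22) and (24) quoted again in the Conclusion.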
The case for the initial cavity fields being in a Fock state |11 ab is plotted in FIG. 10.
A remarkable feature as compared with the vacuum fields case in FIG. 9 is that here ESD occurs in the whole range of α. Again, the physical reason for this is that in the presence of initial photons all the atoms are in interaction with the cavity fields (i.e., not only atoms in state |1 but also those in state |0 interact with the cavity fields).
In FIG. 11 we plot C AB II as a function of gt for the initial fields in a thermal state with different mean photon numbers for a given value of α. Comparing FIG. 11 with FIG. 5 signals that with relatively small mean photon numbers (e.g., m = n = 0.1) the signature of ESD is less pronounced for the case when the initial atoms are prepared in state (2) than in state (1).
The entanglement generation dynamics of the atomic pairs AC and BD is similar to the case considered in the preceding subsection and thus will not be iterated here.
Conclusion
In conclusion, we have, by means of concurrence, studied the entanglement dynamics of the DTCM motivated by certain realistic quantum information processing. The system is composed of four two-level atoms A, B, C, D and two spatially separated single-mode cavities a and b. For the vacuum fields, the |ψ(0) type initial state of atom-pairs AB and CD displays ESD for the whole value range of the parameter α, which represents the initial entanglement degree of AB and CD. This result is in sharp contrast with the DJCM, for which ESD does not occur at all for any value of α [6,7]. As for the |ϕ(0) type initial state of atom-pairs AB and CD, ESD occurs only for values of α such that sin 2 α < 1/ √ 2, a domain wider than that in the DJCM, where ESD occurs just for α such that sin 2 α < 1/2 [6,7].
Physically, these results (i.e., the domain of α for which ESD occurs) in both the DTCM and DJCM can be explained via the interaction strength theory according to which ESD occurs (does not occur) in the strong (weak) system-environment interaction regime. The interaction regime is identified by the number of atoms that can have interaction with the cavities, which is determined by the relative magnitudes of P ≥ and P < defined in subsection 3.1. Remarkably, the interaction strength theory turns out to apply also for the so-called triple Jaynes-Cummings model [23] for GHZ-like atomic states as well as for the case of multiple dissipative environments with multiqubit GHZ-like atomic states [24,25].
We have shown that the non-vacuum environments of cavities have great effects on the appearance of ESD for atoms. That is, when the cavity fields are initially in the Fock state with a non-zero photon number or the general thermal state, ESD always happens for atom-pairs AB and CD regardless of the entanglement type they are prepared. Moreover, the more photon number in the Fock state or the greater the mean photon number in the thermal state the quicker the entanglement decay rate, i.e., the sooner the time of ESD occurrence. In terms of the interaction strength theory, these properties are explained by the physical fact that in the presence of nonzero (mean) photon number the interaction regime is always strong because all the atoms (i.e., not only those in the excited state as in the case of empty cavities) can interact with the fields. Thus, the actual system-environment interaction strength is now identified by the number of excitation which in these cases is proportional to the total number of both atoms and photons.
We have also studied the creation of entanglement between initially uncorrelated atoms A and C in cavity a (B and D in cavity b). Compared to the case of α = 0 considered in Ref. [17], here we showed that for α ≠ 0 there appears the so-called entanglement sudden birth, i.e., the formation of atomic entanglement does not take place at once as the system evolves but emerges suddenly at some delayed time, which depends on the value of α.
The DTCM presented in this work could be extended to the general multiple case where two groups of multipartite entangled atoms are distributed in such a way that every two atoms from different group are located in the same environment. In this way, we can study not only the pairwise entanglement of atoms between any two nodes (cavities or local environments) via concurrence but also the entanglement of any atomic bipartition by means of negativity.
These studies can reveal the degraded properties of various multipartite entangled state and thus be useful for the large-scale quantum information processing. | 2009-02-14T01:35:15.000Z | 2009-02-14T00:00:00.000 | {
"year": 2009,
"sha1": "9a70386cce175eee89851538bd84460c88aa4b08",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0902.2421",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a70386cce175eee89851538bd84460c88aa4b08",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
28025017 | pes2o/s2orc | v3-fos-license | Synergetic Evaluation of Project Portfolio Configuration Based on Data Envelopment Analysis
Project portfolio configuration (PPC) is an important approach to maintain the sustainable development of enterprises and achieve organizations' strategy. However, the synergetic efficacy of PPC, which determines the degree to which the project's strategic objectives are achieved, is a fuzzy problem and hard to measure. To solve this problem, this paper takes the data envelopment analysis (DEA) as the tool to measure the efficacy of PPC under deterministic conditions. First, a portfolio evaluation index system which takes financial indicators and non-financial indicators into consideration is developed based on a review of the literature; second, an evaluation model based on DEA is built to reduce the number of decision-making units from the perspective of synergetic theory; then, a computational experiment is studied to verify the feasibility of the proposed model. The results of this computational experiment show that this model can effectively narrow the scope of decision-making, improve the decision-making level and provide a reference for deciding the DEA-effective project portfolio decision-making unit. To our knowledge, this study is the first to apply the notion of synergetic efficacy and DEA to the PPC domain. It is hoped that this paper may shed light on further study of PPC and the sustainable development of enterprise competitiveness.
Introduction
Sustainable development is a new management paradigm, one whose principles can be used to improve how practitioners of all levels manage the complexity and dynamics of projects [1][2][3].
Following rapid economic development and growth, multiple projects executed in parallel have become the new norm for corporate business operations in a market of limited resources. The ability to reasonably combine and implement multiple projects simultaneously in order to strike a balance between efficiency and quality has become an increasingly important issue for enterprises adapting to market demand and sharpening their competitive edge. Project Portfolio Management (PPM) refers to methods and patterns that combine different components to achieve strategic objectives under certain constraints [4] (such as limited resource constraints and possible target conflicts between different projects) in order to maximize the effectiveness and profitability of the organization.
One of the most challenging tasks in project portfolio management is PPC, which is a guiding philosophy that aims to implement synergistic configuration and dynamic optimization for project components that ultimately delivers strategic organizational impact [5].PPC is not only affected by various resource distribution constraints and internal conflicts between project objectives [6], but also a combination of factors such as post-benefit correlations and the relevance of the results [7].
Since early 2001, both domestic and overseas scholars have paid more and more effort into the study of PPC, which focused not only on theoretical research but also on a wide range of business practices [8][9][10].Based on a review of the literature, many studies have been reported on the methods of maintaining a high degree of correlation between PPC and strategy, resources allocation across different projects and schedule optimization of the PPC [11], providing a basis for decision making when managers implement portfolio management [12][13][14].Sascha [15] and his fellow scholars constructed a comprehensive conceptual model to ensure the realization of strategic organizational objectives which takes into account factors such as business strategy, the structure of project components, etc; these then further clarified the influence of synergetic effects of the project components on the organizational strategy.From the perspective of the dynamic environment, Petit [16] studied how uncertainties may effect project portfolio management, and then summarized the sources and distribution of uncertainties, thus contributing ideas to the study of uncertainties in project portfolio management [17][18][19].
PPC co-management refers to the methods and modes of synergistic management of different resource components, resource constraints, the mutual influence of multi-project objectives, the synergistic sharing of project resources, etc., in order to achieve benefits that are greater than the sum of those of the individual projects [20]. However, as mentioned above, there are many factors to be considered in project portfolio management, complicated cooperative relations among these factors [21], and difficulties in quantifying synergistic effects. Hence, there are relatively few research results in this area of project portfolio management.
DEA is an effective and objective method for evaluating the production efficiency of decision-making units of the same type [22]. Wang et al. [23] established an integrated approach that combines the grey forecasting model GM and DEA to evaluate the comparative efficiencies of 16 Green Logistics Providers and select the right partner for the sustainable development of an enterprise, which will lead to improved business performance and reduced carbon dioxide (CO2) emissions. From the perspective of the sustainable development of the real estate industry, DEA has been successfully used for predicting a company's bankruptcy and operational efficiency, with emphasis on the levels of several financial indexes and the effect of strict regulating policies [24]. Both domestic and foreign research have documented different forms of DEA evaluation models that have been widely used in various fields of study [25][26][27], which further proves their importance for the sustainable development of enterprises and their effectiveness in analyzing technological and scale efficiency for multiple inputs and multiple outputs. Hence, this paper intends to establish a synergistic evaluation index system and model of PPC to analyze, evaluate and select the best project portfolio. This paper is structured as follows: Section 2 establishes a scientific and rational evaluation index system of PPC which takes both financial and non-financial objectives into consideration, via an extensive literature review. Section 3 evaluates the value of the index and Section 4 proposes a synergetic evaluation model of the PPC based on the DEA. Section 5 verifies the effectiveness and feasibility of the model using a computational experiment. Section 6 draws conclusions.
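Although the specific DEA formulation adopted later in the paper is not reproduced in this excerpt, a minimal sketch of the classical input-oriented CCR envelopment model may clarify how DEA scores decision-making units. This is the textbook CCR model with placeholder data, not necessarily the exact model proposed in Section 4.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m_inputs, n_dmus) input matrix, Y: (s_outputs, n_dmus) output matrix.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector z = [theta, lam_1, ..., lam_n]
    c = np.concatenate(([1.0], np.zeros(n)))
    A_in = np.hstack((-X[:, [j0]], X))            # X @ lam - theta * x0 <= 0
    A_out = np.hstack((np.zeros((s, 1)), -Y))     # -Y @ lam <= -y0
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[:, j0]))
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Placeholder data: 2 inputs (e.g., capital cost, organizational capacity) and
# 1 output (e.g., economic revenue) for 4 candidate project portfolios.
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [3.0, 2.0, 4.0, 5.0]])
Y = np.array([[6.0, 7.0, 6.0, 8.0]])
for j in range(X.shape[1]):
    print(f"portfolio {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```

A portfolio with efficiency 1 is DEA-effective relative to the others; lower values indicate how far its inputs could be proportionally reduced while keeping its outputs.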
Construction of Synergetic Evaluation Index System of the PPC
The success of a project's implementation is affected not only by the amount of capital investment but also by the decision-maker's maturity, the managerial skill-sets within the organization, and the components of the project itself. In return, the organization benefits from financial gains, further development opportunities, and positive social influence. On the basis of a review of the existing literature, this paper puts forward the synergetic evaluation index system of PPC management shown in Fig. 1 [10,11,25-27]. The organization currently has n candidate projects to choose from, denoted as the project set P = {p1, p2, ..., pn}; randomly taking z (2 ≤ z ≤ n) projects from the project set forms a new project portfolio K. The indices and the interactions between them under the synergies are explained as follows.
Capital Cost
Investment funds play the most important role in the implementation of any project. Owing to the synergies between the resources of the projects, the capital cost of the project portfolio is not equal to the sum of the costs of the individual projects. Under the synergistic configuration management of the project portfolio, the capital cost is given by Equation (1), which shows that, under the influence of synergies, project portfolio K demands less aggregate capital than carrying out each project separately. In the formula, the per-project term represents the amount of investment capital for project z as part of portfolio K, the portfolio term represents the total capital cost of portfolio K, and the synergy term (negative in value) represents the synergistic effect on the cost of capital under PPC synergistic management.
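Since the synergy term is the only thing that distinguishes the portfolio cost from a simple sum, a tiny numerical sketch may help. The project names, cost values, and the pairwise form chosen for the synergy term below are illustrative assumptions, not data or notation from the paper.

```python
# Illustrative sketch of Equation (1): the portfolio capital cost equals the
# sum of the individual project costs plus a negative synergy adjustment.
# All numbers and the pairwise structure of the synergy are assumed.
project_costs = {"p1": 2.0, "p3": 1.8, "p5": 2.1}        # per-project capital cost
pairwise_savings = {("p1", "p3"): -0.15,                  # shared suppliers/equipment
                    ("p1", "p5"): -0.10,
                    ("p3", "p5"): -0.20}

synergy = sum(pairwise_savings.values())                  # negative by construction
portfolio_cost = sum(project_costs.values()) + synergy

print(f"sum of individual costs = {sum(project_costs.values()):.2f}")
print(f"synergy term            = {synergy:.2f}")
print(f"portfolio capital cost  = {portfolio_cost:.2f}")
```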
Organizational Capacity
Organizational management is the most direct and effective management model within project management.Organizational capacity (X ) refers to the project manager's management skill which includes management of funds, manpower, and machinery, etc.
Equation (2) shows that, owing to the synergistic effect, project portfolio K requires less aggregate organizational capacity than carrying out each project z separately. In the formula, the per-project term represents the organizational capacity devoted to project z as part of portfolio K; the portfolio term represents the total organizational capacity required by portfolio K; and the synergy term (negative in value) represents the synergistic effect on organizational capacity under the synergistic management of the PPC.
Component Function
While the organization has overall strategic objectives, the objectives of different projects are not always the same; for instance, some projects pursue financial gains while others aim to obtain market share. Component function refers to a project component's contribution to the overall strategic target of the organization.
Equation (3) shows that, owing to the synergistic effect, project portfolio K requires less aggregate component function than carrying out each project separately.
The per-project term represents the component function contributed by project z as part of portfolio K; the portfolio term represents the total component function required for portfolio K; and the synergy term (negative in value) represents the synergistic effect on component function under the synergistic management of the PPC.
Economic revenue
Economic revenue refers to the net present value of the project and is central to organizational development and to the smooth implementation of projects. The basic requirement of the PPC is to achieve higher profits from the synergies of the project portfolio than from the sum of the profits of the individual projects.
Equation (4) shows that, owing to the synergistic effect, the total economic revenue generated by undertaking the projects in the form of project portfolio K increases. The per-project term represents the revenue generated by project z within portfolio K, and the portfolio term refers to the total revenue generated by portfolio K.
The synergy term (positive in value) represents the synergistic effect on economic revenue under the influence of the PPC synergies.
Strategic fit
Strategic management theory can effectively improve the performance of project portfolio management [28]. By building a conceptual model based on the analysis of the interactions among project portfolio, corporate strategy, and business success, Meskendahl [9,15] showed that strategy does affect the project portfolio and business success; corporate strategic fit has become an internationally recognized index for evaluating the effectiveness of a project portfolio [25,26,29].
Equation (5) shows that, owing to the influence of synergies, the strategic fit achieved by undertaking the projects in the form of project portfolio K improves.
The per-project term shows how project z within portfolio K affects organizational strategic fit; the portfolio term refers to the aggregate strategic fit that portfolio K may generate; and the synergy term (positive in value) represents the synergistic effect on strategic fit under PPC synergistic management.
Social Satisfaction
Social satisfaction has a direct influence on the long-term development of the project portfolio and on the corporate social image. Projects that pursue social satisfaction can meet or even exceed customer expectations, which may further improve social recognition for current and future projects. Beyond establishing customer loyalty and raising consumer expectations for future projects, this not only increases economic revenue and benefits strategic fit but also garners the support of government and other organizations. Equation (6) shows that, owing to the influence of synergies, the social satisfaction achieved by undertaking the projects in the form of project portfolio K improves. The per-project term indicates how project z within portfolio K affects social satisfaction; the portfolio term refers to the aggregate social satisfaction that portfolio K may bring; and the synergy term (positive in value) represents the synergistic effect on social satisfaction under the influence of the PPC synergies.
Determination of the Values of the Evaluation Indices
Since the evaluation indices have different characteristics, the data of the six indices do not lie in the same numerical range. Normalization is a simple method that can eliminate the numerical differences between the different evaluation indices. Among data normalization methods such as linear normalization, energy normalization, and component whitening, this paper adopts an improved energy normalization method to process the index values.
Numbers 1 to 6 are assigned, respectively, to capital cost, organizational capacity, component function, economic revenue, strategic fit, and social satisfaction. As mentioned above, project portfolio K contains z projects, and each project has six indices; α denotes the value of index J (1 ≤ J ≤ 6) for a project in portfolio K, and α' stands for the value of index J for portfolio K after normalization. The energy normalization can be calculated by Equation (7) [30], where K has z projects and J is one of the indices. In Equation (7), ∥α∥ is the vector norm of index J, obtained by adding up the index-J values of all z projects in K. As the traditional energy normalization method ignores the actual situation of the corporation and its reasonable expectations of the project, Equation (7) cannot be directly applied to the DEA efficiency evaluation of the project portfolio. Taking the capital cost as an example, according to Equation (7), ∥α∥ represents the total investment of project portfolio K. The traditional energy normalization method ignores the financial affordability of the enterprise; in other words, denoting the enterprise's highest affordable capital investment by a fixed ceiling value, whenever portfolio K's capital investment is greater than that ceiling, the data processing becomes meaningless. Therefore, the synergetic evaluation of the PPC should always consider the organization's actual situation as well as its reasonable expectations of the project.
Determination of the Values of the Quantitative Indices
Capital cost and economic revenue are quantitative indices, and their values for each decision-making unit can be estimated accurately. Their normalized values are obtained from Equations (8)-(9). In Equation (8), the numerator represents the total amount of investment in project portfolio K, the denominator is the enterprise's highest affordable capital investment, and the result is the normalized value of the capital cost. In Equation (9), the numerator refers to the revenue generated by project portfolio K, the denominator is the income criterion, and the result is the normalized value of the economic revenue. In this way, the normalized values of the quantitative indices can be calculated accurately.
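As read here, Equations (8)-(9) scale the quantitative indices by enterprise-level reference quantities (the affordability ceiling for the input, the income criterion for the output) rather than by a vector norm. The ratio form and all numbers in the sketch below are assumptions for illustration.

```python
# Sketch of the improved normalization of the quantitative indices
# (Equations (8)-(9)); reference values are assumed for illustration.
def normalize_capital_cost(total_investment, max_affordable_investment):
    """Equation (8): portfolio investment scaled by the enterprise ceiling."""
    return total_investment / max_affordable_investment

def normalize_revenue(total_revenue, revenue_criterion):
    """Equation (9): portfolio revenue scaled by the income criterion."""
    return total_revenue / revenue_criterion

x_capital = normalize_capital_cost(total_investment=5.2, max_affordable_investment=5.9)
y_revenue = normalize_revenue(total_revenue=7.4, revenue_criterion=9.0)
print(x_capital, y_revenue)   # both lie in (0, 1] for an affordable portfolio
```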
Determination of the Values of the Qualitative Indices
The values of qualitative indices such as organizational capacity, component function, and strategic fit carry a strong degree of uncertainty, and it is difficult to obtain accurate values for them.
To obtain them, peer experts are invited to score the values of the qualitative input and output indices, and to assess the reference ceiling of the qualitative input indices and the optimum value of the qualitative output indices, using the Likert scale method [31,32]. The expert scores are summarized in Table 1, in which ρ represents the number of times that index J received the t-th score, with t = 1, 2, 3, 4, 5, and the corresponding scores take the values 1, 3, 5, 7, or 9 according to the actual situation. Then, following the same structure as Equations (8)-(9), the normalized values of the qualitative indices are calculated from Equations (10)-(11), in which the numerators are the aggregated expert scores of the qualitative input and output indices, the denominators are, respectively, the reference ceiling of the qualitative input indices and the optimum value of the qualitative output indices, and the results are the final normalized values of the qualitative input and output indices. In this way, the quantified and normalized values of the qualitative indices can be calculated.
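The qualitative indices can be handled in the same way once the expert scores are aggregated. The frequency-weighted mean used below is an assumption about how the tallies in Table 1 are collapsed into a single score; Equations (10)-(11) then only require dividing by the ceiling or optimum value.

```python
# Sketch of turning Likert-style expert tallies into a normalized qualitative
# index. The frequency-weighted mean is an assumed aggregation rule.
def aggregate_score(tallies, scores=(1, 3, 5, 7, 9)):
    """tallies[t] = number of experts assigning the t-th score."""
    return sum(n * s for n, s in zip(tallies, scores)) / sum(tallies)

org_capacity_tallies = [0, 1, 4, 3, 2]   # hypothetical expert counts
raw_score = aggregate_score(org_capacity_tallies)
optimum = 9.0                            # assumed reference (best attainable) value
print(raw_score / optimum)               # normalized qualitative index
```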
Construction of the Evaluation Model
The CCR model is a typical DEA method with the advantages of being simple, practical, and easy to operate, and results from many applications show that it has a strong advantage in analyzing the technological and scale efficiency of decision-making units with multiple inputs and multiple outputs [33], which makes it highly suitable for evaluating the synergy of the PPC. Therefore, this paper puts forward a synergetic evaluation model of the PPC from the perspective of inputs, following these steps: (1) Absolute effect of a single project.
(2) Absolute effect of project portfolio.
(3) Build the synergetic evaluation model of the PPC.
The absolute-effect screening of single projects and project portfolios ensures the feasibility of the decision-making units: it keeps enterprises from committing to ineffective projects and restricts the subsequent DEA analysis of project portfolios to the feasible solution region. To ensure that all selected projects are in keeping with the corporate strategic objectives, the absolute-effect check of each single project is necessary before a project portfolio is proposed. Assume that the number of decision-making units is known and that the input and output indices of each decision-making unit are known; for each unit, every input is compared with the maximum allowed value of that input, and every output is compared with the minimum expectation of that output under the given amount of investment resources. A project is admissible when the amount of each investment resource is not more than its maximum and the amount of each output is not less than its minimum expectation; the absolute effect of a single project can therefore be written as Equation (12).
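A minimal sketch of the single-project screening in Equation (12): a project is admissible only when every input stays within its cap and every output reaches its minimum expectation. All names and numbers below are hypothetical.

```python
# Absolute-effect screening of a single project (sketch of Equation (12)).
def passes_absolute_effect(inputs, input_caps, outputs, output_floors):
    within_caps = all(inputs[k] <= input_caps[k] for k in inputs)
    meets_floors = all(outputs[k] >= output_floors[k] for k in outputs)
    return within_caps and meets_floors

project_inputs = {"capital": 2.1, "org_capacity": 0.6, "component_fn": 0.5}
project_outputs = {"revenue": 3.0, "strategic_fit": 0.7, "social_satisfaction": 0.6}
caps = {"capital": 2.5, "org_capacity": 0.8, "component_fn": 0.7}
floors = {"revenue": 2.5, "strategic_fit": 0.6, "social_satisfaction": 0.5}

print(passes_absolute_effect(project_inputs, caps, project_outputs, floors))  # True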
Absolute effect of project portfolio
Assume that m decision-making units pass the absolute-effect screening of single projects; the number of possible portfolios that must then pass the absolute-effect screening grows as 2^m (all combinations of the m admissible projects). Obviously, the amount of investment resources demanded by an admissible project portfolio must not exceed the total amount of available investment resources, and the amount of each output of the portfolio must not be less than its minimum expectation. The absolute effect of a project portfolio can therefore be written as Equation (13), in which the left-hand quantities represent the total amount of each investment resource demanded by project portfolio K and the aggregate amount of each output that portfolio K may bring, and the right-hand quantities are, respectively, the total amount of available investment resources and the minimum expectation for each output under the given amount of investment resources.
Evaluation model of PPC
Assume that the decision-making units passing the absolute-effect screening of project portfolios are known. According to Equation (13), each portfolio K is described by the total amount of each investment resource it demands and the aggregate amount of each output it may bring; its input and output index vectors are X = (x1, x2, x3) and Y = (y1, y2, y3), respectively. Following DEA principles, the weight vectors of the input and output indices are v = (v1, v2, v3) and u = (u1, u2, u3), and the effectiveness of a decision-making unit is the ratio of its weighted outputs to its weighted inputs. Taking the effectiveness of all decision-making units as constraints, a model measuring the relative efficiency of the decision-making units can be constructed as Equation (14) [34]. Through the Charnes-Cooper transform, Equation (14) can be converted into a linear programming problem, shown as Equation (15); solving the dual of Equation (15) and introducing slack variables yields the synergetic evaluation model of the PPC, shown as Equation (16). In Equation (16), the feasible set is defined by requiring that the weighted combination of the units' inputs does not exceed θ times the inputs of the evaluated portfolio while the weighted combination of their outputs is not less than its outputs; θ is the objective function, representing the minimum proportion of the investment required when the outputs are held constant. According to DEA principles, only when θ = 1 and the input and output slack variables are both zero is project portfolio K valid in this model, with an optimal vector of input indices; otherwise, the DMU is DEA-invalid. The synergetic evaluation model of the PPC is thus built and provides a reference for project portfolio decision making.
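For readers who want to experiment with the model, the input-oriented CCR envelopment problem behind Equations (14)-(16) can be solved with a standard linear-programming routine. The sketch below is a generic textbook CCR implementation, not the authors' code; the data layout (inputs and outputs stored column-wise per DMU) is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X is an (m, n) array of inputs and Y an (s, n) array of outputs for
    n decision-making units; returns (theta, lambda_vector)."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables: [theta, lambda_1, ..., lambda_n]; objective: minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints:  X @ lam <= theta * x_j0   (written as -theta*x_j0 + X@lam <= 0)
    A_in = np.hstack([-X[:, [j0]], X])
    # Output constraints: Y @ lam >= y_j0           (written as -Y @ lam <= -y_j0)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0], res.x[1:]
```

A DMU with θ = 1 (and zero slacks in a second-phase check) is DEA-valid in the sense used above; θ < 1 means its inputs could be proportionally reduced while keeping the outputs unchanged.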
Computational Experiment and Results
To demonstrate the effectiveness and scientific soundness of the above methods, this paper takes Company A as a case study. The company must choose three out of nine available projects to run simultaneously within an RMB 5.9 billion budget, while aiming to find the optimal PPC combination. Based on the annual corporate strategic objectives and the experts' reviews, the company's input indices (capital cost X1, organizational capacity X2, and component function X3), output indices (economic profit Y1, enterprise adaptation Y2, and social satisfaction Y3), the probability of success of each project, and the synergistic effects between projects under synergistic configuration are shown in Table 2. Based on the company's basic conditions, such as the capital limit, reasonable yield thresholds, and resource constraints, 10 portfolios pass the absolute-effect screening according to Equations (12)-(13); the normalized data and the efficiency of the decision-making units are calculated by Equations (10)-(16), and the results are shown in Table 3. In Table 3, the efficiencies of PP-DMU1, PP-DMU5, and PP-DMU8 are 1.000, so these DMUs are valid, and the others are invalid. Analyzing the invalid DMUs reveals the obstacles to the further development of the company and provides decision-making advice for managers.
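As a purely hypothetical illustration of how a table like Table 3 could be reproduced in code, the screened portfolios can be scored with the ccr_efficiency sketch given above; the random data below are made up and do not correspond to Company A.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.4, 1.0, size=(3, 10))   # 3 normalized inputs  x 10 portfolios
Y = rng.uniform(0.4, 1.0, size=(3, 10))   # 3 normalized outputs x 10 portfolios

scores = [ccr_efficiency(X, Y, j)[0] for j in range(10)]
valid = [f"PP-DMU{j + 1}" for j, theta in enumerate(scores) if abs(theta - 1.0) < 1e-6]
print("DEA-valid portfolios:", valid)
```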
Analysis of invalid DMUs
Analyzing the slacks of the input and output indices helps to identify the reasons why decision-making units are invalid and provides a direction for adjusting the input-output scale. The slacks of inputs and outputs for the DMUs are shown in Table 4. Compared with the other project portfolio decision-making units, the organizational capacity in PP-DMU9 and PP-DMU10 exceeds the optimal actual demand; however, this is not necessarily detrimental to the management process, since it reflects the fact that the managers in PP-DMU9 and PP-DMU10 could handle more complex projects, which suggests favorable directions for reallocating the company's human resources.
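The slack analysis behind Table 4 can be approximated from the envelopment solution. The shortcut below reads input excesses and output shortfalls off the optimal (θ, λ) returned by the ccr_efficiency sketch; a rigorous CCR treatment uses a second-phase LP that maximizes the slacks, so this is only an approximation.

```python
import numpy as np

def radial_slacks(theta, lam, X, Y, j0):
    """Input excess and output shortfall implied by an envelopment solution
    (theta, lam) for DMU j0; see the ccr_efficiency sketch above."""
    slack_in = theta * X[:, j0] - X @ lam    # resources that could still be cut
    slack_out = Y @ lam - Y[:, j0]           # outputs that could still be raised
    return slack_in, slack_out
```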
Analysis of valid DMUs
From the economic interpretation of this CCR model, PP-DMU1, PP-DMU5, and PP-DMU8 are all valid in terms of both technological and scale efficiency. These three valid DMUs, each composed of three different projects, help the company's managers narrow the scope of decision-making and improve its quality. However, the available funds (RMB 5.9 billion) cannot meet the demand of running the projects of all three valid portfolios at the same time. Therefore, based on the needs of the company's development at the current stage, it is necessary to select the optimal project portfolio from among these valid DMUs.
Given the nature of DEA validity, it is sufficient to compare different outputs under the same input vector. This paper therefore analyzes the DEA-valid units under three different situations and determines the optimal project portfolio; the results are shown in Table 5. Taking the output indices of economic revenue, strategic fit, and social satisfaction in turn as the main measurement index, the DEA efficiency of each project portfolio can be calculated. When economic revenue is taken as the study object, PP-DMU5 is relatively valid, which indicates that the portfolio composed of projects 1, 3, and 5 is the best when economic revenue is considered the main measurement index.
Therefore, when the other outputs are taken as study objects, the best project portfolio under different situations, such as taking strategic fit or social satisfaction as the main measurement index, can be selected in the same way, which provides a decision basis for managers to select the best project portfolio according to different requirements.
Conclusions
This paper studies the efficiency of PPC synergistic management by using the DEA method under a given environment. For the selection of indices, we built a scientific and rational evaluation index system based on existing research results, which takes both financial and non-financial objectives into consideration. The system covers the input indices of capital cost, organizational capacity, and component function and the output indices of economic revenue, strategic fit, and social satisfaction, together with an auxiliary index, the probability of success, and it makes it easier to build a synergetic evaluation model of the PPC based on DEA. For data processing, an improved energy normalization method is used to eliminate the numerical differences between the different evaluation indices. Prior to the evaluation of project portfolio coordination, reasonable yield thresholds, resource constraints, and other conditions are set; subsequently, the absolute-effect screening of single projects and project portfolios effectively filters out inadmissible options and further reduces the number of decisions required. A model for selecting the optimal, DEA-effective project portfolio is then proposed, providing insights to decision-makers and improving decision-making, and the results of a computational experiment suggest that this model is reasonable for selecting the optimal project portfolio.
To our knowledge, this is the first time the notion of synergetic evaluation and DEA has been applied to the PPC domain. This enriches project management theory, contributes to integrating a group of projects into a project portfolio from a synergistic perspective, and helps an organization optimize its multi-project management. The model is verified by a computational experiment based on data from a Chinese firm and provides a basis for selecting the best project portfolio, which supports decision making on the PPC. Practitioners may benefit most from the finding that synergetic relationships must be considered in an integrated fashion to achieve the optimal PPC.
There are also some shortcomings in this study. (1) The systematic deficiencies of the indices induced by negative synergistic relationships between indices have not been taken into account, which might affect the scientific soundness of the evaluation results. (2) The DEA-based evaluation method of PPC synergistic management only considers the projects at a very specific period of time; in reality, the synergistic effect of projects appears differently at different stages, while the demand for resources also varies. Therefore, multi-stage efficiency analysis of the PPC will be the authors' future direction of research. (3) The effectiveness and feasibility of the proposed model are verified by a computational experiment, but the selected projects are only consistent with the problem of synergetic evaluation, and the results of the computational experiment cannot yet be generalized. These limitations will be studied further in the future.
Figure 1. Synergetic evaluation index system of the PPC. The PPC synergistic management evaluation index system consists of input indices, output indices, and auxiliary indices. Considering the uncertainties both within the organization and in the external social environment, it is difficult to be certain of the success of any project; therefore, this paper introduces auxiliary (supporting) indices into the evaluation index system to indicate the probability of a project's success under the influence of such uncertainties. The values of the supporting indices are based on the experts' reviews as well as on conclusions drawn from their years of experience.
Table 2.
Initial data of different projects
Table 3 .
Values of indices and results for DMU of project portfolio
Table 4 .
Slack of inputs and outputs for DMUs | 2017-09-23T07:29:43.336Z | 2017-09-18T00:00:00.000 | {
"year": 2017,
"sha1": "cc5a8bf212cd29bc60d5f46ee81686d5c6893e93",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/201709.0078/v1/download",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "cc5a8bf212cd29bc60d5f46ee81686d5c6893e93",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
127954663 | pes2o/s2orc | v3-fos-license | Effect of RE3+ on Structural Evolution of Rare-Earth Carbonates Synthesized by Facile Hydrothermal Treatment
Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via G. Di Biasio 43, 03043 Cassino, FR, Italy Center for Hydrogen-Fuel Cell Research, KIST-Korea Institute of Science and Technology, Hwarang-ro 14-gil 5, Seongbuk-gu, Seoul, Republic of Korea Graduate School of Energy and Environment, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul, Republic of Korea Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy INSTM-National Interuniversity Consortium of Materials Science and Technology, Via G. Giusti 9, 50121 Florence, Italy
Introduction
Rare-earth-based materials have drawn attention in the past years due to their wide range of applications in the lighting industry [1], in electrochemical energy devices [2], in catalysis [3], and in biological [4] and magnetic [5] applications. Their very interesting properties are largely due to the unique 4f electron orbitals, which have highly localized electronic states and very predictable electronic transitions [6], weakly influenced by either the coordination environment or the crystal field. Among rare-earth materials, the rare-earth carbonates, both amorphous and crystalline, have recently attracted considerable research work in the search for potential materials for specialized industrial applications [7].
As further evidence of the great technological interest in this field, in the last few years Kaczmarek et al. [8] and Kim et al. [9] authored two systematic reviews concerning rare-earth carbonates and hydroxycarbonates. At room temperature and pressure (25 °C and 1 atm), the rare-earth carbonates are divided into two groups: the hydrated normal rare-earth carbonates (RE2(CO3)3·xH2O) and the rare-earth hydroxycarbonates (RE(OH)CO3·xH2O) [9]. Nowadays, these materials, besides their intrinsic properties, are also very appealing for their potential as precursors for nano- and microsized rare-earth oxides [8,10].
In the recent scientific literature, various synthesis methods have been proposed to produce different rare-earth carbonates; among them, the most common are precipitation [11] and homogeneous precipitation [12,13], sonochemical synthesis [14], and hydrothermal treatment [15,16]. In particular, the latter can be considered an effective and cheap route, thanks to the low synthesis temperature, high powder reactivity, and shape control [17,18], and it is therefore frequently used for large-scale production [8]. Indeed, the hydrothermal technique has become one of the most important tools for advanced ceramic processing, particularly for nanostructured materials, mainly because it readily yields monodispersed and highly homogeneous nanocrystals [19].
Among the rare-earth carbonates, cerium-based carbonates play a prominent role, and several reports can be found in the literature on the hydrothermal synthesis of cerium carbonates. For example, Nakagawa et al. [20] obtained spherical nanoparticles of hexagonal CeCO3OH treated at 140 °C for 96 h, and cerium oxycarbonate particles with different morphologies by adding laurylamine to the mineralizer solution; Sun et al. [21] synthesized flower-like particles of mixed orthorhombic and hexagonal CeOHCO3 at 180 °C for 72 h; and Hrizi et al. [22] obtained various morphologies of orthorhombic CeOHCO3 by varying the cetyltrimethylammonium bromide/Ce ratio in systems formed by Ce(NO3)3 and urea treated at 180 °C for 3 h. On the contrary, Han et al. [23] reported that hydrothermal treatments below 150 °C yield amorphous or poorly crystallized cerium carbonate powders. Thus, very different results have been reported for hydrothermal treatments in terms of the phases formed and their morphologies, depending on the temperature and duration of the hydrothermal method and on the composition of the mineralizer solution.
The aim of the present work is to synthesize, via a facile hydrothermal treatment, rare-earth carbonate-based nanopowders to be used as precursors for oxide powders characterized by very good sinterability after a mild calcination step. With a view to many potential practical applications, the adopted hydrothermal process was kept as simple and cheap as possible, i.e., by applying a low temperature (120 °C) and by using cheap raw materials without any additives. The work mainly hinged on the hydrothermal synthesis of cerium carbonate-based materials and on finding the optimal synthesis conditions to produce them. In addition, the same hydrothermal conditions were also applied to other rare-earth precursors to study their influence on both the structure and the microstructure of the formed phases.
Our results show that the hydrothermal transformations of cerium compounds are rather complex, involving the evolution of several phases before an equilibrium state is reached. Furthermore, significant differences emerge for samples containing different rare-earth compounds, in terms of both phases and morphologies, probably due to the lanthanide contraction law. Finally, some recipes for preparing rare-earth carbonate-based powders via facile hydrothermal synthesis are proposed.
Materials and Methods
Cerium(III) nitrate (Ce(NO3)3·6H2O, 99.0%, Sigma-Aldrich, Italy) as the cerium precursor and ammonium carbonate ((NH4)2CO3 with NH3 > 30%, Fluka, Italy) as the precipitating/mineralizing agent were used as starting chemicals for the hydrothermal syntheses. For the other rare-earth-based materials, the corresponding hydrated nitrates (i.e., RE(NO3)3·xH2O, 99.9%, from Sigma-Aldrich, Italy, with x = 5 or 6 depending on the rare-earth precursor) were used. All the chemicals were used as received without any further purification.
Independently of the final composition, the procedure for the hydrothermal syntheses was the following: (a) A proper amount of rare-earth nitrate was dissolved in deionized water to obtain a 0.1 M solution (solution A), and ammonium carbonate was dissolved in deionized water up to 0.5 M (solution B). Both solutions were vigorously stirred for 1 h to favor homogenization. (b) A proper volume of solution B was quickly added to the selected volume of solution A, maintained under mild stirring, in order to reach R = 2.5, where R is the molar ratio between carbonate ions and total metal cations. Some experiments were also carried out at R = 10. When solution B was quickly added to solution A, a white precipitate instantly formed. (c) The as-prepared suspensions were transferred into Teflon vessels (60 mL), which were then sealed and held in outer stainless steel pressure vessels for the hydrothermal treatment. The treatment was carried out in an air-thermostated rotating oven at 120 °C and 25 rpm to allow complete homogenization of the system during the process. (d) After the selected reaction times, the vessels were quenched with cold water, and the resulting products were repeatedly filtered using a vacuum pump, washed with distilled deionized water, and finally dried overnight at 80 °C in static air. The various synthesized samples and their labels are reported in Table 1.
All samples were characterized by X-ray powder diffraction (XRD) using a Panalytical X'PERT MPD diffractometer to detect the crystalline phases. The primary crystallite size was calculated by the Scherrer equation [24], D = Kλ/(B cos θ), where K is the shape factor, equal to 0.89 for spherical particles, λ is the X-ray wavelength (0.1541 nm for Cu Kα1), θ is the Bragg angle of the peak, and B is the full width at half maximum (FWHM) corrected for the instrumental broadening. B is calculated from the measured FWHM and the instrumental broadening B_instr, which is determined using standard polycrystalline silicon. Both fitting profiles were obtained using the pseudo-Voigt function as a mathematical model for the XRD peaks, and the calculations related to the Scherrer formula were carried out using the X'Pert HighScore software from Panalytical. The quantitative phase analysis was performed according to the method recently proposed by Toraya [25], whereas the calculations to extract the integrated intensities of the XRD peaks (up to 60° 2θ) were carried out using the X'Pert HighScore software from Panalytical. The specific surface area was measured by the nitrogen adsorption/desorption isotherm technique through the Brunauer-Emmett-Teller (BET) method using a Micromeritics Gemini apparatus; before the measurement, the sample was preliminarily degassed under vacuum at 100 °C.
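As an aside for readers reproducing the size analysis, the Scherrer estimate quoted later in the text (about 60 nm from the (302) peak at 30.50° 2θ) can be checked with a few lines of code. The quadrature (Gaussian) correction for instrumental broadening used below is one common convention and is adopted here as an assumption; the FWHM values are placeholders chosen only to illustrate the call.

```python
import math

def scherrer_size(fwhm_obs_deg, fwhm_instr_deg, two_theta_deg,
                  wavelength_nm=0.1541, k=0.89):
    """Crystallite size (nm) from a single XRD peak via the Scherrer equation.
    The quadrature correction for instrumental broadening is an assumption."""
    b = math.radians(math.sqrt(fwhm_obs_deg**2 - fwhm_instr_deg**2))  # rad
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (b * math.cos(theta))

# Placeholder FWHM values; yields roughly 60 nm for the (302) peak position.
print(scherrer_size(fwhm_obs_deg=0.16, fwhm_instr_deg=0.08, two_theta_deg=30.50))
```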
The thermal behavior of the samples was investigated through simultaneous differential scanning calorimetry and thermogravimetric analysis (DSC and TGA, thermal analyzer STA 409, Netzsch) in air, with a heating rate of 10 °C/min up to 1200 °C; α-Al2O3 was used as a reference.
The morphology of the powders was observed by scanning electron microscopy (SEM) (Inspect F, FEI Co., USA).
Results and Discussion
During the hydrothermal synthesis, different crystallization routes were observed for the samples, and the phases obtained and their evolution are mostly driven by the selected rare-earth cation. Firstly, the cerium-based precipitate is mainly amorphous, as clearly evident from its diffraction pattern (Figure 1(a)), thus confirming the well-known results reported elsewhere [7,10], in which amorphous precipitates are formed from a supersaturated aqueous solution containing rare-earth cations and carbonate ions at R = 2.5. Being amorphous, a univocal phase identification by diffraction analysis is not possible; however, the thermogravimetric analysis (Figure 1(b)), along with analogous results reported in [10,11], allows supposing that it is constituted by a hydroxycarbonate (CeCO3OH). In fact, it is possible to exclude the normal cerium carbonate, whose minimal weight loss should be more than 25%, whereas the thermal decomposition of cerium hydroxycarbonate to CeO2 is expected to give a 20.7% weight loss. Therefore, our measured weight loss (i.e., 21%, Figure 1(b)) is in very good agreement with the theoretical one. The amorphous precipitate, instantly formed when solution B is added to solution A, might be obtained by a two-step reaction in which the hydrated Ce3+ cation first undergoes hydrolysis in aqueous solution, as also proposed by Hirano and Kato [26]. During the hydrothermal treatment, the amorphous precipitate undergoes several phase transformations. The diffraction patterns of the C samples, treated for different times up to 168 h, are shown in Figure 2, clearly suggesting the crystallization route during the ongoing hydrothermal treatment. After 1 h, the sample is still essentially amorphous, although several very small and broad peaks of orthorhombic CeCO3OH (ICDD card no. 41-0013) begin to appear (Figure 2(a)), thus indicating the onset of the hydrothermal crystallization of the amorphous precipitate.
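As an aside on the TGA discussion above, the theoretical weight losses can be verified with simple formula-mass arithmetic, assuming that the solid residue in both cases is CeO2 (the displayed decomposition reactions did not survive extraction, so this assumption is stated explicitly).

```python
# Theoretical TGA weight losses of the two candidate precursors, assuming the
# residue is CeO2 in both cases. Atomic masses in g/mol.
Ce, C, O, H = 140.116, 12.011, 15.999, 1.008

m_CeCO3OH  = Ce + C + 4 * O + H          # formula mass per Ce
m_Ce2CO3_3 = 2 * Ce + 3 * (C + 3 * O)    # formula mass per 2 Ce
m_CeO2     = Ce + 2 * O

loss_hydroxycarbonate = 1 - m_CeO2 / m_CeCO3OH
loss_normal_carbonate = 1 - 2 * m_CeO2 / m_Ce2CO3_3

print(f"CeCO3OH   -> CeO2: {loss_hydroxycarbonate:.1%}")   # ~20.7 %
print(f"Ce2(CO3)3 -> CeO2: {loss_normal_carbonate:.1%}")   # ~25.2 %
```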
This transformation, very likely occurring through a dissolution-crystallization mechanism, is favored by the higher solubility of the amorphous precursors with respect to the crystalline phases. After 2 h (Figure 2(b)), the crystallization process continues, and in addition to orthorhombic CeCO3OH (which is the main crystalline phase), the hexagonal CeCO3OH phase begins to form, as shown by some small and broad XRD peaks attributable to this phase (ICDD card no. 62-0031). By extending the hydrothermal process up to 8 h (Figure 2(d)), the amount of the hexagonal phase increases (Figure 2(c)). At longer times, the orthorhombic phase is no longer present, the hexagonal phase remaining the only crystalline one. These findings suggest that the orthorhombic polymorphic form is the first one to crystallize and that it is indeed a metastable phase, easily converted into the hexagonal one. This is not surprising because, in hydrothermal processes, metastable phases are often the first to form [27]. However, after the formation of the hexagonal phase, the system has still not achieved the equilibrium state. The redox behavior of cerium cations can induce further transformations as long as oxidizing conditions are preserved during the treatment, i.e., sufficient O2 is present in the system. In fact, by careful inspection of Figure 2(e) (related to 16 h of treatment), the main XRD peaks of cerianite (CeO2, ICDD card no. 75-390) distinctly appear. The Ce3+ oxidation with the consequent formation of cerianite is, not surprisingly, a slow transformation, requiring many hours, especially at the low adopted temperature. Several steps are involved in this transformation (oxidation of Ce3+, evolution of CO2 and H2O, breakdown of the hexagonal lattice, and formation of the fluorite lattice), which can be generally represented by the chemical reaction given in equation (6). It is worth mentioning that even though equation (3) seems similar to equation (6), they are completely different. In fact, the former occurs in dried powders and is induced by the temperature during the TGA analysis (Figure 1(b)); on the contrary, the latter occurs during the hydrothermal treatment at 120 °C between the solid dispersed in the mineralizer solution and the O2 dissolved in it.
By comparing the diffraction patterns of samples C48 (Figure 2(f)) and C168 (Figure 2(g)), we can note that the intensity of the cerianite XRD peaks increases while that of the hexagonal-phase XRD peaks decreases. The estimation of the cerianite and hexagonal phase amounts in samples C48 and C168 was carried out through the method recently proposed by Toraya [25], using all the peaks up to 60° 2θ. On the basis of this procedure, cerianite is 19.1% w/w in sample C48 and 46.8% w/w in sample C168. Therefore, by increasing the duration of the synthesis, the transformation of equation (6) keeps going on, and after 48 h
(Figure 2(f)) and 168 h (Figure 2(g)), the amount of cerianite is significantly increased (i.e., more than doubled). In this regard, it should also be noted that the complete transformation of CeCO3OH into CeO2 requires an amount of O2 which is probably not present in the small reactors used in our experiments. Anyway, it is clearly evident that the oxidation of Ce3+ to Ce4+ destabilizes the hexagonal hydroxycarbonate, conversely favoring the formation of fluorite-structured CeO2. As long as a sufficient duration of the treatment is guaranteed (and an adequate oxidizing condition is maintained, i.e., the available O2 in the closed system is sufficient to oxidize all Ce3+ cations), the stable phase of hydrothermal ageing of Ce(III) nitrate in the presence of ammonium carbonate as the mineralizing/precipitating agent is indeed cerianite. However, the conversion into cerianite seems rather weak at 120 °C, even if an increase in the operating temperature could accelerate reaction (4). In this way, the formation of fluorite-structured ceria with a very good morphology could be obtained directly via hydrothermal synthesis. Moreover, an interest in the cerium hydroxycarbonate should lead one to select a proper duration of the hydrothermal treatment, i.e., 8 h under our chemical-physical conditions. These phase transformations proceed with corresponding morphological modifications of the powders. In Figure 3, some exemplary SEM micrographs of the samples are reported. Sample C0, i.e., the as-obtained amorphous precipitate, is constituted by relatively large agglomerates of irregular particles without a well-defined shape. These agglomerates are gradually broken during the hydrothermal process, completely disappearing after 8 h (Figure 3(e)). Clearly, this morphological evolution can be explained only by supposing a dissolution-reprecipitation (crystallization) mechanism, thus confirming previous data reported in the literature [11]. During the fragmentation and dissolution of the agglomerates, some particles emerge from them, as well visible in Figures 3(b), 3(c), and 3(d), in which both elongated particles and spherical-like particles can be noticed. As the former are obtained first, they are very probably constituted by orthorhombic CeCO3OH, especially considering Figure 2(a) and the results of Nakagawa et al. in [20], in which orthorhombic CeCO3OH particles show a shuttle-like (i.e., elongated) morphology. On the contrary, the spherical-like particles are reasonably constituted by hexagonal CeCO3OH, as sample C8 is constituted only by this phase (Figure 2(d)), showing a very homogeneous microstructure (Figure 3(e)) characterized by rounded particles whose average size is about 100 nm.
This value is also consistent with the crystallite size of 60 nm calculated by using the Scherrer formula on the (302) peak at 30.50° 2θ.
Therefore, through our facile 8-hour hydrothermal treatment, monophasic hexagonal CeCO3OH powders characterized by an excellent morphology and nanometer size with a monomodal distribution were synthesized.
Extending the hydrothermal treatment beyond 8 h did not cause any relevant morphological evolution, and the homogeneous microstructure of spherical-like particles is maintained up to 168 h (Figures 3(f), 3(g), and 3(h)). Cerianite is practically negligible in C16, whilst it is 19.1% w/w in C48 and 46.8% w/w in C168, and reaction (6) occurring during the hydrothermal process takes place without morphological modifications, thus leading us to exclude a dissolution-precipitation mechanism for equation (6). As a confirmation of this, a careful inspection of Figures 3(f), 3(g), and 3(h) reveals that the particle size is roughly unaltered. This evidence is further confirmed by the calculated crystallite size of the corresponding samples: 65 nm for the hexagonal CeCO3OH of samples C16, C48, and C168 and 85 nm for the cubic CeO2 of samples C48 and C168 (in sample C16, the (111) peak of cubic CeO2 is too weak to calculate the crystal size by the Scherrer formula). Those values are consistent with the measured surface area too. In fact, the C168 surface area is 11 m2/g, corresponding to approximately 100 nm as the average diameter, under the hypothesis that the contents of cubic CeO2 and hexagonal CeCO3OH are 46.8% w/w and 53.2% w/w, respectively.
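The consistency check between the BET surface area and the particle size can be reproduced with the usual spherical-particle relation d = 6/(ρ·SSA). The phase densities below are approximate values adopted here purely as assumptions; only the order of magnitude matters for the comparison.

```python
# BET-equivalent spherical diameter, d = 6 / (rho * SSA), for sample C168.
# The phase densities are assumed, order-of-magnitude values.
rho_CeO2 = 7.2e6        # g/m^3 (~7.2 g/cm^3, assumed)
rho_CeCO3OH = 4.7e6     # g/m^3 (~4.7 g/cm^3, assumed)
w_CeO2, w_CeCO3OH = 0.468, 0.532          # weight fractions from the XRD analysis
ssa = 11.0                                # m^2/g, measured specific surface area

# Mass-weighted harmonic mean density of the two-phase mixture.
rho_mix = 1.0 / (w_CeO2 / rho_CeO2 + w_CeCO3OH / rho_CeCO3OH)
d = 6.0 / (rho_mix * ssa)                 # metres
print(f"{d * 1e9:.0f} nm")                # roughly 100 nm, consistent with the text
```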
In conclusion, based on the obtained results, we can suggest that the following sequence of transformations occurs to the cerium carbonate precursor upon hydrothermal treatment with ammonium carbonate as the precipitating/mineralizing agent: amorphous precipitate to orthorhombic CeCO3OH (fast transformation, (7)); orthorhombic CeCO3OH to hexagonal CeCO3OH (fast transformation, (8)); hexagonal CeCO3OH to CeO2 (slow transformation, (9)). The proposed rates of transformations (7), (8), and (9) are obviously expressed in relative terms. In fact, based on the diffraction patterns in Figure 2, reactions (7) and (8) are completed within 8 h (very likely, the former is actually completed within 4 h), whereas reaction (6) requires a duration certainly longer than 48 h. As reported above, the cerianite content in C48 is less than 20% w/w, and it will increase significantly as the hydrothermal treatment proceeds. We can also suppose that the fast transformations (7) and (8) are based on a dissolution-reprecipitation mechanism, as the morphology of the powder is completely changed after them, whereas the slow transformation (9) does not proceed via a dissolution-reprecipitation mechanism, as mentioned before; therefore, another mechanism should be invoked.
Against this background, many different rare-earth carbonates were synthesized under the same hydrothermal conditions (i.e., 16 h at 120 °C).
The selected duration is a little longer than the one needed for the hexagonal CeCO3OH to form, mainly because the absence of multiple oxidation states in the other lanthanides used prevents a transformation like equation (9). Therefore, a longer treatment duration could favor the completion of the hydrothermal transformations.
In Figures 4(a erefore, samples G16, D16, H16, and E16 are all monophasic, tengerite-type, i.e., hydrated, rare-earth carbonates with an orthorhombic crystal structure.is conclusion also agrees with the general knowledge that rare-earth carbonates from samarium through thulium (plus yttrium) are isostructural to tengerite [9].Moreover, the shifts highlighted in the inset of Figures 4(b) are in perfect agreement with the contraction law of lanthanides.In fact, the ionic radius 6 Advances in Materials Science and Engineering continuously decreases from Gd 3+ to Er 3+ (from 0.1053 nm to 0.1004 nm, respectively), with a consequent decrease of the interatomic distances and, in turn, an increase of the Bragg angles.On the contrary, by using Yb as rare earth, no sign of crystallization occurred after 16 h of hydrothermal treatment, as shown in Figure 4(H).A deeper study to unravel the conditions to hydrothermally induce Yb-based precursors crystallization was outside the scope of this work, and no further investigation was carried out.
In Figure 4(a), the diffraction patterns of samples C16, N16, S16, and CS16 are reported.Sample N16 (Figure 4(B)) exhibits the same behavior of C16 (as above described).In fact, all its XRD peaks can be assigned to the hexagonal NdCO 3 OH (ICDD card no.27-1295) which is isostructural to the ceriumbased hexagonal phase.However, even if sample S16 is crystallized as well, its diffraction pattern (Figure 4(C)) appears much more complex compared to the other ones.By carefully inspecting Figure 4(C), we can identify the presence of all the main peaks of the already mentioned tengerite-type phase (marked with a "T").erefore, one of the first phase formed via hydrothermal treatment for Sm precursors seems to be an orthorhombic Sm 2 (CO 3 ) 3 •2H 2 O. Yet, in sample S16 at least a second crystalline phase is present, even if a direct identification of these Sm-based compounds has not been possible by consulting the ICDD database.However, by extending the research to a generic rare-earth element, we reasonably suppose that the additional phase could be the orthorhombic SmCO 3 OH, isostructural to the orthorhombic NdCO 3 OH (ICDD card no. .Actually, an orthorhombic SmCO 3 OH is present in the ICDD database (ICDD card no.41-663), albeit exhibiting poor quality (even lacking of the space group) and not corresponding to S16 peak positions.Definitely, we can assume that, after 16 h of hydrothermal treatment, the Sm-based precursor is constituted by hydrated samarium carbonate and samarium hydroxycarbonate.
Finally, since Ce and Sm have shown a very different behavior, a further hydrothermal treatment under the same synthesis conditions was carried out on a system formed by is particular composition has been also selected by considering that Smdoped ceria has great importance as ceramic electrolyte for IT-SOFC.e corresponding diffraction pattern is reported in Figure 4( is value is consistent with the particles shown in Figure 5(b).In this case, a dissolution-reprecipitation mechanism is involved to convert the as-synthesized amorphous precursors into a crystalline phase.Figure 5 also reports the tengeritetype phase morphology, related to samples G16 and E16, showing particles with a similar shape to the other samples.
The dramatic difference in particle shape between G16 and E16 and the lanthanides with lower atomic number stands out immediately.
Their morphology appears highly homogeneous and characterized by acicular, needle-like crystals of the tengerite type, whose length is of some tenths of microns. A very similar morphology of the tengerite-type crystals was also recently reported in [7,28] for hydrothermal treatments of rare-earth-based precursors.
Summarizing all the obtained results, it can be pointed out that (i) rare earths with a low atomic number (i.e., Ce and Nd) form hexagonal hydroxycarbonates with spherical, monomodal, and nanosized particles; (ii) intermediate-size rare earths (i.e., Sm) form biphasic products, i.e., orthorhombic hydroxycarbonate and the tengerite-type phase; and (iii) rare earths with a higher atomic number (i.e., Gd, Dy, Ho, and Er) form tengerite-type phases with elongated, micrometer-sized particles. Therefore, the lanthanide contraction law is again confirmed, as the rare-earth ionic radius heavily influences the formed phases and their consequent morphology.
In order to accelerate the hydrothermal crystallization, some additional experiments were conducted by using cerium and samarium-doped cerium precursors at the same temperature (120 °C), for various duration, in the presence of a more concentrated mineralizer solution, as suggested in literature [29] as well.An R ratio value of 10 was selected for these experiments.Figure 6 shows the diffraction patterns of Ce-and of Sm-doped Ce treated for 4 h.It appears clearly that, with higher concentration of carbonate ions, the crystallization route is completely different, confirming the results reported in [16].In fact, in Figure 6(a), a dissimilar and more complex diffraction pattern appears with respect to the one in Figure 2(c), relative to the same sample and treated for the same duration, in which in addition to the two phases detected in Figure 2(c), at least another crystalline phase appears; furthermore, the diffraction pattern in Figure 6(b) is even more complex.As a consequence, also the morphologies of samples in Figure 6 are affected by that.
Conclusions
We found that the phase and morphology of rare-earth carbonates, hydrothermally synthesized at 120 °C and in the presence of ammonium carbonate with R � 2.5, are strongly affected by the type of rare-earth precursors.A first, possible explanation is related to the multivalent behavior of some rare-earth elements.In the case of Ce and Nd, i.e., lanthanides with a lower atomic number, the formed phase is the hexagonal RECO 3 OH (if a proper duration of the hydrothermal treatment is used).In the case of Gd, Dy, Ho, and Er, i.e., lanthanides with a higher atomic number, hydrothermally treated for 16 h, the products are completely crystallized although the formed phase (i.e., the normal carbonate RE 2 (CO 3 ) 3 •2H 2 O with the tengerite-type structure) and the morphology are completely different.In the case of Sm, i.e., a rare earth with an intermediate atomic number, a mixture of two crystalline phases is formed, i.e., the normal carbonate with the tengerite-type structure and the orthorhombic hydroxycarbonate.Finally, in the case of Yb, the analyzed rare earth with the highest atomic number, an amorphous product is obtained.is very different behavior is likely related to the contraction law of lanthanides.Since the most desired shape for practical applications is spherical-like particles from hexagonal RECO 3 OH (RE � Ce-, Nd-, and Sm-doped Ce), the tailored hydrothermal treatment designed in this work has been conducted at 120 °C by using (NH 4 ) 2 CO 3 as the precipitating/mineralizing agent, R � 2.5, and a duration between 8 and 16 h.erefore, the use of a higher ratio R, i.e., a higher CO 3 −2 concentration, or shorter times, is strongly counterproductive.e oxidation of the RE 3+ cation, i.e., in the case of Ce, is possible at very long times (one week or more) and in the presence of O 2 , which can cause the breakdown of the hexagonal lattice of the hydroxycarbonate with consequent formation of a fluorite-type lattice although this route is not practical and economically feasible.Advances in Materials Science and Engineering
) (rare earths with lower atomic number) and Figure4(b) (rare earths with higher atomic number), the diffraction patterns of all samples treated for 16 h are displayed.By analyzing samples G16, D16, H16, and E16 (Figures4(b)), we can notice that all these samples exhibit the same diffraction pattern, even if a progressive shift in the peaks position is clearly evident, as reported in the inset of Figures 4(b), showing a magnification of the most intense peak located in the range 11.5-12 °•2θ.To the best of our knowledge, in the ICDD database, there are no cards containing Gd or Dy or Ho or Er characterized by the same diffraction patterns as those in Figures 4(b).Anyway, all their peaks can be attributed to ICDD card no.81-1538, corresponding to Y 2 (CO 3 ) 3 •2H 2 O, a compound known as Y-tengerite with the orthorhombic crystal structure (despite the abovementioned progressive shift in peaks position).erefore, the corresponding phase shown in Figure 4(E) can be identified as Gd 2 (CO 3 ) 3 •2H 2 O, isostructural to Y-tengerite.Analogous chemical formulas can be used for Dy, Ho, and Er, showing identical diffraction patterns.
Figure 4 :
Figure 4: Diffraction patterns of samples hydrothermally treated for 16 h.(a) Rare earths with a lower atomic number: Ce-based sample (A); Nd-based sample (B); Sm-based sample (C); Sm-(20%-) cerium-(80%-) based sample (D).(b) Rare earths with a higher atomic number: Gd-based sample (E), Dy-based sample (F), Ho-based sample (G), Er-based sample (H), and Yb-based sample (I).e inset reports a magnification of the most intense XRD peak of the tengerite-type structure present in different samples.e main XRD peaks are labelled with H hexagonal OH, O orthorhombic RECO 3 for the tengerite-type RE 2 (CO 3 ) 3 •2H 2 O.
D), showing the presence of hexagonal CeCO 3 OH (possible composition is Sm 0.20 Ce 0.80 CO 3 OH).It is interesting to notice that XRD peaks belonging to a fluorite-like structure do not appear in Figure 4(D) with respect to Figure 2(e), where almost 20% w/w of cerianite was already formed under the same treatment duration.Clearly, the presence of Sm in the lattice of hexagonal CeCO 3 OH makes the oxidation of Ce 3+ and the simultaneous formation of the cubic CeO 2 more difficult, even if this transformation could start by prolonging the hydrothermal treatment duration.Sample's morphology was revealed by SEM analysis, and four exemplary micrographs of samples treated for 16 h are shown in Figure5.e morphology of the amorphous ytterbium-based compound (Figure5(a)) is not well defined, similarly to the amorphous cerium-based compound.Sample N16 show small rounded particles in submicrometer assembled in clusters.e sharp shape of the XRD peaks in Figure4(B) accounts for relatively large crystalline grains, whose size (calculated by the Scherrer formula) is 295 nm.
Table 1 :
Synthesized samples and synthesis conditions. | 2019-04-23T13:23:45.176Z | 2019-03-14T00:00:00.000 | {
"year": 2019,
"sha1": "2b5dd28e9de321a46d9811da89ceae106ead85ed",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/1241056",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2b5dd28e9de321a46d9811da89ceae106ead85ed",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
118854850 | pes2o/s2orc | v3-fos-license | Polarized quark distributions in nuclear matter
We compute the polarized quark distribution function of a bound nucleon. The Chiral Quark-Soliton model provides the quark and antiquark substructure of the nucleon embedded in nuclear matter. Nuclear effects cause significant modifications to the polarized distributions including an enhancement of the axial coupling constant.
Polarized lepton-nucleus scattering experiments are an important tool in hadronic physics. For example, in order to study the spin structure function of the neutron, one must use nuclear targets. It is already well known that there are significant differences between free and bound nucleons in the unpolarized case; the famous European Muon Collaboration (EMC) effect [1] is the prime example. It is reasonable to assume that nuclear effects could appear in polarized quark distributions. Our purpose here is to calculate the analogous modification to the nucleon spin structure function g_1^(p,n)(x, Q^2): a 'polarized EMC effect'.
The first discussion of nuclear effects in the polarized quark distributions is in Ref. [2] in the context of dynamical rescaling. A more recent calculation [3] predicts dramatic effects for the bound nucleon spin structure function. We have shown [4,5] that sea quarks, introduced at the model scale, can have important consequences for modifications in the nuclear medium. We will use our previous work [4] as a basis for the results presented here, and provide a mechanism for the modification within the Chiral Quark-Soliton (CQS) model [6,7,8,9,10]. This relativistic mean field approximation to baryons has many desirable qualities such as the inclusion of antiquarks (which is deeply linked to satisfying sum rules and the positivity of Generalized Parton Distributions), and a basis in QCD [8]. We have previously shown how the model describes nuclear saturation properties, reproduces the EMC effect, and satisfies the bounds on unpolarized nuclear antiquark enhancement provided by Drell-Yan experiments [4]. Therefore, we expect the CQS model to produce a reasonable result for the polarized distributions.
The CQS model Lagrangian with (anti)quark fields ψ, ψ, and profile function Θ(r) is where Θ(r → ∞) = 0 and Θ(0) = −π to produce a soliton with unit winding number. The quark spectrum consists of a single bound state and a filled negative energy Dirac continuum; the vacuum is the filled negative continuum with Θ = 0. In both the free nucleon and vacuum sectors the positive continua are unoccupied. The wave functions in this spectrum provide the input for the quark and antiquark distributions used to calculate the nucleon structure function. We work to leading order in the number of colors (N C = 3), with N f = 2, and in the chiral limit. While the former characterizes the primary source of theoretical error, one could systematically expand in N C to calculate corrections. We also expect that since the nucleon size is stable in the limit N C → ∞, the quark wavefunctions, our primary focus, should be within a few percent of their N C = 3 value [11]. We take the constituent quark mass to be M = 0.42 GeV, which reproduces, for example, the N -∆ mass splitting at higher order in the N C expansion, and other observables [9]. We ignore contributions from the structure functions of pion quanta, which in this model propagate through constituent quark loops; they are suppressed by factors of O(1/N C ), and are not treated at leading order.
The theory contains divergences that must be regulated. We use a single Pauli-Villars subtraction as in Ref. [12], because we follow that work to calculate the quark distribution functions. The Pauli-Villars mass is determined by reproducing the measured value of the pion decay constant, f_π = 0.093 GeV, with the relevant divergent loop integral regularized using M_PV ≃ 0.58 GeV. This regularization also preserves the completeness of the quark states [12].
The results for binding and saturation of nuclear matter have been published elsewhere [4,5], but we provide a brief review for completeness. The nucleon mass, Eq. (2), is given by the sum of the energy of the single valence level (E_v) and the regulated energy of the soliton (E_Θ, equal to the energy in the negative Dirac continuum with the energy in the vacuum subtracted). The field equation for the profile function, Eq. (4), involves the quark scalar and pseudoscalar densities ρ^q_s and ρ^q_ps, respectively. The dependence of nucleon properties on the nuclear medium has been incorporated in the model by simply letting the quark scalar density in the field equation (4) contain a constant, but Fermi-momentum (k_F) dependent, contribution, P^N_s(k_F), equal to the convolution of the nuclear scalar density with the nucleon quark density arising from the other nucleons present in symmetric nuclear matter. This models a scalar interaction via the exchange of multiple pairs of pions between nucleons, and the parameter g_s is varied to obtain nuclear saturation. The nucleon scalar density is determined by solving the nuclear self-consistency equation, Eq. (6). The dependence of the nucleon mass, and of any other properties calculable in the model, on the Fermi momentum k_F enters through Eq. (6). Thus there are two coupled self-consistency equations: one for the profile, Eq. (4), and one for the density, Eq. (6). These are iterated until the change in the nucleon mass, Eq. (2), is as small as desired for each value of the Fermi momentum. We use the Kahana-Ripka (KR) basis [13] to evaluate the energy eigenvalues and wave functions used as input for the densities, the nucleon mass, and the quark distributions. We introduce a phenomenological vector meson (with mass fixed at m_v = 0.77 GeV and coupling g_v) [14] exchanged between nucleons, but not between quarks in the same nucleon (i.e. we ignore the spatial dependence of the vector field in the vicinity of a nucleon, treating only the nuclear mean field). The vector meson couples to the vector density; this mechanism is a proxy for uncalculated soliton-soliton interactions used to obtain the necessary short-distance repulsion which stabilizes the nucleus. The polarized quark distribution for flavor i is defined by the difference between the quark distributions with spin parallel (↑) and antiparallel (↓) to the nucleon spin; the polarized antiquark distribution is defined analogously. The isovector combination Δq^(T=1)(x) = Δu(x) − Δd(x) is the leading-order term in N_C, with the isoscalar polarized quark distribution Δq^(T=0)(x) = Δu(x) + Δd(x) smaller by a factor ∼ 1/N_C and set to zero. This follows from the fact that the isoscalar combination is normalized to the spin of the nucleon, which is O(N_C^0), while the isovector combination is normalized to the axial coupling, which is O(N_C^1) [15]. Therefore, at the model scale M_PV² ≃ 0.34 GeV², we see that a large portion of the spin is carried by the orbital motion of the constituent quarks in the valence level and the sea [8]. We will therefore suppress the isospin superscript in the following. The distributions are calculated using the KR basis at k_F = 0 and k_F = 1.38 fm⁻¹ (see Refs. [4,5]) almost exactly as in Ref. [12], where the quark distribution is given by a matrix element with the regulated sum taken over occupied states. The eigenvalues E_n are determined by diagonalizing the Hamiltonian, derived from the Lagrangian (1), in the KR basis. These are also the eigenvalues that enter into Eq. (2) for the mass.
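Purely as an illustration of the iteration just described, and not a reproduction of the actual Kahana-Ripka-basis solvers, the control flow of the two coupled self-consistency equations can be sketched as follows; `solve_profile`, `nucleon_scalar_density` and `nucleon_mass` are hypothetical placeholders for the solvers of Eqs. (4), (6) and (2).

```python
def iterate_self_consistency(k_F, solve_profile, nucleon_scalar_density,
                             nucleon_mass, tol=1e-6, max_iter=200):
    """Schematic fixed-point iteration coupling the profile equation (Eq. 4)
    and the nuclear self-consistency equation (Eq. 6) at fixed Fermi momentum.

    The three callables are placeholders for the actual solvers; only the
    control flow of the iteration is illustrated here.
    """
    P_s = 0.0                    # medium scalar contribution, zero at the start
    mass_old = float("inf")
    for _ in range(max_iter):
        theta = solve_profile(P_s)                  # Eq. (4): profile for given medium term
        P_s = nucleon_scalar_density(theta, k_F)    # Eq. (6): update the scalar density
        mass_new = nucleon_mass(theta)              # Eq. (2): valence + soliton energy
        if abs(mass_new - mass_old) < tol:          # stop when the mass stops changing
            return theta, P_s, mass_new
        mass_old = mass_new
    raise RuntimeError("self-consistency loop did not converge")
```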
The momentum sum rule (for the unpolarized distribution) is automatically satisfied as long as Eq. (2) defines the mass in the unpolarized analog of Eq. (9), and the same eigenvalues are used in both equations [12]. It is worth noting here that these two pictures are already consistent in the free-nucleon case, since the parton-model hypothesis that the quark transverse momenta do not grow with Q² is satisfied [12], and our model for medium modifications does not damage this equivalence.
The antiquark distribution is given by Δq̄(x) = Δq(−x), where the sum is over unoccupied states. The use of a finite basis causes the distributions to be discontinuous. These distributions are smooth functions of x in the limit of infinite momentum cutoff and box size, but numerical calculations are made at finite values and leave some residual roughness. This is overcome in Ref. [12] by introducing a smoothing function. We deviate from their procedure and do not smooth the results; instead, we find that performing the one-loop perturbative QCD evolution [16] provides sufficient, but not complete, smoothing. Some residual fluctuations due to the finite basis remain visible in our results, and the size of these fluctuations serves as a guide to the size of the error introduced by the method.
These distributions are used as input at the model scale of Q² = M_PV² ≃ 0.34 GeV² for evolution to Q² = 10 GeV². The polarized structure function to leading order in N_C is given by the convolution of the nucleon momentum distribution f(y) with the bound-nucleon structure function g_1(x/y, Q², k_F); the ratio function, Eq. (13), is defined as the ratio of the in-medium result to that of a free nucleon.
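To make the smearing concrete, the toy sketch below convolutes an invented bound-nucleon g_1 with a narrow momentum distribution f(y) peaked near y ≈ 1, assuming the simplest convolution form g_1^A(x) = ∫ dy f(y) g_1(x/y) with no additional flux factors; both input functions are placeholders, not the model's output.

```python
import numpy as np

def convolve_structure_function(g1_bound, f_y, x, y_grid):
    """g1_A(x) = integral over y of f(y) * g1_bound(x/y), on a simple grid."""
    integrand = np.array([f_y(y) * g1_bound(x / y) for y in y_grid])
    dy = y_grid[1] - y_grid[0]
    return float(np.sum(integrand) * dy)      # rectangle rule, adequate for a sketch

# Toy inputs: a narrow Gaussian f(y) centred near y = 1 and a simple valence-like shape.
f_y = lambda y: np.exp(-((y - 0.97) ** 2) / (2 * 0.02 ** 2)) / np.sqrt(2 * np.pi * 0.02 ** 2)
g1_free = lambda x: x ** 0.5 * (1 - x) ** 3 if 0 < x < 1 else 0.0

y_grid = np.linspace(0.5, 1.5, 2001)
print(convolve_structure_function(g1_free, f_y, 0.3, y_grid))
```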
The nucleon momentum distribution f(y) in light polarized nuclei has been calculated in Ref. [17]. Here, the nucleon momentum distribution is assumed to be the same as in the unpolarized case, as the effects of the spin-orbit force will tend to average out in nuclear matter. We can also justify this approximation in nuclear matter because the zero-pressure condition P⁺ = P⁻ for a nucleus with momentum P in the rest frame, which implies the light-cone version of the Hugenholtz-van Hove theorem [18], is still true. Therefore, one expects a distribution f(y) that is peaked at y ≃ 1, like those in Ref. [17]. This peak location is the dominant effect on the ratio, Eq. (13); the remaining details of the function f(y) have only a small effect. Following a light-cone approach valid for any mean-field theory of nuclear matter for which the density and binding energy per nucleon are the only input parameters [18], one obtains the momentum distribution f(y) used here. Fig. 1 compares a 'valence' bound-nucleon distribution with the full one: the latter includes all medium modifications, while the former uses the medium-modified energy-level eigenstate but the same free-nucleon sea quark distribution for both the free and bound nucleon. This was done in order to compare our results with the model in Ref. [3], which has only valence quarks at the model scale. The single energy level actually has a contribution to the polarized antiquark distribution, so it alone cannot be considered a true valence spin structure function. However, this contribution is small, so we effectively reproduce the result of a valence quark model, especially in the region x ≳ 0.3. In Fig. 1, one can see that there is a large depletion for 0.3 ≲ x ≲ 0.7 in the polarized 'valence' quark distribution. This produces a large depletion in the isovector axial coupling g_A of 17.8%. This large effect is comparable to that of the calculation in Ref. [3], which includes only valence quarks at the model scale. This valence effect is mitigated by a large enhancement in the sea quark contribution, so that the full polarized distribution has only a moderate depletion in the region 0.3 ≲ x ≲ 0.7, of the same size as the EMC effect in unpolarized nuclear structure functions. There is a large enhancement for x ≲ 0.3 due to the sea quarks. This large enhancement is very different from the small effect calculated in the unpolarized case [4] and seen in unpolarized Drell-Yan experiments [19]. This would suggest that one might see a significant enhancement in a polarized Drell-Yan experiment, even after including shadowing corrections (which we address later). The larger sensitivity to the lower components of the wave functions is the primary source of the greater sea quark enhancement in the polarized case, in contrast to the unpolarized case.
The axial coupling g_A^(3) is enhanced by 9.8% in the nuclear medium. This is in accord with an earlier finding of a ~25% enhancement of g_A in a different soliton model by Birse [20]. There, the effect is also seen as a competition between enhancement and depletion. In order to address the medium modification of the Bjorken sum rule [21,22], Eq. (15), expressed as an integral of the experimentally observed nuclear distribution, one must account for the effects of shadowing. This occurs when the virtual photon striking the nucleus fluctuates into a quark-antiquark pair over a distance ∼ 1/(2 M_N x) exceeding the inter-nucleon separation. This causes a depletion in the structure function for x ≲ 0.1 and is relatively well understood [23,24,25]. Shadowing in the polarized case is expected to be larger than in the unpolarized case by roughly a factor of 2, simply from the combinatorics of multiple scattering (see e.g. Ref. [26]).
The enhancement at x ∼ 0.1-0.2 in Fig. 1 is comparable to that seen by Guzey and Strikman [26]; they assume that the combined effects of shadowing, enhancement, and target polarization lead to the empirical value of the nuclear Bjorken sum rule for ³He and ⁷Li. Shadowing effects become large for x ≲ 0.05, but we ignore them, as well as target polarization; such precision is not necessary for our relatively qualitative analysis. One needs ∼10 times the shadowing observed in the unpolarized case for lead in order to counter the enhancement at x ∼ 0.1-0.2 and give the same value for the Bjorken sum rule (15) in matter and free space. This assumes that shadowing is the only significant effect neglected at small x in our calculation of the unpolarized quark distribution [4].
We also present, in Fig. 2, the results for the spin asymmetry, Eq. (16).
The nuclear asymmetry A_1^(p|A) is defined by replacing the polarized and unpolarized quark distributions, represented generically as q, by their in-medium counterparts. We find that for the free case the calculation falls slightly below the data, due to the smaller value of g_A in the large-N_C limit, and that the size of the medium modification is of the same order as the experimental error for the free proton [27,28]. [Fig. 2 caption: the asymmetry of Eq. (16) at scale Q² = 10 GeV². The heavy line is for nuclear matter; the dashed line is for the free proton. The data are for the free proton from SLAC [27] (filled), for Q² ∼ 1-40 GeV², and HERMES [28] (empty), for Q² ∼ 1-20 GeV². The free curve falls slightly below the data due to the lower value of g_A calculated in the large-N_C limit.]
The central mechanism to explain the EMC effect is that the nuclear medium provides an attractive scalar interaction that modifies the nucleon wave function. We see this again in the polarized case. This is also the dominant mechanism in the model of Cloet et al. [3] and in the soliton model of Birse [20].
The present model provides an intuitive, qualitative treatment that maintains consistency with all of the free-nucleon properties calculated by others [8,9]. It provides a reasonable description of nuclear saturation properties, reproduces the EMC effect, and satisfies the constraints on the nuclear sea obtained from Drell-Yan experiments, with only two parameters for the nuclear physics (g_s and g_v) fixed by the binding energy and density of nuclear matter. Therefore, we expect the results presented here to manifest themselves in future experiments with polarized nuclei. Our conclusions differ from those in Ref. [3]; the main difference is the role of sea quarks at the model scale. Therefore, we also expect that future experiments will help determine the role of sea quarks in nuclei.
for suggesting the problem to us. | 2019-04-14T03:09:22.709Z | 2005-05-16T00:00:00.000 | {
"year": 2005,
"sha1": "32357f630d8aa5e29b607d92ceb2b3a0e347d6e9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-th/0505048",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "475e4dfaef9f6cebc5ef396c6858c00cbbc54e98",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255809952 | pes2o/s2orc | v3-fos-license | EcoTILLING by sequencing reveals polymorphisms in genes encoding starch synthases that are associated with low glycemic response in rice
Glycemic response, a trait that is tedious to assay in cereal staples, has been identified as a factor correlated with the alarmingly increasing prevalence of Type II diabetes. Reverse genetics-based discovery of allelic variants associated with this nutritional trait gains significance as it can provide scope for genetic improvement of this factor, which is otherwise difficult to target through routine screening methods. Through EcoTILLING by sequencing in 512 rice accessions, we report the discovery of six deleterious variants in genes with the potential to increase Resistant Starch (RS) and reduce the Hydrolysis Index (HI) of starch. By deconvolution of the variant-harbouring EcoTILLING DNA pools, we discovered accessions with a minimum of one to a maximum of three deleterious allelic variants in the candidate genes. Through biochemical assays, we confirmed the potential role of the discovered alleles, alone or in combination, in increasing RS, the key factor for reduction in glycemic response.
Background
Rice is the most important cereal staple for more than half the world's population. As a primary dietary source of carbohydrates, it plays an important role in meeting energy requirements and nutrient intake among rice-eating populations [1]. Cooked rice is readily digested because it contains a higher proportion of digestible starch (DS) and a lower proportion of RS [2]. Many studies have reported that RS plays an inhibitory role in the interaction of α-amylase, a predominant starch-metabolising enzyme in the human gut, with the carbohydrates of many cereals including rice, resulting in slow digestibility of starch [3]. Many animal studies have reported RS in cereal grains to be the functional equivalent of dietary fibre [4][5][6][7].
In the past, dietary carbohydrates were derived from whole coarse grains of rice, which were loaded with sufficient dietary fibre. At present, they are replaced predominantly with milled white rice carbohydrates devoid of any dietary fibre [8][9][10]. Studies involving human subjects on the causative factors for the high prevalence of type II diabetes in Asia indicated the consumption of milled white rice as one of the major factors [11][12][13]. The uninhibited interaction of α-amylase with the carbohydrates from milled white rice, leading to rapid release of glucose into the bloodstream, was demonstrated as the mechanism for diabetes incidence in many animal studies [6,[14][15][16][17].
Increasing the RS level in the endosperm of cereal staples, including rice, is envisaged as an essential target for quality improvement of their starch in the context of human health [18]. Characterisation studies of cereal starches with high RS have indicated two major biochemical components to be positively associated with this desirable fraction. The studies of Miller et al. [19], Leeman et al. [20] and Lehman and Robin [21] provided conclusive evidence for a positive correlation of amylose with RS enhancement, while other characterisation studies in cereals demonstrated that an increased proportion of short chains and a decrease in intermediate and long amylopectin chains also play a vital role in increasing RS content [22].
In rice, it is surprising to note that many of the indica varieties, in spite of the intermediate to high amylose content (AC) (20-30%) of their grains, do not show much reduction in starch digestibility and remain rapid in their glycemic response [23]. The findings of Chung et al. [24], based on their study of rice varieties with varied amylose contents, clearly indicated that, apart from AC, a higher proportion of short-chain amylopectin is also a critical factor for reduced starch digestibility. This warrants the exploration of rice varieties with high AC along with an increased proportion of short-chain amylopectin to reduce glycemic response.
Natural allelic variants are more stable in their expression than induced mutations, as they have been generated and stabilised over a long course of evolution [25]. The classical example of the isolation and use of path-breaking natural gene variants is the discovery of dwarfing genes such as Dee-geo-woo-gen in rice and Norin 10 in wheat, which led to the green revolution during the 1960s [26]. More recently, the isolation of the sub-1 gene, leading to the development of submergence-tolerant rice varieties, is another demonstration of the discovery and use of natural allelic variants from germplasm [27]. As natural variants occur at an extremely low frequency, the power of allele mining to discover them has to be enhanced by applying modern genomic tools. Genomics-assisted allele mining approaches, when applied in a reverse genetic mode, result in enhanced power of detection and provide scope for high-throughput screening of large germplasm collections in a short time frame [28]. Isolation of natural sequence allelic variants in targeted candidate genes has been successfully demonstrated through EcoTILLING in many plants such as Arabidopsis [29], banana [30], Populus [31], field bean [32], mung bean [33], barley [34], potato [35], Cucumis spp. [36], tomato [37], sugar beet [38] and also in rice [39].
The conventional TILLING and EcoTILLING methods using CELI endonuclease-based heteroduplex cleavage are less effective and labour intensive, and hence challenging to employ on large mutant and germplasm DNA pools. To overcome the difficulties of the conventional TILLING approach, Tsai et al. [40] demonstrated TILLING by high-throughput sequencing in large mutant populations of rice and wheat. Recently, TILLING by sequencing was also employed for the identification of allelic variants responsible for abiotic and biotic stress resistance in peanut [41].
In the present investigation, we employed EcoTILLING by sequencing of candidate genes for the discovery of potential nucleotide variations associated with low glycemic response in rice. Our candidate gene selection was based on the studies of Sestili et al. [42], Regina et al. [43] and Satoh et al. [44] in wheat, barley and rice mutants generated through gene silencing and knockout technologies. These studies reported many potential loss-of-function mutations in the genes coding for Starch Synthases (SS) and Starch Branching Enzymes (SBEs) associated with the enhancement of RS.
Variant discovery through EcoTILLING by sequencing
To identify natural allelic variants in the starch biosynthesis genes of rice, we performed EcoTILLING by sequencing in 512 indica rice germplasm accessions representing landraces, breeding lines, cultivars and exotic collections (Additional file 1: Table S1). The EcoTILLING regions identified in all six candidate genes as having a high probability of harbouring variants, as indicated by their high Position Specific Scoring Matrix (PSSM) difference, are presented in Table 1. The positions and lengths of the EcoTILLING fragments of all six candidate genes are indicated in Fig. 1. EcoTILLING fragments were successfully amplified using targeted primers (Additional file 1: Table S2) through touchdown PCR to minimize off-target amplification, as recommended by Don et al. [45] (Fig. 2). Various cycling conditions and master mix combinations were optimised for the different candidate genes (Additional file 2: Table S4; Additional file 3: Table S5). The amplified PCR products were cleaned up and pooled to produce 16 libraries. The libraries were individually barcoded, pooled and sequenced to assess the variants.
The average number of reads generated by Ion Proton sequencing from the 16 super-pooled DNA libraries varied from 2.20 to 6.96 million, with average read lengths varying from 81 to 98 bp (Additional file 4: Table S6). The average depth of coverage per accession was 264.09, which surpassed the suggested minimum of 10× reads per base [40], indicating that the variants discovered in this investigation carry very high confidence.
From 20.4 kb of EcoTILLING regions spanning the six candidate genes, 72 natural variants (60 SNPs and 12 single-base indels) were discovered (Additional file 5: Table S7). Among the 60 SNPs, transitions comprised 13 each of T→C and G→A, eight A→G and six C→T, while transversions comprised seven each of T→G and C→A, three T→A, two G→C and one A→C. All 12 single-base indels discovered were deletions.
Prediction of deleterious variants
The positional analysis of the nucleotide variants indicated that 23.6% were located in exons and 76.4% in introns. Further functional analysis of the exonic mutations indicated that 64.8% were silent and 35.2% were deleterious variants. The predicted deleterious variants, along with their deconvolved accessions, are furnished in Table 2 and Fig. 3. Four sequence variants observed in the GBSSI gene were regarded as null with respect to amino acid changes, as they were synonymous. Seven variants of the SSI gene were exon-residing SNPs, which included two missense and five silent variants. The predicted amino acid substitutions, viz. Glycine → Serine at the 319th residue in accessions Os-578 and Os-631 and Tyrosine → Histidine at the 420th residue in accessions Os-076, Os-468 and Os-678, resulting from the single-base substitutions G3538A and T4127C, respectively, were found to be deleterious with SIFT scores of 0.00. Two of the four SNPs discovered in the SSIIa gene (G3797A and G4196A) were missense variants and both were predicted as deleterious with SIFT scores of 0.00; they resulted in the amino acid changes Glycine → Serine at the 604th residue in accessions Os-211 and Os-468 and Valine → Methionine at the 737th residue in accessions Os-365 and Os-495, respectively. Furthermore, a single-base deletion (G3761-) in accession Os-351, which resulted in a frameshift, was also found to be deleterious. In the SSIIIa gene, a single nucleotide variant (T3559A) borne by accessions Os-468, Os-495 and Os-578, which resulted in the alteration of Valine to Glutamic acid at the 843rd position of the protein, was also deleterious. Although eight sequence variants were observed in SBEIa and SBEIIb, none of them were predicted by SIFT analysis to be deleterious to protein function. All the deleterious variants in this investigation were predicted with SIFT (Sorting Intolerant From Tolerant), a bioinformatic pipeline that predicts whether an amino acid substitution affects protein function. It uses an algorithm that accounts for the tolerance of amino acid substitutions in relation to their physical properties. The predicted SIFT score ranges from 0 to 1; an amino acid substitution is predicted to be damaging if the score is < 0.05, and tolerated if the score is > 0.05.
Biochemical characterisation
Grains from the germplasm accessions carrying deleterious variants, along with two positive-control mutants (RSM 271 and RSM 311) and the negative-control rice cultivar Pooja, were subjected to biochemical analysis. Results pertaining to the parameters related to starch digestibility are presented in Table 3. The cultivar Pooja, with no variants in any of the SS genes, recorded the lowest RS content of 2.5% and the highest HI of 58.2%. The RS content of accessions carrying SNP variants in a single SS gene (Os-076, Os-211, Os-351, Os-631, Os-363, and Os-678) varied from 4.1 to 6.1%, and their HI was moderately high (40.8 to 47.7%). Accessions with variants in two SS genes (Os-495, Os-578 and RSM 271) registered higher values of RS (6.8 to 7.4%) and relatively lower HI (42.3 to 46.5%). The accessions with SNP variants in all three SS genes (Os-468 and RSM 311) were found to possess the highest RS contents (7.5 to 7.6%) and registered very low HI values (36.3 to 37.8%).
Discussion
In this investigation, we attempt to unravel the genetic factors responsible for slow digestibility of rice starch in order to utilise them in breeding this popular cereal for health benefits. Recently, the reverse genetic approach TILLING, when performed with high-throughput sequencing, proved very effective for the detection of mutations in large rice and wheat mutant populations [40]. EcoTILLING, also a reverse genetic method derived from the principles of TILLING, is very useful for high-throughput discovery of rare alleles in naturally evolved populations [46]. In this study, we employed EcoTILLING by sequencing for the first time in rice germplasm to discover rare alleles associated with slow starch digestibility.
In this investigation, we discovered 72 natural variants, comprising 60 SNPs and 12 single-base indels, by exploring 20.4 kb of target gene sequences in 512 germplasm accessions. Among the candidate gene targets, we observed a remarkably higher number of sequence variants in the genes coding for starch synthases (64) than in those coding for starch branching enzymes (8). A similar trend in variant frequencies was reported by Kharabian-Masouleh et al. [47], wherein 286 variants in starch synthases and only 94 variants in starch branching enzymes were discovered in 233 rice breeding lines. The high frequency of natural variants observed in starch synthases is postulated to reflect the ability of a wild-type allele to complement the loss of function of mutant forms and vice versa. In contrast, the genes coding for starch branching enzymes possess non-redundant functions and hence lack the potential for complementation, as demonstrated in Arabidopsis [48] and wheat [49].
Enhanced expression of short-chain amylopectin has been demonstrated to be associated with low glycemic response in many cereals [22]. An earlier study in rice revealed that a knockout mutant of the SSI gene produced altered amylopectin composition in the rice endosperm, with a tendency for enhanced short chains, without affecting grain morphology and test weight [50]. Two natural SSI missense variants isolated for the first time in this study, the G3538A substitution (Glycine → Serine at the 319th amino acid residue) in the germplasm accessions Os-578 and Os-631 and the T4127C substitution (Tyrosine → Histidine at the 420th residue) in the three accessions Os-076, Os-468 and Os-678, are expected to carry potential for altered short-chain amylopectin composition. These natural allelic variants of the SSI gene could be deployed for the development of non-transgenic rice cultivars with lower glycemic index (GI).
In a comparative study between indica and japonica cultivars, Nakamura et al. [51] found that all the japonica accessions carried a serine residue instead of the glycine residue found in indica types at the 604th amino acid position, resulting from a G3797A substitution in the SSIIa gene. Upon characterisation of their amylopectin chain lengths, they found an increased proportion of short chains of DP 6-12 and decreased longer amylopectin chains of DP 13-24 in all the japonica cultivars carrying this variant. The same G3797A substitution was discovered in the indica accessions Os-211 and Os-468 for the first time in this study. This allele could also be deployed in indica rice breeding programmes for reducing GI in rice. Furthermore, the single-base deletion variant (G3761-), resulting in a frameshift with loss of the glycine residue at the 592nd amino acid position, could also be a potential allele for altering the glycemic response in rice.
Gene expression pattern analyses in many studies using japonica rice suggest that SSIIIa plays an important role during the starch-filling phase of the developing endosperm through its contribution to amylopectin synthesis [52][53][54]. It has been reported that deleterious mutations in this gene can cause inefficiency in grain filling, which results in loosely packed starch with high chalkiness [55]. In contrast, Fujita et al. [56] characterised two mutants of SSIIIa in the japonica background through protein quantification studies. They found that the reduced activity of SSIIIa in the mutant endosperm was accompanied by a compensatory enhancement of GBSSI and SSI activities in both mutants. In these mutants, they also reported a significant increase in the molar ratio of short-chain amylopectin in comparison to the longer counterparts. In the accessions Os-468, Os-495 and Os-578, we discovered a missense variant (T3559A) which resulted in the alteration of Valine to Glutamic acid at the 843rd position of the protein.
These accessions were characterised as free from chalkiness (data not shown). The lack of chalkiness in these accessions could be attributed to the compensatory mechanism of GBSSI and SSI, which are reported to exhibit multi-fold expression in indica varieties, leading to little or no yield penalty. Such a compensatory mechanism is also evident in the control mutants RSM 271 and RSM 311, which show normal grain size and morphology without chalkiness in spite of carrying three and four deleterious variants in the SSIIIa gene, respectively.
The grains from the 12 germplasm accessions carrying deleterious variants were subjected to biochemical analysis for determination of RS content and starch digestibility through in vitro enzymatic studies (Table 3). Amylose content, an important parameter positively associated with RS expression, varied from intermediate to high (22.8 to 27.2%). The absence of low-amylose and waxy types can be attributed to the few or no deleterious variants in the GBSSI gene, as is commonly observed in indica rice varieties. Since GBSSI is the only gene postulated to govern amylose synthesis in rice [57], complementation of loss-of-function mutations is remote, unlike the case of the other starch synthases (SSI, SSIIa and SSIIIa) governing amylopectin synthesis.
Test accessions in this study revealed considerable variation in RS (4.1 to 7.6%) and HI (37.8 to 47.7%) in spite of the smaller variation in AC. In contrast to many investigations in cereal germplasm [58][59][60] which indicated a positive correlation between AC and RS, their association in this study was negative (r = -0.316). The reason may be that the previous studies included representative accessions in all AC classes, including low-amylose and waxy types.
It is interesting to note that the amylose-independent variation observed in RS and HI among the intermediate and high AC types was found to depend on the number of variants harboured in each of the SS-coding genes and also on the number of genes carrying the variants. For example, the control cultivar Pooja, which does not harbour any variant in the SS-coding genes, recorded the lowest RS content (2.5%) and the highest HI (58.2%). The six accessions (Os-076, Os-211, Os-351, Os-363, Os-631 and Os-678) carrying variants in a single gene expressed moderately higher values of RS (4.1 to 6.1%) and HI (43.9 to 47.7%), whereas the accessions (Os-495, Os-578 and RSM 271) with variants in two genes expressed high values of RS (6.0 to 6.8%) and relatively lower HI (42.3 to 42.5%). The accessions (Os-468 and RSM 311) with variants in all three SS-coding genes were found to possess very high RS (7.6%) and very low HI (37.8%). The hydrolysis index (HI) is an in vitro biochemical determinant that estimates the rate of starch digestion of starchy foodstuffs [61]. Various authors have suggested that in vitro starch hydrolysis methods can be useful for predicting the in vivo glycemic response of starchy staples [62,63].
An earlier study in rice indicated that each SS-coding gene plays a partially overlapping role in the synthesis of the amylopectin fraction of starch. Zhang et al. [64], by repression of genes through RNAi, established that SSIIa and SSIIIa interact with each other during starch synthesis, leading to the accumulation of amylopectin with variable molecular forms. In this investigation we have isolated, to the best of our knowledge for the first time, a genotype (Os-468) carrying mutations in all three SS-coding genes, viz. SSI, SSIIa and SSIIIa, which also exhibited very high levels of RS (7.6%) and extremely low HI (37.8%), with a possible predominance of short-chain amylopectin. This remains to be proven by determination of the degree of polymerization (DP) of the amylopectin of this elite germplasm line. The DP of amylopectin is a numerical indicator of chain length in terms of the number of constitutive monomeric glucose units. It determines many physico-chemical properties of grain starch, including retrogradation behaviour, pasting and swelling properties, and gelatinization temperature, along with enzymatic digestibility [65][66][67]. Many studies have indicated that the fine structure of amylopectin can alter the rate of starch digestibility in rice. Yang et al. [68] found that rice mutants high in RS exhibited an increased proportion of short-chain amylopectin compared with the proportion of long chains. Shu et al. [22], based on their study of six rice mutants with altered amylopectin fine structure, also established a similar relationship between RS content and an increased proportion of short-chain amylopectin with DP ≤ 12. Critical analysis of the structural chemistry of amylopectin in the genotype Os-468 will also provide concrete evidence for the postulated relationship between amylopectin fine structure, RS and starch digestibility.
Conclusion
We conclude that EcoTILLING by sequencing is a robust tool to survey allelic variants in target genes across large germplasm panels in rice. The accessions discovered here, carrying multiple missense variants in genes encoding starch synthases, have the potential to reduce the glycemic response of rice starch.
Plant materials
Seeds of 837 Oryza sativa germplasm accessions from 5 different continents (Asia, Africa, North America, South America and Australia), representing 18 countries, were obtained from two different sources, viz. the Paddy Breeding Station, Tamil Nadu Agricultural University (TNAU), Coimbatore, Tamil Nadu, India and the Ramiah Gene Bank, Department of Plant Genetic Resources, TNAU, Coimbatore, India. Two high-RS-expressing mutants, viz. RSM 271 and RSM 311, isolated recently at our laboratory through gamma irradiation, were included as positive controls. A rice cultivar, Pooja, with very low RS (unpublished) was included as a negative control for comparison. These accessions were raised in a single-row trial. Based on observations of flowering, seed set and plant morphology (data not shown), a total of 547 accessions were found to be photo-insensitive and suitable for further multiplication. Out of these 547 accessions, we randomly selected 512 accessions belonging to the indica type for EcoTILLING by sequencing (Additional file 1: Table S1).
DNA extraction and normalization
Total genomic DNA from chosen 512 accessions was extracted from the leaf tissues using DNeasy 96 Plant kit (Qiagen, Valencia, CA, USA) following the manufacturer's protocol. The DNA concentration was measured with Tecan Infinite M200 pro multimode reader (Tecan, Switzerland) using a nano quant plate. After assessment of the concentration, DNA samples were normalized by dispensing different volumes of water in DNA samples using a Tecan Freedom Evo75 robotic liquid handling system (Tecan, Switzerland).
Pooling and super pooling of genomic DNA
The bidimensional pooling strategy of Tsai et al. [40] was adopted with slight modifications. We combined equivalent amounts of concentration-normalized DNA from eight germplasm accessions per well to make one 64-well pool plate in a symmetrical 8 × 8 format instead of the regular 8 × 12 (96-well) microplate format. Genomic DNAs were further pooled by collapsing the rows (8 wells × 8 individuals = 64 individuals) and columns (8 wells × 8 individuals = 64 individuals) of this plate, which resulted in 16 template super pools.
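A minimal sketch of the bookkeeping behind this 8 × 8 bidimensional pooling is given below. The sequential well-filling order, pool names and accession indices are illustrative assumptions only (the actual plate layout is not specified here); the point is that a variant detected in one row super pool and one column super pool narrows the carrier down to the eight accessions of a single well, which are then resolved individually by Sanger sequencing.

```python
# Illustrative mapping of 512 accessions into an 8 x 8 plate with 8 accessions
# per well, and of wells into 8 row pools plus 8 column pools (16 super pools).

ACCESSIONS_PER_WELL = 8
N_ROWS = N_COLS = 8

def well_of(accession_index):
    """Return (row, col) of the well holding a given accession (0-511)."""
    well = accession_index // ACCESSIONS_PER_WELL
    return divmod(well, N_COLS)                    # (row, col)

def super_pools_of(accession_index):
    """Each accession contributes to exactly one row pool and one column pool."""
    row, col = well_of(accession_index)
    return f"row_pool_{row}", f"col_pool_{col}"

def candidate_accessions(row_pool, col_pool):
    """A variant seen in one row pool and one column pool narrows the search to
    the 8 accessions in the intersecting well (deconvolution step)."""
    row = int(row_pool.split("_")[-1])
    col = int(col_pool.split("_")[-1])
    first = (row * N_COLS + col) * ACCESSIONS_PER_WELL
    return list(range(first, first + ACCESSIONS_PER_WELL))

print(super_pools_of(300))                         # ('row_pool_4', 'col_pool_5')
print(candidate_accessions("row_pool_4", "col_pool_5"))
```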
Selection of candidate genes and their sequences
Through a literature search, we identified putative candidate genes associated with RS expression in rice [54,[69][70][71][72]. The list of chosen genes with their putative functional effects on RS is presented in Table 1. The nucleotide sequences of the gDNA and full-length cDNAs of the candidate genes were retrieved from NCBI GenBank. Sequences of these genes were utilized for building gene models and designing primers.
Discovery of EcoTILLING fragments, designing primers and PCR amplification
The EcoTILLING gene regions with the maximum probability of missense variants were identified using the CODDLE bioinformatics pipeline (http://blocks.fhcrc.org/proweb/). The primers for PCR amplification of the EcoTILLING fragments were designed with PRIMER 3 software (Additional file 6: Table S2).
Equimolar pooling of PCR products and sequencing of libraries
The concentrations of the PCR products were quantified using the Qubit dsDNA BR assay system (Invitrogen, Carlsbad, CA) to eliminate over-estimation resulting from free nucleotides in the PCR products. The amplified EcoTILLING fragments were normalized and pooled equimolarly, gene-wise, maintaining the super-pool identity.
Sequencing library preparation was carried out using the Ion Xpress™ Fragment Library Kit with 100 ng of super-pooled DNA. Adapter ligation, size selection, nick repair and amplification were performed as per the manufacturer's instructions (Ion Xpress™ Fragment Library Kit, Part Number 4469142 Rev. B). Size selection was executed using the LabChip XT (Caliper Life Sciences, USA) and the LabChip XT DNA 750 Assay Kit (Caliper Life Sciences, USA), with collection between 175 bp and 220 bp. The Agilent 2100 Bioanalyzer (Agilent Technologies, USA) and the manufacturer-recommended High Sensitivity DNA Kit (Agilent Technologies, USA) were used to determine the quality and concentration of the libraries. Emulsion PCR and enrichment steps were carried out using the Ion Xpress™ Template Kit, following its associated protocol (Part Number 4469004 Rev. B). Individual libraries were barcoded using the Ion Xpress™ Barcode Adapters Kit. Sequencing was carried out on the Ion Proton™ with 10 GB data output using an Ion 316™ Chip. The Ion Sequencing Kit v2.0 was used for the sequencing reactions of all 16 libraries as per the manufacturer's instructions.
SNP calling and mutation discovery
After sequencing of the libraries, filtering, trimming and alignment of the sequence reads against their reference sequences were carried out using Torrent Suite 1.5. After alignment, the Variant Caller was used to filter SNPs from the aligned sequence contigs in comparison with their corresponding reference sequences. Parameters such as min-max distance, mismatch cost, length fraction and similarity were selected in order to minimize read-alignment ambiguities as well as to detect rare SNPs. The minimum variant frequency and minimum coverage were set to 0.5 and 20, respectively, so that variations at or above 0.5% within a pool were considered as SNPs. The candidate gene sequences of Pooja (a line with the lowest RS content of 2.5%) were used as the reference for variant calling.
Functional analysis of SNP variants
Discovered sequence variants were analysed with the PARSESNP program (http://blocks.fhcrc.org/proweb/), which provides information on the location of each variant along with details of the amino acid changes. The severity of mutations was analysed with SIFT (Sorting Intolerant From Tolerant) (http://sift.jcvi.org/) with default parameters [73]. Amino acid substitutions with probabilities < 0.05 are predicted to affect protein function.
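As a minimal illustration of the SIFT classification rule just stated (score < 0.05 predicted damaging, otherwise tolerated), variant records such as those reported in the Results could be screened as follows; the record list below mixes a few of the scores reported above with one invented tolerated example and is not the study's actual data file.

```python
# Hypothetical variant records: (gene, nucleotide change, SIFT score).
variants = [
    ("SSI",    "G3538A", 0.00),
    ("SSI",    "T4127C", 0.00),
    ("SSIIa",  "G3797A", 0.00),
    ("SBEIIb", "C1021T", 0.41),   # invented example of a tolerated substitution
]

SIFT_DAMAGING_THRESHOLD = 0.05

def classify(score):
    """SIFT convention used in the text: < 0.05 damaging, otherwise tolerated."""
    return "deleterious" if score < SIFT_DAMAGING_THRESHOLD else "tolerated"

for gene, change, score in variants:
    print(f"{gene} {change}: SIFT={score:.2f} -> {classify(score)}")
```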
Deconvolution from EcoTILLING pools
To identify the individual germplasm accessions carrying natural allelic variants from the prospective pools, individual genomic DNA of the eight constituent accessions was subjected to PCR amplification with primers designed for a short target (~600 bp) spanning the variant region of each target gene. Sanger sequencing was performed using the BigDye® Terminator version 3.1 cycle sequencing kit (Applied Biosystems, USA) on an ABI3730L (96-well) sequencer (Applied Biosystems, USA) according to the manufacturer's protocols. By comparing the sequences of the individual PCR amplicons after alignment with their reference sequence, the variant-carrying accessions were identified for subsequent characterization.
Biochemical characterization of variants
Biochemical traits measured were total starch content, amylose content (AC), resistant starch (RS) and hydrolysis index (HI). Total starch content was determined according to AACC International Method 76-13.01. AC was determined by high-performance size-exclusion liquid chromatography as described by Demeke et al. [74]. The RS content was estimated on a dry-weight basis following Goni et al. [75], using the Megazyme RS assay kit (Cat#K-RSTAR; Megazyme International Ireland Ltd., Ireland). The in vitro starch hydrolysis rate and HI were determined according to Goni et al. [61]. In vitro enzymatic hydrolysis at different time points (0, 30, 60, 120 and 240 min) was carried out to predict the rate of starch digestibility, which is expressed as HI by comparison with the rate of digestibility of white bread; this is considered to be the in vitro equivalent of a GI estimate. All determinations were done in three biological replicates, with two independent observations for each replicate.
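Following the cited method, HI is in essence the area under the sample's 0-240 min hydrolysis curve expressed as a percentage of the area under the white-bread reference curve measured over the same time points. A minimal sketch of that calculation is given below; the hydrolysis percentages used in the example are invented placeholders, and trapezoidal integration is assumed.

```python
def auc_trapezoid(times_min, hydrolysis_pct):
    """Area under a starch-hydrolysis time course by the trapezoidal rule."""
    area = 0.0
    for (t0, h0), (t1, h1) in zip(zip(times_min, hydrolysis_pct),
                                  zip(times_min[1:], hydrolysis_pct[1:])):
        area += 0.5 * (h0 + h1) * (t1 - t0)
    return area

def hydrolysis_index(times_min, sample_pct, reference_pct):
    """HI = AUC(sample) / AUC(white-bread reference) * 100."""
    return 100.0 * auc_trapezoid(times_min, sample_pct) / auc_trapezoid(times_min, reference_pct)

# Time points used in the study; hydrolysis percentages below are placeholders.
times  = [0, 30, 60, 120, 240]
sample = [0, 18, 30, 45, 58]       # hypothetical rice accession
bread  = [0, 40, 62, 80, 92]       # hypothetical white-bread reference
print(round(hydrolysis_index(times, sample, bread), 1))
```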
Statistical analysis
Duncan's multiple range test (DMRT) was carried out using MINITAB 16 to distinguish the mean differences between the accessions. | 2023-01-15T15:07:01.832Z | 2017-01-14T00:00:00.000 | {
"year": 2017,
"sha1": "09683358ac125ae14daad89547bb88c1ff2c6c4b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12870-016-0968-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "09683358ac125ae14daad89547bb88c1ff2c6c4b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": []
} |
53473746 | pes2o/s2orc | v3-fos-license | An approximate method for calculating transfer integrals based on the ZINDO Hamiltonian
In this paper we discuss a method for calculating transfer integrals based on the ZINDO Hamiltonian which requires only a single self consistent field on an isolated molecule to be performed in order to determine the transfer integral for a pair of molecules. This method is compared to results obtained by projection of the pair of molecules' molecular orbitals onto the vector space defined by the molecular orbitals of each isolated molecule. The two methods are found to be in good agreement using three compounds as model systems: pentacene, ethylene and hexabenzocoronene.
I. INTRODUCTION
In a disordered material, such as a glass of small molecules or a conjugated polymer film, charge transport can be modeled as a series of discrete hops on an idealized lattice; rates are controlled by parameters distributed according to some empirical distribution chosen to fit the experimentally measured field and temperature dependence of the mobility [1,2,3]. The fundamental mechanism underpinning charge transport in many disordered organic solids is thought to be small-polaron hopping, which, in the high-temperature limit, can be described by the Marcus equation [4], in which J represents the transfer integral, ΔE the difference in site energies, λ the reorganization energy, and all other symbols have their usual meanings. In our opinion, it would be a significant improvement if the parameters of this equation could be calculated for realistic morphologies, helping to clarify the relationship between chemical structure and charge mobility and reducing the number of free parameters in the modeling of data; the difficulty lies in the fact that simulation volumes can contain millions of molecules and therefore these parameters must be calculated using efficient, fast algorithms. In this paper we wish to discuss a computational prerequisite to solving the dynamics of electron motion in a disordered medium: the design of efficient algorithms for computing the transfer integral J.
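As a worked illustration of the quantities named above, the sketch below evaluates one standard non-adiabatic form of the Marcus rate, k = (2π/ħ) J² (4πλk_BT)^(-1/2) exp[-(ΔE + λ)²/(4λk_BT)]. Sign and prefactor conventions differ between authors, so this should be read as a representative textbook form rather than the exact equation quoted here; the numerical values in the example are arbitrary.

```python
import math

HBAR = 1.054_571_8e-34   # J*s
KB   = 1.380_649e-23     # J/K
EV   = 1.602_176_6e-19   # J per eV

def marcus_rate(J_eV, dE_eV, lam_eV, T=300.0):
    """Non-adiabatic Marcus hopping rate in s^-1.

    J_eV  : transfer integral, dE_eV : site-energy difference,
    lam_eV: reorganization energy, all in eV.
    """
    J, dE, lam = J_eV * EV, dE_eV * EV, lam_eV * EV
    kT = KB * T
    prefactor = 2.0 * math.pi / HBAR * J * J / math.sqrt(4.0 * math.pi * lam * kT)
    return prefactor * math.exp(-(dE + lam) ** 2 / (4.0 * lam * kT))

# Example: J = 10 meV, resonant sites (dE = 0), lambda = 0.2 eV, room temperature.
print(f"{marcus_rate(0.010, 0.0, 0.2):.3e} s^-1")
```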
The method is based on the ZINDO Hamiltonian [5] and makes some approximations to allow the calculation of transfer integrals without the necessity of performing self-consistent field (SCF) calculations on pairs of molecules: only a single SCF calculation on one isolated molecule will be performed. It will require only the calculation of atomic overlaps and can be thought of as being based on the calculation of molecular orbital overlap; for this reason we dub the method Molecular Orbital Overlap (MOO). The ZINDO Hamiltonian has been used extensively to calculate transfer integrals [6], even though recently it has become apparent that simply taking the splitting of the top two molecular orbitals is not always accurate because of polarization effects; another method, used by Siebbeles and coworkers, exploits the molecular fragment capabilities of ADF to calculate J [7], although it has been pointed out by Valeev and co-workers that this ought to be corrected for molecular overlap [8]. In this paper we will show how to rewrite the Fock matrix from the ZINDO method in terms of localized monomer orbitals by orbital projection, obtaining results similar to those from the molecular fragment method. We will compare these results with those from MOO, showing that the agreement is very good. The model systems we study are ethylene, pentacene and hexabenzocoronene (HBC). It should also be noted that we have carried out these tests only for pairs of identical molecules and that the following derivations are labelled accordingly; extension to the general case is trivial.
II. METHOD
The definition of the transfer integral for charge transport from molecule A to molecule B is J = ⟨Φ_A|H|Φ_B⟩ (equation 2), where H represents the Hamiltonian for the system, Φ represents the multi-electron wavefunction of the molecule, and the labels A and B denote whether the charge is localized on molecule A or B. Assume that the multi-electron wavefunctions are described by single Slater determinants and invoke the frozen-orbital approximation to argue that Φ_A and Φ_B differ only by the highest occupied molecular orbital (HOMO) on molecule A and molecule B, which will be singly occupied in molecule A and B, respectively. If we were interested in transport of negative charge, we would obviously use the lowest unoccupied molecular orbital (LUMO). Using the Slater rules [9] we can evaluate the previous equation as J = ⟨φ_A^homo|F|φ_B^homo⟩ (equation 3), where F represents the Fock matrix and φ_A^homo and φ_B^homo represent the HOMOs of molecules A and B, respectively. We will always consider the case of calculating transfer integrals for two Slater determinants which differ in one molecular orbital only; therefore, evaluating equation 3 is always going to be our task.
A. Projective Method
In order to solve equation 3 we will invoke the spectral theorem, project the molecular orbitals (MOs) of the dimer onto a basis set defined by the MOs of the individual molecules, and then, knowing the eigenvalues of the MOs of the dimer, reconstruct the Fock matrix in the basis set of the MOs of the individual molecules and simply read off J from the appropriate indices of the new Fock matrix. The basis set defined by the MOs of the individual molecules, C_loc, is the block-diagonal matrix whose blocks contain the MOs of molecule A and of molecule B, where φ^i_j labels the component in terms of atomic orbital (AO) j of molecular orbital i. The AOs are numbered so that the first N/2 orbitals are localized on molecule A and the second N/2 are localized on molecule B; similarly, the molecular orbitals localized on molecule A are labeled with the first N/2 labels and the ones on molecule B are labeled with the second N/2 labels. The localized orbitals are deduced from SCF calculations on the isolated molecules; since the two molecules are identical, only one SCF calculation is required, and the other set of orbitals can be obtained by rotating the orbitals of the first molecule according to the spatial orientation of the second.
In order to project the MOs of the dimer, C_dym, onto C_loc and obtain the orbitals of the dimer in the localized MO basis set, all we have to do is invoke the spectral theorem and obtain C_dym^loc = C_dym (C_loc)^t, where the superscript t denotes transposition and C_dym^loc represents the orbitals of the dimer in the localized basis set. All that is left to do is use the dimer eigenvalues ε_dym and rewrite the Fock matrix F in the new basis set to obtain the Fock matrix in the localized basis set, F_loc = (C_dym^loc)^t ε_dym C_dym^loc, where the eigenvalues ε_dym have been written in diagonal matrix form. Now transfer integrals can simply be read from the off-diagonal elements of this matrix: if we are interested in the transfer integral between the HOMO on molecule A and the HOMO on molecule B, and assuming that the HOMO is the i-th orbital of molecule A, we would simply read the element of F_loc coupling that orbital to the HOMO of molecule B.
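A minimal linear-algebra sketch of this projection is given below. It assumes an orthonormal AO basis (so that the projection reduces to plain matrix products), stores MOs as rows over AO columns as the indexing above suggests, and uses random placeholder matrices in place of real monomer and dimer SCF output; only the bookkeeping of the projection and the read-off of J is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ao = 8                     # total AOs in the dimer (first half on A, second on B)
half = n_ao // 2

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

# Placeholder "monomer" MOs: block-diagonal C_loc, MOs of A then MOs of B.
C_loc = np.zeros((n_ao, n_ao))
C_loc[:half, :half] = random_orthogonal(half)      # MOs of A over A's AOs
C_loc[half:, half:] = random_orthogonal(half)      # MOs of B over B's AOs

# Placeholder dimer SCF output: MO coefficients (rows) and eigenvalues.
C_dym = random_orthogonal(n_ao)
eps_dym = np.sort(rng.normal(size=n_ao))

# Project dimer MOs onto the localized-MO basis, then rebuild the Fock matrix
# in that basis:  F_loc = C^t diag(eps) C  with  C = C_dym @ C_loc.T
C_loc_dym = C_dym @ C_loc.T
F_loc = C_loc_dym.T @ np.diag(eps_dym) @ C_loc_dym

# Read off the element coupling (say) the last MO of A's block to the last MO
# of B's block; which indices correspond to the HOMOs depends on the ordering.
i_homo_A, i_homo_B = half - 1, n_ao - 1
print("J (placeholder data):", F_loc[i_homo_A, i_homo_B])
```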
B. Molecular Orbital Overlap calculation of J
In this section we explain how to evaluate equation 3 directly in the ZINDO approximation, and how a further approximation can be used to make running an SCF calculation on a dimer unnecessary. If we write the HOMOs on molecules A and B, labelling the AOs in the same fashion used in the definition of C_loc, we can see that the only elements of the Fock matrix which we need to calculate are those in the off-diagonal blocks of the Fock matrix connecting the AOs on molecule A with the AOs on molecule B. These elements necessarily involve AOs on different centres, and therefore take the form F_μν = ½(β_A + β_B) S̄_μν − ½ P_μν γ_AB, where S̄ represents the matrix of atomic overlaps with σ and π overlap between p orbitals weighted differently, A and B label the two atomic centres on which the atomic orbitals μ and ν are centred, β_A labels the ionization potential of atom A, P_μν labels the density matrix, and γ is the Mataga-Nishimoto potential. We assume that P_μν is block diagonal and therefore does not contribute to the elements of the Fock matrix we are interested in calculating. This assumption for the dimer orbitals will hold either if the dimer orbitals are identical to the monomer ones or if each pair of dimer orbitals can be written as a constructive/destructive combination of a pair of monomer orbitals. To see why the latter is the case, consider two particular occupied dimer orbitals φ_i and φ_{i+1} which are formed from the bonding and anti-bonding combinations of the occupied monomer orbitals φ_Aj and φ_Bj. The contribution of these two orbitals to the density matrix, P^{i,i+1}, is a sum of products of the bonding and anti-bonding combinations in which the cross terms between orbitals on A and orbitals on B cancel. Because the cross terms cancel and all monomer orbitals φ_A and φ_B are localized on one molecule only, this contribution will be block diagonal; and because all contributions to the density matrix are of this form, the density matrix will be, overall, block diagonal. The task of determining values for the Fock matrix has therefore been reduced to the comparatively simple task of determining S̄, the weighted atomic orbital overlap. Atomic overlaps between 1s, 2s and 2p orbitals can be determined analytically using the expressions derived in [10]; the π and σ components of the ⟨p|p⟩ overlaps must be weighted according to the appropriate proportionality factors, in accordance with the scheme devised by Zerner and coworkers. This can be done without the need to perform an SCF calculation on the dimer, thereby achieving our set goal of estimating transfer integrals for dimers whilst performing only one calculation on the monomers to obtain the orbitals φ_A^homo and φ_B^homo.
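Once the density-matrix term is dropped, the inter-molecular Fock block is schematically F_μν ≈ ½(β_μ + β_ν) S̄_μν, so the transfer integral becomes a contraction of the two monomer HOMO coefficient vectors with that block. The sketch below illustrates this contraction with placeholder coefficients, β parameters (treated here per orbital purely for convenience) and overlaps; the analytic evaluation of the weighted overlaps S̄, which is the actual content of the MOO libraries, is not reproduced.

```python
import numpy as np

def transfer_integral_moo(c_homo_A, c_homo_B, S_bar, beta_A, beta_B):
    """Contract two monomer HOMO coefficient vectors with the approximate
    inter-molecular Fock block F = 0.5 * (beta_mu + beta_nu) * S_bar.

    c_homo_A : HOMO coefficients of molecule A over A's AOs (length nA)
    c_homo_B : HOMO coefficients of molecule B over B's AOs (length nB)
    S_bar    : weighted AO overlap block between A's and B's AOs (nA x nB)
    beta_A/B : per-orbital parameters for the two molecules (lengths nA, nB)
    """
    F_AB = 0.5 * (beta_A[:, None] + beta_B[None, :]) * S_bar
    return float(c_homo_A @ F_AB @ c_homo_B)

# Placeholder data for a 4-AO / 4-AO pair of molecules.
rng = np.random.default_rng(1)
nA = nB = 4
c_A = rng.normal(size=nA); c_A /= np.linalg.norm(c_A)
c_B = rng.normal(size=nB); c_B /= np.linalg.norm(c_B)
S_bar = 0.05 * rng.normal(size=(nA, nB))                   # small inter-molecular overlaps
beta = -np.abs(rng.normal(loc=10.0, scale=2.0, size=nA))   # eV-scale placeholder parameters
print(transfer_integral_moo(c_A, c_B, S_bar, beta, beta), "eV (placeholder)")
```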
III. COMPUTATION DETAILS
For both the projective and MOO methods, some information has to be extracted from an SCF calculation: in the case of the projective method we need the monomer MOs, the dimer MOs and the dimer eigenvalues. For the MOO method, all we need are the monomer MOs for which we want to evaluate the expectation value of the Fock matrix. All information from self-consistent field calculations is extracted from g03 [11]. The matrix operations for the projective method and the analytic solution of the AO overlaps for the MOO method are all computed with in-house code. The MOO libraries are written for row 1 and 2 atoms and will soon be released under the GNU Public License. Both methods require starting geometries for each monomer; these were computed with g03 at the B3LYP/6-31g* level.
IV. RESULTS
In this section we compare the results from the projective and MOO methods. We use ethylene and pentacene as examples of conjugated molecules, and HBC as an example of a high-symmetry conjugated molecule for which we will show that it is necessary to calculate several transfer integrals to define an effective transfer integral. The geometries which we use to compare these methods are shown in Figure 1; these are: rotation around the C=C bond of one of the two molecules in an ethylene dimer, slip along the long axis of one of two pentacene molecules, and x, y, z displacement of an HBC molecule in a dimer.
The results for ethylene are shown in Figure 2; as expected from the planarity of the molecule, the transfer integral falls to zero for the perpendicular configuration. These results are in qualitative agreement with the DFT results of Valeev and co-workers [8], even though the value of the transfer integral from ZINDO is roughly half of that from DFT. Certainly the projective and MOO methods are in excellent agreement, with a discrepancy between the two methods of approximately 10%. The results for pentacene are shown in Figure 3 and, again, the two methods are consistent with each other.
Before considering the case of HBC, let us make a few comments on how to approach the problem of determining transfer integrals for molecules with symmetry-induced degeneracy of the frontier orbitals. The physical phenomenon one would expect to occur in such a situation is that, upon charging, the molecule will lose its symmetry by Jahn-Teller distortion and that charge transfer will therefore occur between non-degenerate orbitals. In order to avoid having to calculate many different transfer integrals for the different possible distortions, an approach which has been used in the literature (section IV D of [12]) is to simply take the root-mean-square value of the four possible integrals between the two degenerate orbitals, which in our case would be J_{homoA,homoB}, J_{homoA,homo-1B}, J_{homo-1A,homoB} and J_{homo-1A,homo-1B}. Let us justify this approximation by generalizing equation 3. Assume that Φ_A and Φ_B are linear combinations of two Slater determinants, each corresponding to either the HOMO or the HOMO-1 of the respective molecule being singly occupied. Label these two Slater determinants Φ_A1 and Φ_A2 for molecule A, and similarly for molecule B. The linear combination Φ_A can then be written Φ_A = cos(χ_A) Φ_A1 + sin(χ_A) Φ_A2, where χ represents the mixing angle for the two configurations. A similar equation can be written for the localized state on molecule B. When equation 2 is evaluated for Slater determinants of this form (Eq. 9), one obtains a form of equation 3 which involves the mixing angles for molecules A and B and the four expectation values of the Fock matrix. If one squares this expression to obtain the form of the observable |J|², and averages over the two mixing angles, one obtains an expression for the effective transfer integral |J_eff|² as the average of the four transfer integrals squared. In the case of z displacement, J_{homo-1A,homoB} and J_{homoA,homo-1B} are both 0 and the other two terms are the same; in this case we will plot J_{homoA,homoB} as a function of distance. This quantity would be the same as half the splitting between the top 2 MOs and the next 2 MOs of a dimer and is equivalent to J_eff (2). A plot of this transfer integral calculated using MOO and deduced from a ZINDO calculation with the projective method is shown in Figure 4. Again it can be seen that the two methods are in very close agreement, with the exception of the pair of molecules at 2.5 Å; we postulate that at this distance the assumption of a block-diagonal density matrix breaks down. The value obtained for this geometry can also be compared to some from the literature: in [13] a quantitatively similar curve is reported from the splitting of the frontier orbitals of HBC. If the dimer is displaced in the xy direction, the terms J_{homo-1A,homoB} and J_{homoA,homo-1B} are no longer 0 and J_{homoA,homoB} is no longer equal to J_{homo-1A,homo-1B}; in this case we will plot J_eff as calculated from the transfer integrals using either the projective or the MOO methods. Figure 5 shows that, also for this case, the two methods are in excellent agreement.
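The averaging over mixing angles described above reduces |J_eff|² to the mean of the four squared orbital-pair integrals, i.e. J_eff is their root mean square. A short sketch of that combination, with placeholder values, is given below.

```python
import math

def j_eff(j_hh, j_hl, j_lh, j_ll):
    """Effective transfer integral for a doubly degenerate frontier-orbital pair:
    the root-mean-square of the four orbital-pair integrals, following the
    mixing-angle average described in the text."""
    return math.sqrt((j_hh**2 + j_hl**2 + j_lh**2 + j_ll**2) / 4.0)

# Placeholder values (eV), e.g. an xy-displaced dimer where all four couplings differ.
print(j_eff(0.030, 0.012, 0.009, 0.025))
```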
V. CONCLUSIONS
We have shown how to use the spectral theorem to project the orbitals of a dimer onto the localized basis set of MOs of the constituent monomers. We have argued that this method can be used to obtain results similar to those of the fragment-orbital method and have shown that, in certain cases, these results can be compared to those obtained from the splitting of the frontier orbitals. In all these cases the MOO method yields results that are essentially the same as those of the projective method, achieving our goal of determining transfer integrals while performing only one SCF calculation. | 2018-10-26T06:15:35.229Z | 2006-10-31T00:00:00.000 | {
"year": 2006,
"sha1": "150df70c963b64e2651cf75227fc04bfca40b4ea",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0610288",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "150df70c963b64e2651cf75227fc04bfca40b4ea",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
236181392 | pes2o/s2orc | v3-fos-license | EGFL6 regulates angiogenesis and osteogenesis in distraction osteogenesis via Wnt/β-catenin signaling
Background Osteogenesis is tightly coupled with angiogenesis during bone repair and regeneration. However, the underlying mechanisms linking these processes remain largely undefined. The present study aimed to test the hypothesis that epidermal growth factor-like domain-containing protein 6 (EGFL6), an angiogenic factor, also functions in bone marrow mesenchymal stem cells (BMSCs), playing a key role in the interaction between osteogenesis and angiogenesis. Methods We evaluated how EGFL6 affects angiogenic activity of human umbilical cord vein endothelial cells (HUVECs) via proliferation, transwell migration, wound healing, and tube-formation assays. Alkaline phosphatase (ALP) and Alizarin Red S (AR-S) were used to assay the osteogenic potential of BMSCs. qRT-PCR, western blotting, and immunocytochemistry were used to evaluate angio- and osteo-specific markers and pathway-related genes and proteins. In order to determine how EGFL6 affects angiogenesis and osteogenesis in vivo, EGFL6 was injected into fracture gaps in a rat tibia distraction osteogenesis (DO) model. Radiography, histology, and histomorphometry were used to quantitatively evaluate angiogenesis and osteogenesis. Results EGFL6 stimulated both angiogenesis and osteogenic differentiation through Wnt/β-catenin signaling in vitro. Administration of EGFL6 in the rat DO model promoted CD31hiEMCNhi type H-positive capillary formation associated with enhanced bone formation. Type H vessels were the vessel subtype preferentially involved in EGFL6-stimulated DO. Conclusion EGFL6 enhanced the osteogenic differentiation potential of BMSCs and accelerated bone regeneration by stimulating angiogenesis. Thus, increasing EGFL6 secretion appeared to underpin the therapeutic benefit by promoting angiogenesis-coupled bone formation. These results imply that boosting local concentrations of EGFL6 may represent a new strategy for the treatment of compromised fracture healing and bone defect restoration. Supplementary Information The online version contains supplementary material available at 10.1186/s13287-021-02487-3.
Background
Bone repair is a highly complex process of bone formation that recruits a diversity of cells and signaling pathways to achieve fracture healing and bone remodeling [1,2]. As bone is highly vascularized connective tissue, it is not surprising that its vascular network serves both as a structural template and a key regulator of bone homeostasis [3,4]. As with other tissues, blood vessels provide bone with nutrients and remove metabolites, but they also may be involved in the molecular signaling that occurs between angiogenesis and osteogenesis [4][5][6][7][8].
Recent studies found that functional endothelium of a specific capillary subtype called type H, which expresses high levels of CD31 and endomucin (CD31 hi EMCN hi ), mediates bone homeostasis in the bone microenvironment [9][10][11]. Accordingly, uncovering the potential molecular pathways that enhance type H vessel formation and osteogenesis can shed new light on the process of bone regeneration and repair [12].
To gain insight into natural fracture healing and to better understand large congenital bone defects, researchers have studied the molecular processes underlying distraction osteogenesis (DO). The temporal and spatial bone remodeling process of DO makes it an ideal system to study the roles that angiogenesis and osteogenesis play in bone healing [8,13]. DO is an innovative technique used by orthopedic surgeons to fix bone defects, and its validity has been verified in clinical and basic research [14][15][16].
DO comprises three phases: latency, distraction, and consolidation [17]. Following osteotomy and implantation of a distraction device, in the latency phase, the bone fragments are left undisturbed for 5-7 days during which a hematoma forms and bone regeneration begins. In the distraction phase, the device is engaged to gradually and continuously distract the bone segments until the desired length is achieved. In the consolidation phase, the distraction device is left in place to stabilize the bone, while the gap between bone fragments fills in with new bone and the resulting bony callus mineralizes until a sufficient level of bone regeneration is achieved [18,19]. As successful bone regeneration largely depends on the blood supply [20,21], it is not surprising, then, that DO is a highly vascular-dependent process that involves known and unknown angiogenic factors, including VEGF-A and epidermal growth factors (EGFs) among other known factors [22].
One factor that has recently captured researchers' attention for its role in angiogenesis is epidermal growth factor-like domain-containing protein 6 (EGFL6) [23]. EGFL6 is a member of the EGF superfamily of proteins; it is also upregulated in tumorigenesis and epithelial-tomesenchymal transition [24][25][26][27]. Xu et al. found that osteoblast-like cells secrete EGFL6 in a paracrine manner, triggering EC migration and angiogenesis through activation of the ERK pathway [28]. This suggests that direct crosstalk occurs between osteogenic cells and vascular ECs in the local bone environment. As osteoblasts are derived from bone mesenchymal stem cells (BMSCs), the crucial role of BMSCs in these processes has been proposed. However, little evidence exists for the angiogenesis-related mechanism of EGFL6's action during the regulation of BMSC osteogenic differentiation.
In the present study, we tested the hypothesis that EGFL6 plays a central role in angiogenesis-associated osteogenesis. We observed that EGFL6 enhances angiogenesis through EC proliferation, migration, and vessel tube formation and that application of recombinant EGFL6 increases CD31 and EMCN-markers for type H blood vessels-expressed in human umbilical cord vein endothelial cells (HUVECs). We also provide evidence that EGFL6 could act directly in BMSC osteogenic differentiation to further support osteogenesis/angiogenesis. Finally, we show that EGFL6 functions partly via activation of the Wnt/β-catenin pathway. These results support our hypothesis that EGFL6 plays a key role in angiogenesis-associated osteogenesis during bone healing and that it could represent a new therapeutic target for facilitating bone repair and regeneration.
HUVEC cultures and functional assays
Recombinant EGFL6 protein was purchased from R&D Systems (Cat no.8638-EG-050, R&D Systems Inc., Minneapolis, MN, USA). HUVECs, a primary cell type used for in vitro studies of angiogenesis, were obtained from ScienCell Research Laboratories, Inc. (Catalog #8000; Carlsbad, CA, USA). HUVECs were cultured in endothelial cell medium (Catalog #1001; ScienCell Research Laboratories, Inc., Carlsbad, CA, USA) containing endothelial cell growth supplement (Cat #1052; ScienCell Research Laboratories, Inc., Carlsbad, CA, USA) and fetal bovine serum. HUVECs were maintained at 37°C in a humidified incubator with an atmosphere of 5% CO 2 /95% air. EC proliferation assays were performed in 96-well culture plates using a cell proliferation assay kit (Cell Counting Kit-8 ; Dojindo Molecular Technologies, Inc., Rockville, MD, USA). CCK-8 is a colorimetric assay that measures the activity of cellular dehydrogenases, which are representative of overall cellular metabolic activity [29].
HUVECs (2 × 10 3 cells/well) were seeded in medium supplemented with different concentrations of recombinant EGFL6 (0, 50, 200, 500 ng/ml). From day 0 to day 5, 10 μl of CCK8 solution were added to each well, and the samples were incubated for 2 h. Absorbance was read on a microplate reader at 450 nm, and optical density values were taken as a proxy indicator of cell proliferation.
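As a minimal illustration of how such plate-reader output can be summarized, the sketch below turns hypothetical OD450 readings into background-corrected means and day-0-normalized fold changes. The numbers, replicate layout, and blank value are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical OD450 readings (rows: days 0-5, columns: replicate wells)
# for one EGFL6 concentration; real values come from the plate reader.
od_treated = np.array([
    [0.21, 0.22, 0.20],
    [0.35, 0.33, 0.36],
    [0.55, 0.58, 0.54],
    [0.80, 0.83, 0.79],
    [1.10, 1.05, 1.12],
    [1.30, 1.28, 1.33],
])
od_blank = 0.08  # assumed medium-only background well

signal = od_treated - od_blank
mean_od = signal.mean(axis=1)
sd_od = signal.std(axis=1, ddof=1)

# Express growth relative to day 0 so curves for different
# concentrations can be compared on the same scale.
fold_change = mean_od / mean_od[0]
for day, (m, s, f) in enumerate(zip(mean_od, sd_od, fold_change)):
    print(f"day {day}: OD450 = {m:.2f} ± {s:.2f}, fold change = {f:.2f}")
```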
EC migration assays were conducted in 24-well transwell culture plates having 8-μm pore filters (model no. 3428; Corning, Tewksbury, MA, USA). Briefly, HUVECs underwent serum starvation for 2 h, and then 8 × 10 4 cells/well were seeded into the upper chamber of the transwell plate and incubated at 37°C for 24 h. Cells remaining on the surface of the upper chamber were carefully scraped away with cotton swabs. Cells that had migrated to the lower chamber surface were fixed with 4% paraformaldehyde (PFA) for 30 min, stained with 0.1% crystal violet for 25 min, and then the stain was eluted briefly with 33% acetic acid. Absorbance at 570 nm was measured using a microplate reader.
For the scratch-wound assay, HUVECs were plated in 6-well culture plates and grown to confluence in EC medium (ScienCell Research Laboratories, Inc, Carlsbad, CA, USA) for 24 h. Then, the confluent monolayer was "scratched" with the same yellow plastic pipette tip (200 microliter). The scratch produced an initial cell-free gap over which cell migration could be monitored. After the scratch was made, the cultures were washed gently with PBS to remove non-adherent cells. The HUVEC cultures were then maintained in serum-free EC medium. The rate of cell-scratch-wound closure was determined by capturing images of the entire scratch at the indicated times using a CCD camera connected to an inverted phase-contrast microscope (Nikon Instruments Inc., Melville, NY, USA). Images were acquired at × 10 magnification and analyzed with ImageJ software (National Institutes of Health, Bethesda, MD, USA) [30].
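For readers unfamiliar with how such images are typically quantified, the following sketch computes percent wound closure from the cell-free gap area at each time point. The pixel areas are hypothetical placeholders; in the study, the gap was measured from the captured images in ImageJ.

```python
# Minimal sketch of summarizing scratch-wound closure once the cell-free
# gap has been outlined (e.g., in ImageJ) at each time point.
# The pixel areas below are invented placeholders, not measured data.

wound_area_px = {   # time (h) -> cell-free gap area (pixels)
    0: 120_000,
    12: 78_000,
    24: 41_000,
}

area_t0 = wound_area_px[0]
for t, area in sorted(wound_area_px.items()):
    closure_pct = 100.0 * (area_t0 - area) / area_t0
    print(f"{t:>2} h: wound closure = {closure_pct:.1f} %")
```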
The EC tube-formation assay (ECTFA) was conducted with HUVECS (3 × 10 3 cells/well) seeded into 96-well culture plates precoated with Matrigel™ Matrix Growth Factor Reduced (Cat #356230; BD Biosciences, Franklin Lakes, NJ, USA). After incubation at 37°C for 6 h, we used the Angiogenesis Analyzer plugin for ImageJ (National Institutes of Health, Bethesda, MD, USA) [30] and ImageJ to quantify the characteristics of the pseudocapillary networks in the ECTFA [31]. We used an inverted phase-contrast microscope (Leica Microsystems GmbH, Wetzlar, Germany) to count the number of tube branches at × 4 magnification, measure the tube length (pixels), and count the numbers of capillary network meshes, nodes, and branches in five random fields per culture plate well.
BMSC cultures and osteogenic differentiation assays
BMSCs were isolated from femurs of 4-week-old female Sprague-Dawley rats as previously described [32]. BMSCs were cultured in T25 tissue culture flasks. The culture medium was Gibco® α-MEM medium (Thermo Fisher Scientific, Waltham, MA, USA) containing 10% FBS and 1% penicillin/streptomycin. The BMSCs were incubated at 37°C in a humidified incubator with a 5% CO 2 /95% air atmosphere. Cells from passage numbers 4-10 were used for all BMSC experiments.
In order to determine whether Wnt/β-catenin signaling was involved, osteogenic induction medium (OIM) was supplemented with 200 ng/ml EGFL6 in the presence or absence of 0.3 μg/ml dickkopf-related protein 1 (DKK1; PeproTech, Cranbury, NJ, USA), an antagonist of Wnt/β-catenin signaling [34]. For staining extracellular mineral deposits, cells were fixed with 4% PFA and then stained with Alizarin Red S (AR-S) for 10 min. To assay alkaline phosphatase (ALP) activity, osteoblasts were fixed with 4% PFA for 15 min and then incubated with the BCIP/NBT ALP Color Development Kit (C3206; Beyotime Biotechnology, Shanghai, China) according to the manufacturer's protocol.
Quantitative real-time PCR analysis
Total RNA was extracted using an EZ-press RNA Purification Kit (B0004D-100; EZBioscience, Roseville, MN, USA), and reverse transcription was performed with a cDNA Reverse Transcription Kit (EZBioscience, Roseville, MN, USA) according to the manufacturer's protocol. We performed quantitative analysis using SYBR Green I Master Mix (EZBioscience, Roseville, MN, USA) and a LightCycler® 480 Real-time PCR system (Roche, Basel, Switzerland). The qPCR primers provided by BioTNT (Shanghai, China) are listed in Table 1.
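The paper does not spell out the quantification formula, so the sketch below assumes the widely used 2^-ΔΔCt method with GAPDH as the internal control; the Ct values are invented and the approach is only a plausible reconstruction, not necessarily the authors' exact pipeline.

```python
import statistics

# Hypothetical Ct values (three technical replicates each). The 2^-ddCt
# relative-quantification scheme is assumed here, with GAPDH as the
# internal control; this is an illustration, not the authors' script.
ct = {
    ("control", "VEGFA"): [24.1, 24.3, 24.2],
    ("control", "GAPDH"): [17.0, 17.1, 16.9],
    ("EGFL6",   "VEGFA"): [22.6, 22.8, 22.7],
    ("EGFL6",   "GAPDH"): [17.1, 17.0, 17.2],
}

def mean_ct(group, gene):
    return statistics.mean(ct[(group, gene)])

delta_control = mean_ct("control", "VEGFA") - mean_ct("control", "GAPDH")
delta_treated = mean_ct("EGFL6", "VEGFA") - mean_ct("EGFL6", "GAPDH")
ddct = delta_treated - delta_control
fold_change = 2 ** (-ddct)
print(f"VEGF-A fold change (EGFL6 vs control): {fold_change:.2f}")
```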
Western blot analysis
Cells were diluted at a 1:4 ratio with loading buffer (5X), and then heated at 95°C for 5 min. Protein extracts were separated by 7.5%, 10%, or 15% SDS-PAGE and blotted onto PVDF membranes (Millipore, Billerica, MA, USA). Subsequently, the membranes were blocked with 6% nonfat milk for 2 h. The PVDF membranes were then incubated overnight with primary antibodies, including anti-Hif1a (see Table 2). The membranes were then washed three times in TBST buffer, and then incubated with species-appropriate HRP-conjugated secondary antibodies for 1 h at RT. The immunoreactive bands were visualized using an ECL kit (no. SQ201, EpiZyme Biotechnology Ltd., Shanghai, China) and detected using a ChemiDoc Imaging System (BioRad, Hercules, CA, USA). GAPDH was used as the protein loading control. All immunoblots presented in the figures were cropped from the originals.
Rat distraction osteogenesis model
Procedures for the animal distraction model were approved by the Animal Care and Use Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Twenty-eight male Sprague-Dawley rats were equally divided in control and EGFL6 groups. Tibia DO surgery was performed according to previously established protocols [19]. Briefly, the rat was anesthetized with 4% chloral hydrate (0.7 ml/100 g), surgical area was shaved, cleaned with 75% alcohol, and a 30-mm incision was made over the middle part of the tibia. At the midshaft of the tibia, a 5-mm wide defect was made, producing two bone segments. A monolateral external fixator (Xinzhou Company, Tianjin, China) was mounted onto the proximal and distal bone segments with four stainless steel pins. The incisions were then sutured closed layer by layer. The timeline of events is shown in Fig. 5a.
Distraction was performed in three phases: (1) a 5-day latency phase, (2) a 10-day distraction or active lengthening phase, and (3) a 4-week consolidation phase. In the latency phase, the defect in the tibia was left undisturbed in order to initiate the early stages of bone healing. In the distraction phase, the distraction gap was infused with 0.5 ml of recombinant EGFL6 protein (200 ng/ml) or an equivalent volume of sterile PBS (control) every 2 days. Tibia specimens were harvested in the second and fourth week of the consolidation phase (n = 7 per group).
Digital radiography and micro-computed tomography
Starting with the first week of the consolidation phase, animals underwent weekly X-ray imaging of the distraction gap. The rats were anesthetized with general anesthesia during the X-rays. At the end of the consolidation phase, the rats were killed with an overdose of 4% chloral hydrate, and the tibias were harvested for threedimensional (3D) reconstructions using micro-CT analysis. We used a micro-CT in-Vivo SkyScan™ (SkyScan-1176; Bruker Corporation, Billerica, MA, USA) and a voxel size of 18 μm for all three spatial dimensions. Bone volume/total volume (BV/TV) and bone mineral density (BMD) were analyzed using CTan software (v1.13.2.1, Skyscan, Bruker Corporation, Billerica, MA, USA) and CTvol software (v2.4.0, Skyscan, Bruker Corporation, Billerica, MA, USA).
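Conceptually, BV/TV is the fraction of voxels classified as bone within the region of interest and BMD is the calibrated mean density over that region. The hedged sketch below illustrates this on a synthetic volume with an assumed threshold; the actual analysis in the study was performed in CTAn/CTvol, and the numbers here are not real scan data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a calibrated micro-CT sub-volume (mg HA/cm^3);
# in the study these values come from SkyScan reconstructions analysed
# in CTAn, so this is only a conceptual illustration.
roi = rng.normal(loc=350.0, scale=250.0, size=(64, 64, 64))

bone_threshold = 500.0          # hypothetical global threshold
bone_mask = roi >= bone_threshold

bv_tv = bone_mask.sum() / bone_mask.size   # bone volume fraction
bmd_roi = roi.mean()                       # mean calibrated density over ROI

print(f"BV/TV = {100 * bv_tv:.1f} %")
print(f"BMD (ROI mean) = {bmd_roi:.0f} mg HA/cm^3")
```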
Statistical methods
All quantitative data are presented as means ± standard deviation (SD). SPSS 22.0 (IBM Corp. Released 2013. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp.) was used for statistical analyses. Two-tailed Student's t tests were used for comparisons of two groups, and one-way ANOVA followed by post hoc Dunnett's tests was used for comparisons among multiple groups.
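As an illustration of the reported statistical approach (not the authors' actual script, which used SPSS), the sketch below runs a two-tailed t test for two groups and a one-way ANOVA for several groups on made-up values; the post hoc Dunnett's comparisons mentioned in the figure legends would be an additional step, available for example in recent SciPy releases (scipy.stats.dunnett).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up BV/TV values (%) for two groups: two-tailed Student's t test.
control = rng.normal(30, 4, size=7)
egfl6 = rng.normal(36, 4, size=7)
t, p_t = stats.ttest_ind(control, egfl6)   # two-tailed by default
print(f"t test: t = {t:.2f}, p = {p_t:.3f}")

# Made-up values for more than two groups (e.g., several EGFL6
# concentrations): one-way ANOVA.
groups = [rng.normal(mu, 4, size=7) for mu in (30, 33, 36, 38)]
f, p_f = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f:.2f}, p = {p_f:.3f}")
```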
EGFL6 acts on endothelial cells to promote CD31 hi EMCN hi type H endothelium formation in vitro
Angiogenesis involves proliferation and migration of ECs and ultimately capillary tube formation and upregulation of some angiogenesis-related factors in vivo [35]. We used HUVECs to determine how EGFL6 affects HUVEC proliferation; we measured cell proliferation in vitro with the CCK-8 assay. EGFL6 promoted cell proliferation in a mostly concentration-dependent manner. After 5 days of EGFL6 treatment, 50 and 200 ng/ml EGFL6 were the most effective concentrations to promote cell proliferation (Fig. 1b). Next, we determined whether EGFL6 influences cell migration. Cell migration as assessed in the scratchwound assay (Fig. 1a, c) and transwell assay (Fig. 1d, e) indicated that the migration of EGFL6-treated HUVECs was significantly enhanced compared with that of untreated control cells. In addition, EGFL6-treated HUVECs displayed an enhanced ability to induce capillary tube formation (Fig. 1f, g), supporting the finding that EGFL6 is a proangiogenic factor [28]. We also noticed differences in the optimal EGFL6 concentrations between assays, with 200 and 500 ng/ml EGFL6 being more effective in the migration assays (scratch assay and transwell assay) and the 50 ng/ml concentration, and to a lesser extent 200 ng/ml EGFL6, being more effective for capillary tube formation.
To investigate the mechanism of EGFL6's angiogenic effects on HUVECs, we measured the expression of VEGF-A, one of the master regulators of vascular growth [1,36,37]. In HUVECs maintained in culture, RT-PCR and western blot analyses of cell lysates of EGFL6-treated cells demonstrated that VEGF-A mRNA and protein levels were both elevated by EGFL6 (Fig. 1h-j). VEGF-A expression levels increased significantly with increasing culturing time (Fig. 1i, j). In addition, RT-PCR analysis for CD31 and EMCN, two markers for type H vessels [12], revealed that CD31 and EMCN mRNA expression levels were also upregulated 24 h after treatment with EGFL6 (Fig. 1h).
Western blot analysis revealed a similar trend, with EGFL6-treated HUVECs expressing higher levels of CD31 and EMCN proteins, particularly when treated with a higher concentration of EGFL6 (200 ng/ml) (Fig. 1k, l). These results indicate that EGFL6 has specialized functional properties in promoting angiogenesis and inducing the expression of CD31 and EMCN, which characterizes type H vessels [12].
EGFL6 enhances osteogenic differentiation of BMSCs
EGFL6 has been shown to be highly expressed in osteoblastic-like cells [28,38]. This, together with our findings that EGFL6 enhances angiogenesis, prompted us to investigate whether EGFL6 could directly increase the osteogenic capacity of BMSCs. To address this possibility, we treated BMSCs maintained in culture with OIM supplemented with different concentrations of EGFL6. As measured in the CCK-8 cell proliferation assay, EGFL6 treatment (50, 200, or 500 ng/ml EGFL6 for 1-5 days) failed to affect cell proliferation compared to the untreated control (p > 0.05) (Fig. 2a). Thus, EGFL6 at the tested concentrations did not affect BMSC proliferation, and it had no significant cytotoxicity.
Next, we assessed the ability of EGFL6 to increase mineralization deposits by measuring ALP activity and calcium deposition in BMSCs by staining the cells for ALP and with AR-S. Compared with untreated control cells, EGFL6-treated BMSCs had more abundant mineralized nodules and significantly denser AR-S staining, indicating increased accumulation of calcium (Fig. 2b, c). Calcium deposition was EGFL6 concentration dependent, with more deposition at higher EGFL6 concentrations.
To further investigate how EGFL6 affects osteogenic differentiation of BMSCs, we measured the expression of osteogenic-related genes and their proteins in cultured BMSCs 5 and 10 days after EGFL6 treatment by RT-PCR and western blotting (Fig. 2d-f). After 5 days of EGFL6 treatment, the expression levels of VEGF-A and osteogenic markers (RUNX2, CXCR4, and BMP2) were all significantly increased. EGFL6 concentrations of 200 ng/ml and above had a stronger effect. This increasing trend was also observed after 10 days of EGFL6 treatment. Notably, the dose-dependent relationship was less obvious at 10 days. A possible reason is that osteogenesis progresses faster at higher concentrations of EGFL6. Immunofluorescence staining for RUNX2 confirmed this trend, as higher expression levels of RUNX2 were detected after EGFL6 treatment (Fig. 3a, b).
We also measured the expression of Wnt pathway-related markers in BMSCs following EGFL6 treatment for 5 or 10 days (Fig. 2e, f). It is well known that an increase in phosphorylated β-catenin levels means that it is being targeted for ubiquitin-associated degradation; this then inhibits the canonical Wnt pathway. Thus, we quantified active β-catenin and pGSK3β levels in EGFL6-treated BMSCs by western blotting and found higher expression after EGFL6 treatment (Fig. 2e, f). These results provide additional evidence that EGFL6 acts through the Wnt pathway in BMSCs. Collectively, these results provide supporting evidence for the hypothesis that EGFL6 stimulated osteogenic differentiation along with angiogenesis.

Fig. 1 (legend, continued): Values are relative to control values. g Phase-contrast images of HUVECs cultured with EGFL6 in the tube-formation assay. h Expression levels of Hif1a, VEGF-A, CD31, and EMCN genes in HUVECs treated with EGFL6 for 1 day, as evaluated by RT-PCR. The housekeeping gene GAPDH served as an internal control. i, j Quantitation of VEGF-A protein concentration in HUVECs treated with EGFL6 (200 ng/ml) for the indicated times. k, l Western blots of lysates from HUVECs treated with EGFL6. Blots were probed with antibodies against angiogenesis markers (Hif1a, VEGF-A, CD31, EMCN) and pathway markers (β-catenin, pβ-catenin, active β-catenin, and pGSK3β). GAPDH is the loading control. Significant differences among groups were determined by one-way ANOVA and post hoc Dunnett's test; *p < 0.05; **p < 0.01; ***p < 0.001. All immunoblots were cropped from the originals here and in subsequent figures. Experimental HUVECs were treated with the indicated EGFL6 concentrations. Control and experimental conditions for all functional assays were the same, except controls lacked EGFL6. Histogram values are based on three replicated experiments, and error bars are SD here and in all subsequent figures. Scale bars for a, e, g, 250 μm.

Fig. 2 (legend, continued): c Images of alkaline phosphatase (ALP)-stained BMSCs treated with EGFL6. Osteogenic differentiation of BMSCs was examined on day 3. Insets in b and c show low-magnification images of the entire culture well. Scale bars, 250 μm. BMSCs were treated with different concentrations of EGFL6 for 5 or 10 days. d Expression levels of angiogenesis- and osteogenesis-related markers in BMSCs following treatment with/without EGFL6 for 5 days, as evaluated by RT-PCR. The housekeeping gene GAPDH served as an internal control. e Western blots of lysates from cultured BMSCs treated with/without EGFL6 for 5 or 10 days. Blots were probed with antibodies against different markers for angiogenesis (VEGF-A), osteogenesis (BMP2, CXCR4, RUNX2), and the Wnt/β-catenin signaling pathway (β-catenin, pβ-catenin, active β-catenin, and pGSK3β). f Quantitation of expression of angiogenesis-, osteogenesis-, and pathway-related marker proteins in panel e. Significant differences among groups were determined by one-way ANOVA and post hoc Dunnett's test; *p < 0.05; **p < 0.01; ***p < 0.001.

EGFL6-mediated Wnt/β-catenin signaling may regulate angiogenesis and osteogenesis

One potential mechanism through which EGFL6 promotes angiogenesis and osteogenesis is Wnt/β-catenin signaling [39,40]. The Wnt/β-catenin pathway has been shown to control the cell fate of mesenchymal stem cells, causing them to become osteoblasts [39,40]. Also, Wnt/β-catenin signaling appears to play an important role in tumor angiogenesis [41][42][43]. 
Thus, we investigated the expression of key factors involved in several signaling pathways after EGFL6 treatment.
Western blot analysis revealed that EGFL6 treatment enhanced the expression of β-catenin and active βcatenin in cultured HUVECs and BMSCs (Figs. 1k and 2e, f). Furthermore, in vitro experiments of BMSC osteogenic differentiation showed that β-catenin and active βcatenin immunofluorescence staining was significantly enhanced 5 days after osteogenic induction with EGFL6 at concentrations greater than 200 ng/ml (Fig. 3c-f). This result suggests that the Wnt/β-catenin pathway may be involved in EGFL6-mediated angiogenesis and osteogenesis. In addition, western blot analysis suggested that EGFL6 treatment altered the expression levels of Akt, p-Akt, and P-ERK1/2 to varying degrees in HUVECs and BMSCs (Additional file 1a-c), indicating that other signaling pathways may also be involved in EGFL6-mediated angiogenesis and osteogenesis.
DKK1 partially inhibits BMSC osteogenesis activated by EGFL6
To further investigate the involvement of the Wnt/β-catenin signaling pathway in EGFL6-enhanced BMSC osteogenesis, we evaluated how inhibition of this pathway with DKK1, a Wnt signaling pathway inhibitor, affects osteogenesis in BMSCs treated with EGFL6 in OIM. ALP and AR-S staining of BMSC cultures revealed higher ALP expression and calcium deposition, respectively, in EGFL6-treated cultures than in those treated with EGFL6 and DKK1 (Fig. 4a, b). Also, DKK1 application significantly decreased β-catenin and active β-catenin expression levels in BMSCs enhanced with EGFL6, compared to BMSCs cultured without EGFL6 and DKK1 (control) and BMSCs cultured with EGFL6 alone. Blocking the Wnt/β-catenin pathway with DKK1 partially reversed the EGFL6-mediated upregulation of osteogenic proteins (CXCR4, RUNX2), as demonstrated by western blotting (Fig. 4c, d) and immunofluorescence staining (Fig. 4e-g). These results indicate that the EGFL6-enhanced differentiation of BMSCs into osteoblast-like cells is mediated by Wnt/β-catenin signaling.

Fig. 3 (legend, continued): Immunofluorescent images of EGFL6-treated BMSCs stained for the osteogenic-specific protein RUNX2 (a), and the pathway-specific proteins β-catenin (c) and active β-catenin (e). Cells were counterstained with the nuclear stain DAPI (blue) and the cytoskeleton stain phalloidin (red). Scale bars, 100 μm. b, d, f Quantitation of mean relative levels of RUNX2 (b), β-catenin (d), and active β-catenin (f) in BMSCs treated with EGFL6 (200 ng/ml). Significant differences between experimental and control groups were evaluated by Student t tests; *p < 0.05; **p < 0.01; ***p < 0.001.

Fig. 4 (legend, continued): Scale bars, 250 μm. c Western blots showing the expression of osteogenic-specific and Wnt/β-catenin signaling-related proteins in BMSCs treated with/without EGFL6 and with/without DKK1. GAPDH is the loading control. d Quantitation of osteogenic-specific and Wnt/β-catenin signaling-related proteins normalized to the control condition (NS; black-colored bars). e, f, g Immunofluorescent images of BMSCs stained for RUNX2 (green) or active β-catenin (green). BMSCs were cultured with 200 ng/ml EGFL6 to enhance BMSC osteogenesis, and then treated with/without 0.3 μg/ml DKK1, an antagonist of Wnt/β-catenin signaling. Scale bar, 100 μm. Quantitation of RUNX2 or active β-catenin immunofluorescent staining showing mean relative fluorescence of DKK1 + EGFL6 (blue-colored bars) and EGFL6 alone (pink-colored bars) conditions normalized to control fluorescence (no DKK1, no EGFL6; gray-colored bars). Significant differences were evaluated by one-way ANOVA and post hoc Dunnett's tests for all panels; *p < 0.05; **p < 0.01; ***p < 0.001.
EGFL6 accelerates osteogenesis and promotes the formation of CD31 hi EMCN hi -positive type H vessels in a rat DO model

We used a rat DO model to evaluate the effect of EGFL6 on bone regeneration and angiogenesis in vivo. The surgical procedure and treatment schedule for the rat DO model is schematically diagrammed in Fig. 5a and shown in Additional file 2.
Over the course of 4 weeks, increased callus was predominantly associated with EGFL6 infusion (200 ng/ml every 2 days) into the distraction gap during the distraction phase, as seen in radiographs of the rat tibia of control (PBS infusion) and experimental groups (Fig. 5b). The callus in the EGFL6-treated group was larger and denser than that in the PBS-treated control group. Figure 5c and d show 3D reconstructions and internal longitudinal profiles of the regenerated bone in the distracted area 2 and 4 weeks after consolidation. These morphological data were obtained through micro-CT analysis. Both BMD and BV/TV values were higher in the EGFL6-treated group than in the PBS group (Fig. 5e, f), indicating that EGFL6 enhances bone regeneration.

Fig. 5 Locally applied EGFL6 accelerates bone formation and consolidation in a rat model of tibia distraction osteogenesis (DO). a Overall schematic diagram illustrating the study design. DO was performed in three phases as indicated. Midway through the distraction phase on day 10, recombinant EGFL6 (200 ng/ml), or an equivalent volume of sterile PBS (control), was infused into the distracted area and then infused again every 2 days until the end of the distraction phase on day 15. Distraction was performed at a rate of 0.25 mm per 12 h. Asterisk (*) in a indicates that the tibia bone fragments were distracted for a total of 5 mm over a period of 10 days. b X-ray images (lateral view) of the distracted bones from representative cases after 2, 3, and 4 weeks of consolidation. Bright white angular areas in images are the densities of the metal monolateral external fixator. c, d Three-dimensional reconstructions (c) and internal longitudinal profiles (d) derived from micro-CT of distracted tibia bones from representative cases of EGFL6-treated and control rats after 2 and 4 weeks of consolidation. Light areas show the increased bone-tissue mineralization. e, f Quantitation analysis of bone-tissue mineralization showing the mean (±SD) percentage bone volume/total tissue volume (BV/TV) and mean (±SD) bone mineral density (BMD) in EGFL6-treated and control rats. Mineralization parameters were calculated from the micro-CT image data. Significant differences were evaluated by one-way ANOVA with post hoc Dunnett's tests. *p < 0.05.

Histological and immunohistological sections of the regenerated bone in distracted tibias were analyzed after 2 and 4 weeks of consolidation in EGFL6-treated and control rats. During the distraction phase, a central fibrous interzone rich in fibroblasts and chondrocyte-like cells formed as the callus was stretched. In the consolidation phase, we observed a high density of proliferating osteoblasts bridging the fibrous interzone from either side of the gap, forming along capillaries and vascular sinuses (Fig. 6). These osteoblasts underwent primary mineralization, leading to the formation of columns of bone resembling the morphology of stalagmites and stalactites, as described previously [44,45].
After 2 weeks of consolidation, HE staining revealed that enhanced bone tissue had formed along the columns and in the center of the defect in the EGFL6-treated group. At 4 weeks, the bone marrow cavity had gradually recanalized in the EGFL6 group, occurring rather faster than in the control group (Fig. 6a). Similarly, Masson's trichrome staining and Safranin O/Fast green staining showed that the interzone contained more mature trabecular bone but fewer fibrous or cartilaginous tissues, respectively, in the EGFL6-treated group compared to that in the control group (Fig. 6b, c). These findings are consistent with those from histological sections immunostained for the osteogenic marker OCN and angiogenic marker VEGF-A (Fig. 7). OCN immunostaining was denser and more widely distributed in sections from the EGFL6-treated group than in those from the control group (Fig. 7a, b). In addition, active β-catenin immunostaining was denser and more widely distributed in the injury zone after EGFL6 treatment (Fig. 7c).
Closer examination of the distraction zone during the consolidation phase revealed that new vessels formed columns alongside the newly developing bone, extending toward the distraction gap. After 2 weeks of consolidation, immunofluorescence staining for CD31 and EMCN showed the presence of CD31 hi EMCN hi -positive type H vessels throughout the distraction gap in the EGFL6treated group. These findings support the hypothesis that EGFL6 plays a role in developmental and regenerative angiogenesis of type H vessels in bone. However, after 4 weeks of consolidation, in both EGFL6-treated and control rats, the density of CD31 hi EMCN hi -positive endothelium appeared to decline as mature trabecular bone increased (Fig. 7d).
Taken together, in the rat DO model, local infusion of EGFL6 into the distraction zone promoted the formation of CD31 hi EMCN hi -positive endothelial cells over time and widely accelerated bone formation. These results also provide further evidence that angiogenesis and osteogenesis are linked during bone modeling and remodeling [12] and that EGFL6 serves a molecular link between angiogenesis and osteogenesis.
Discussion
Bone formation is functionally coupled to the vascular network during skeletal system development and postnatal bone repair [3,9,46]. Some consider ECs to be important secretory cells in the bone marrow microenvironment, whereas others consider osteoblasts and osteocytes to be crucial in the regulation of bone formation and resorption [37]. However, the cellular and molecular interactions linking angiogenesis and osteogenesis remain obscured. In the present study, we discovered that EGFL6 stimulates bone healing and drives formation of CD31 hi EMCN hi vasculature, with ECs playing a central role. EGFL6 stimulates both angiogenesis and osteogenic differentiation through Wnt/β-catenin signaling.
Osteoclast lineage cells have been demonstrated to secrete platelet-derived growth factor type BB to recruit osteoprogenitors and ECs to form CD31 hi EMCN hi -positive vessels, linking angiogenesis with osteogenesis [47]. In that study, preosteoclasts were shown to secrete an angiogenic factor that stimulated not only angiogenesis but also supported osteogenesis, functionally coupling the two processes. As osteoclast bone resorption and osteoblast bone formation function cooperatively during bone remodeling [48], we reasoned that, in the same way, other factors could interact with BMSC/osteoblast cell lines and ECs. One of these factors appears to be EGFL6, an important piece of the puzzle that connects angiogenesisosteogenesis coupling in bone regeneration.
In our study, EGFL6 stimulated both ECs and BMSCs in vitro, with EGFL6 promoting EC proliferation, migration, and increased vascularization. These results are consistent with previous findings [41,49]. RT-PCR and western blot analyses showed that EGFL6 upregulates both CD31 and EMCN expression in ECs, providing evidence for its potential role in promoting the formation of CD31 hi EMCN hi -positive endothelium. As EGFL6 is expressed in osteoblastic-like cells, not ECs [28], EGFL6 may regulate angiogenesis via a paracrine mechanism, acting between osteoblasts and ECs in the bone microenvironment [38].

Fig. 7 (legend, continued): Sections were immunostained for VEGF-A, a key angiogenesis marker, and visualized with peroxidase-DAB. Quantitation of VEGF-A-positive staining intensity in the distraction zone after consolidation for 2 weeks is summarized in the histograms on the right. c Immunofluorescent images of regenerated bone sections obtained from the distraction zone immunostained for active β-catenin (green). The sections were counterstained with DAPI (blue), which stains nuclei of all cells. d Immunofluorescent images of regenerated bone sections obtained from the distraction zone immunostained for CD31 (red) or endomucin (EMCN, green). The sections were counterstained with DAPI (blue). Note that CD31 hi EMCN hi (yellow) vessels in EGFL6-treated rats are densely stained (arrows) compared to vessels in the controls. Scale bars for a-d, 200 μm. Significant differences were evaluated by Student t tests; *p < 0.05.
In cases of bone defect repair, BMSCs could generate osteoblasts and their progenitors, which contribute to bone homeostasis and fracture healing. As BMSCs are widely used in basic research and for clinical applications [50,51], molecules that can create an optimal osteogenic microenvironment and enhance BMSC function are of great value [52]. We discovered that EGFL6 strongly induced BMSC osteogenesis in vitro in a time-and concentration-dependent manner. The osteogenic differentiation capacity of EGFL6-treated BMSCs was much greater than that of untreated BMSCs (control), as demonstrated by expression of BMP2, RUNX2, and CXCR4. BMSCs genetically engineered to overexpress BMP2 or CXCR4 increase not only bone strength but also promote bone regeneration [53,54]. These findings suggested the possibility of a positive feedback loop, wherein osteoblast-like cells secrete EGFL6 and EGFL6 promotes BMSCs differentiation. A mesenchymal stem cell (MSC) population resides in the perivascular niche of the bone marrow [55,56]. Other than their capacity to transdifferentiate or differentiate into osteoblasts, BMSCs themselves can communicate with ECs and integrate into the bone regeneration framework [57,58]. At optimal EGFL6 concentrations, VEGF-A expression in BMSCs was significantly enhanced compared to untreated BMSCs. These results indicate that via EGFL6, the interaction of BMSCs with ECs further regulated angiogenesis through the secretion of angiogenic growth factors [2].
In the tight coupling of angiogenesis and osteogenesis, accumulating evidence indicates that several signaling pathways are potentially activated, including Notch signaling, Hif1a/VEGF signaling, and TGFβ/Smad signaling [9,59,60]. In the present study, we demonstrated for the first time that EGFL6 stimulated both angiogenesis and osteogenesis through the Wnt/β-catenin pathway. This pathway is an essential signaling axis in stem cell proliferation, differentiation, and tissue homeostasis during development [61,62]. A range of Wnt ligands mediates cell-to-cell communication and adhesion, while β-catenin functions as the main downstream effector in this axis [63]. The regulation of the Wnt/β-catenin pathway is important for osteoblast differentiation and bone regeneration [39]. Once osteogenic differentiation is initiated, high β-catenin levels are needed to promote osteogenesis but prevent chondrogenesis.
In the present study, we observed higher expression of β-catenin and downstream osteogenic markers in EGFL6treated cells maintained in vitro. DKK1, an inhibitor of Wnt/β-catenin signaling, only partially blocked the EGFL6-mediated increase we observed in osteogenesis, indicating that additional pathways are likely involved in the EGFL6-mediated activity. At the same time, Wnt/β-catenin pathway signaling was also critically involved in the modulation of EC migration and vascular sprouting. Wnt/ β-catenin signaling is also fundamental to normal CNS vascularization [64], as well as vascularization in chondrogenesis [65]. Here, we observed increased expression of βcatenin in HUVECs following EGFL6 treatment in vitro, suggesting that Wnt and downstream β-catenin signaling are potential functional targets involved in angiogenesisosteogenesis coupling.
We further investigated the effects of EGFL6 in vivo in a rat tibia DO model. Following osteotomy, new bone is formed within the distraction gap, bridging the two bone segments [44,45]. Compared to other models, DO mirrors temporal and spatial bone remodeling pathology with a much greater angiogenic response, making it an attractive model for investigating the effects of EGFL6 on bone regeneration that is accompanied by improved vascularization, extensive mineralization, and eventual trabecular remodeling. Figure 8 presents our working model of EGFL6mediated signaling in bone repair. The schematic illustrates the coupling of angiogenesis and osteogenesis in a rat DO model. The direct infusion of EGFL6 into the distraction gap accelerated bone mineralization and recanalization during the consolidation phase. Micro-CT analysis indicated that newly formed bone in EGFL6treated rats was more mineralized than that in untreated control rats. Simultaneously, EGFL6 enhanced the formation of type H vessels along the primary mineralization matrix, particularly during the early part of the consolidation phase. The increase in CD31 hi EMCN hi -positive ECs indicated that EGFL6 may regulate the coupling of angiogenesis with bone formation and that EGFL6 plays a key role in trabecular bone remodeling. This finding has important implications for certain conditions like compromised fracture healing and for treatments to repair bone defects.
Our results are subject to some limitations. As we observed a highly selective distribution of CD31 hi EMCN hipositive vessels at the distal end of the arterial network in vivo in EGFL6-treated rats of the DO experiment, it is likely that these vessels influenced bone regeneration and metabolism of other cell types that we did not study. Thus, we cannot exclude the possibility that these other unidentified cell types contributed to EGFL6-associated enhancement of osteogenesis. Although we demonstrated the involvement of EGFL6 in the coupling between angiogenesis and osteogenesis, the detailed mechanisms remain to be elucidated. Also, future studies should consider using other animal models besides the tibia DO model with EGFL6 infusion, such as EGFL6-knockout mice or alternative application methods, to test the therapeutic effects of EGFL6.
Another possible issue for future study is why ECs migrated more effectively at 200 and 500 ng/ml EGFL6, whereas capillary tube formation was greatest at 50 ng/ml EGFL6, less pronounced at 200 ng/ml, and not nearly as effective at 500 ng/ml. The optimal concentration of EGFL6 may differ depending on the cell state and the microenvironment. We also observed discrepancies between mRNA expression and the corresponding protein expression; these could be accounted for by transcriptional or post-transcriptional regulation.
Conclusions
The present study suggests that EGFL6 is a key player in the tight, functional coupling of angiogenesis and osteogenesis, possibly via the Wnt/β-catenin pathway activation and stimulation of CD31 hi EMCN hi type H vessels. Boosting concentrations of EGFL6 and/or other vascular-targeted factors may be a new strategy for the treatment of compromised fracture healing and bone defect restoration. Moreover, this enhanced understanding of the role of EGFL6 angiogenesis-osteogenesis coupling in the bone microenvironment may help to develop new diagnostic biomarkers and therapies for bone pathologies like osteoporosis and osteonecrosis. Fig. 8 Working model of EGFL6-mediated signaling, illustrating the coupling of angiogenesis and osteogenesis in the rat DO model. During the consolidation phase of bone remodeling, type H vessels form alongside newly developing bone and extend toward the distraction gap. In the bone marrow microenvironment, multiple cell types secrete angiogenic factors to support type H vessel formation. Osteoblast-lineage cells and ECs secrete VEGF-A. EGFL6 secreted by osteoblasts enhances VEGF-A expression in ECs to promote cell migration, tube formation, and branching, which further stimulates the formation of type H vessels during early consolidation phase. As a key regulatory factor, EGFL6 also promotes osteogenic differentiation of BMSCs into osteoblast-lineage cells, activated by the Wnt/β-catenin signaling pathway. EGFL6 also increases expression of the osteogenic proteins RUNX2, BMP2, and OCN, leading to faster restoration of the bone defect in the DO model. Abbreviations: ECs, endothelial cells; BMSCs, bone marrow mesenchymal stem cells; EGFL6, epidermal growth factor-like domain-containing protein 6; VEGF-A, vascular endothelial growth factor; RUNX2, Runt-related transcription factor 2; BMP2, bone morphogenetic protein 2; OCN, osteocalcin | 2021-07-23T14:01:20.412Z | 2021-07-22T00:00:00.000 | {
"year": 2021,
"sha1": "c97fb3b1b5833baf21af4408c42a3682abf50538",
"oa_license": "CCBY",
"oa_url": "https://stemcellres.biomedcentral.com/track/pdf/10.1186/s13287-021-02487-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ac4974fedbe78c086f50988d7649b58ef86eaf1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267699537 | pes2o/s2orc | v3-fos-license | Publication bias in psychology: A closer look at the correlation between sample size and effect size
Previously observed negative correlations between sample size and effect size (n-ES correlation) in psychological research have been interpreted as evidence for publication bias and related undesirable biases. Here, we present two studies aimed at better understanding to what extent negative n-ES correlations reflect such biases or might be explained by unproblematic adjustments of sample size to expected effect sizes. In Study 1, we analysed n-ES correlations in 150 meta-analyses from cognitive, organizational, and social psychology and in 57 multiple replications, which are free from relevant biases. In Study 2, we used a random sample of 160 psychology papers to compare the n-ES correlation for effects that are central to these papers and effects selected at random from these papers. n-ES correlations proved inconspicuous in meta-analyses. In line with previous research, they do not suggest that publication bias and related biases have a strong impact on meta-analyses in psychology. A much higher n-ES correlation emerged for publications’ focal effects. To what extent this should be attributed to publication bias and related biases remains unclear.
Introduction
A spectre is haunting psychology-the spectre of bias.The erroneous belief that statistically non-significant findings are uninformative incentivises researchers to publish statistically significant findings [1,2].As a consequence, researchers might selectively report those analyses and outcomes that turn out statistically significant, and they might keep their statistically nonsignificant studies in the file drawer [3,4].These biases, collectively known as publication selection bias (PSB), cause a problematic inflation of favourable evidence in the published literature [5].As a consequence, treatments might be less effective than believed and PhD students and researchers might waste their time investigating imaginary effects.
PSB is a concern across many disciplines [6,7].Multiple lines of evidence indicate that psychology is affected too.Thus, effect sizes are often substantially smaller: in unpublished than in published studies [8,9]; in replications than in the original studies being replicated [10]; and in studies with some pre-registration of hypotheses, data collection methods, and analyses (all of which limit PSB) than in studies without pre-registration [11].Also, registered reports (which preclude PSB because details of data collection, reporting standards, and publication are agreed before the study commences) find evidence in favour of their central hypothesis much less frequently than conventional studies do [12].Moreover, PSB is often suggested by various techniques developed to detect its prevalence in meta-analyses [8,13].
In light of this evidence for PSB and its negative consequences in psychology, an indicator would be desirable to reflect how serious the problem is and, perhaps more importantly, whether PSB reduces over time, e.g., due to the effectiveness of proposed counter-measures such as study pre-registration [14].As we shall describe in greater detail below, the indicators that we have discussed so far (e.g., comparison of results across studies that are more or less prone to PSB) show serious limitations for these purposes.More suitable towards these ends might be the correlation between studies' sample size n and their effect size (henceforth n-ES correlation), as we argue in detail below.Here, we present two studies that aim to better understand the validity of the n-ES correlation as an indicator of PSB in psychology.
Measures that proved valuable for flagging PSB as a potential problem might be less suitable to indicate how widespread the problem is or how PSB changes (or fails to change) over time.Effect size comparisons between published and unpublished studies are hampered by the fact that the latter are difficult to obtain without bias [8].Effect size comparisons between original studies and their replications are limited by the relatively small number of replications and the lack of representativeness in the studies chosen for replication (e.g., a short online study that shows astonishing results will be more likely to be replicated than an arduous clinical trial that finds a small effect).Given the relative novelty of pre-registered studies and registered reports, analyses reliant on them have limited value for studying change over time.Finally, techniques that have been developed to uncover PSB within meta-analyses often disagree in their conclusions, suffer from low statistical power, and generally struggle in the face of effect size heterogeneity (i.e., when the true magnitude of the effect under investigation varies across studies), which is almost ubiquitous [15][16][17].
The n-ES correlation is an alternative indicator of PSB, which avoids these problems.Its logic is best illustrated when we imagine a set of studies that investigate the same effect in the same way but differ in their ns.Let us assume that all studies with p < .05get published and all studies with p � .05get rejected.Across studies, the observed effect sizes fluctuate symmetrically around the true population effect size (with this fluctuation being stronger in smaller studies than in larger studies, see funnel plot in Fig 1).Whether a study's p-value turns out low enough to result in publication hinges on two factors: the study's n (ceteris paribus, larger ns result in smaller p-values) and the study's observed effect size (ceteris paribus, larger observed effect sizes result in smaller p-values).Consequently, the threshold for the smallest observed effect size that satisfies p < .05decreases as n increases; in the subgroup of published studies, a negative n-ES correlation therefore emerges, which is absent in the complete set of studies (see Fig 1).Consequently, a negative n-ES correlation might indicate PSB.(Our example only considered publication being contingent on statistical significance, whereas PSB encompasses additional biases such as selective reporting of outcomes and analyses [4].These additional biases, however, contribute to a negative n-ES correlation for similar reasons.) The n-ES correlation avoids the problems discussed for other PSB indicators above.This can be illustrated by an influential survey that looked at a random sample of almost 400 psychology papers from 2007 and found a strong negative n-ES correlation [18].Hereafter, we refer to samples of this type that compile data across a wide range of topics as cross-topics samples.For cross-topics samples, statistical power is not a concern because researchers can compile as large a sample of studies as is required.Also, random sampling of studies, which guarantees the representativeness of the sample for the target population, is easy to achieve.Effect size heterogeneity, however, remains a potential problem in cross-topics samples because innocuous factors other than PSB can lead to a negative n-ES correlation.A particular concern is that researchers have some understanding of the magnitude of the effect they are studying and adjust their sample size accordingly, i.e., use larger samples to study small effects and smaller samples to study large effects.This will lead to a negative n-ES correlation even in the absence of PSB, and we shall refer to this as sample-size adjustment.Although, sample size calculations are infrequent in psychology [18], this does not invalidate concerns over samplesize adjustment because researchers might have tacit knowledge about which n suffices in their field.Consequently, it is unclear to what extent the strong negative n-ES correlation found by [18] represents PSB or sample-size adjustment.In Study 1, we sought to explore this issue.We did so by analysing the n-ES correlation under a range of circumstances that differ in key aspects.Given the complexity of the issue, some readers might find Table 2 a helpful companion to the detailed account that follows below.
Aims
The first aim of Study 1 was to investigate the n-ES correlation under circumstances that make sample-size adjustment unlikely. We did this by computing the n-ES correlation for the studies combined within the same meta-analysis (henceforth, within meta-analyses). E.g., this would be r = -.59 for a fictitious meta-analysis of the white studies in Fig 1. What drives the difference in effect sizes among studies that are combined in a meta-analysis typically remains unclear [15,19]. This suggests that researchers are unable to predict if their effect will turn out small, average, or large compared to other studies of the same topic. Consequently, diverse investigations of the same topic should be based on the same expectation of effect size, and this should largely eliminate sample-size adjustment within meta-analyses. (There are further reasons why a negative n-ES correlation might arise in the absence of PSB [20]. However, these reasons concern specific characteristics of medical trials that rarely apply to psychological research, which is why we do not pursue this point further.) In order to judge if the observed (average) n-ES correlation within meta-analyses is indicative of PSB, it is important to know what to expect in the absence of PSB. Intuitively, r = .00 appears correct. However, even without PSB, negative n-ES correlations might arise (see Fig 2).
Consequently, it would be helpful to compare the n-ES correlation within meta-analyses (where PSB might be an issue) against data that are free from PSB. Many-Labs replications and Registered Replication Reports (hereafter multiple replications) present such an opportunity. Multiple replications use standardised procedures to replicate original studies across multiple sites (e.g., [21,22]). Because any set of replications addresses the same original study, sample-size adjustment cannot be an issue. Additionally, because multiple replications are pre-registered, PSB can be expected to be absent, too. We therefore determined the n-ES correlation within each set of multiple replications to obtain a PSB-free comparison standard for the n-ES correlation observed within meta-analyses. If we were to find a stronger n-ES correlation within meta-analyses than within multiple replications, this would suggest PSB within meta-analyses. Conversely, if the (average) n-ES correlation turned out to be similar within meta-analyses and within multiple replications, this would suggest the absence of PSB in meta-analyses.
The second aim of Study 1 was to explore evidence for sample-size adjustment across topics and its impact on the n-ES correlation in cross-topics samples. Plausibly, researchers use relatively small samples to investigate topics that typically produce strong effects and relatively large samples for topics that typically produce weak effects. As we discussed earlier, this could explain the negative n-ES correlation in cross-topics samples [18], even in the absence of PSB. The average effect size in a meta-analysis reflects the typical strength of effect sizes for the topic under investigation. We therefore correlated meta-analyses' average effect size with their average sample size. If this n-ES correlation between meta-analyses were negative, this would indicate sample-size adjustment across topics. In this case, the n-ES correlation can be expected to be stronger in cross-topics samples than within meta-analyses, because sample-size adjustment is implausible in the latter (see also Table 2). To explore this idea was the third aim of Study 1. Finally, this study provided an opportunity to further examine the distribution of empirical effect sizes. A previous survey [23] evaluated 12,170 correlation coefficients and 6,447 Cohen's d statistics extracted from studies included in 134 published meta-analyses. In its terminology, the 25th, 50th and 75th percentiles are labelled small, medium, and large, and these were contrasted with Cohen's guidelines [24,25]. This survey [23] found that the empirical values were considerably lower than Cohen's guidelines (d = 0.15/0.36/0.65 instead of 0.20/0.50/0.80 for small, medium, and large effect sizes, respectively). In a sample of 150 meta-analyses, we compare our empirical estimates for small, medium, and large effect sizes to these previous findings [23].
Methods
Samples. Our analyses require meta-analyses that report sample sizes and effect sizes for their primary studies. We used a compilation of such meta-analyses, 50 each for cognitive psychology, organizational psychology, and social psychology [15]. From the same source, we took 57 multiple replications as a comparison standard [21,22,26-30]. Following [18], the signed effect sizes were recoded as unsigned Cohen's d. The datasets are described in full in [15].
Data analysis
To obtain the within-meta-analyses and within-multiple-replications results, we computed the n-ES correlation as Pearson's r for each of the meta-analyses and for each set of multiple replications. To facilitate comparisons with [18], we also calculated Spearman's ρ (r S ). Where relevant, we used bootstrapping to compare groups (10,000 bootstrap samples) [31]. All analyses were conducted in R 4.2.1 [32]. The data and analysis document (including additional analyses and robustness checks) can be found at https://osf.io/ce6v3/?view_only=86b6b997ca52430898a6a2bdb38cf9bb.
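As a concrete illustration of this step, the sketch below shows one way the within-meta-analysis (or within-replication-set) correlations could be computed in R. The data frame `primary` and its columns `meta_id`, `n`, and `d` are hypothetical placeholders for the compiled study-level data, and the bootstrap of the median correlation follows the general approach described above rather than the authors' exact OSF script.

```r
# Minimal sketch (not the authors' OSF script): n-ES correlations computed
# separately within each meta-analysis (or set of multiple replications).
# Assumed input: data frame `primary` with columns meta_id, n, and d (unsigned).
library(boot)

nes_by_ma <- do.call(rbind, lapply(split(primary, primary$meta_id), function(ma) {
  data.frame(meta_id = ma$meta_id[1],
             r  = cor(ma$n, ma$d, method = "pearson"),
             rs = cor(ma$n, ma$d, method = "spearman"))
}))

# Percentile-bootstrap 95% CI for the median within-meta-analysis Spearman rho
med_rs   <- function(dat, idx) median(dat$rs[idx], na.rm = TRUE)
boot_out <- boot(nes_by_ma, med_rs, R = 10000)
boot.ci(boot_out, type = "perc")
```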
Results
Small, medium, and large effect sizes based on replication studies and meta-analyses. Following [23], we examined the 25th, 50th and 75th percentiles and labelled these small, medium, and large effect sizes (Cohen's d). For multiple replications, the values corresponding to small, medium, and large effect sizes were 0.12, 0.33 and 0.86, respectively.
For all meta-analyses, the values for small, medium, and large effect sizes were 0.18, 0.42 and 0.77. Dividing these by discipline showed some minor variations. For cognitive psychology, the corresponding values were 0.24, 0.50 and 0.90. For organizational psychology, the corresponding values were comparable to those of cognitive psychology: 0.22, 0.45 and 0.80. For social psychology, the corresponding values were notably smaller than in the two other disciplines: 0.13, 0.33 and 0.64.
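The empirical benchmarks above are simply percentiles of the distribution of unsigned effect sizes; a minimal sketch of this calculation is shown below, where `d_values` (unsigned d) and `discipline` are hypothetical vectors standing in for the actual data.

```r
# Sketch: empirical "small / medium / large" benchmarks as the 25th, 50th, and
# 75th percentiles of unsigned effect sizes. `d_values` and `discipline` are
# hypothetical placeholder vectors.
quantile(d_values, probs = c(.25, .50, .75), na.rm = TRUE)

# The same benchmarks computed separately per discipline
tapply(d_values, discipline, quantile, probs = c(.25, .50, .75), na.rm = TRUE)
```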
The n-ES correlation in the absence of sample-size adjustment: Within meta-analyses and multiple replications. Addressing our first aim, we first focus on the n-ES correlation within meta-analyses and multiple replications. As discussed earlier, both should be unaffected by sample-size adjustment. Additionally, multiple replications are also unaffected by PSB (see also Table 2). Descriptive statistics are presented in Table 1.
As can be seen, r and r S produced very similar results for the n-ES correlation within meta-analyses (Table 1). Likewise, whether the average correlation was expressed as mean or median hardly affected results. In the remainder, we follow [18] and focus on r S ; for consistency with subsequent analyses, we describe the average n-ES correlation via the median. Consistently across domains, negative n-ES correlations emerged, with averages ranging from small to small-to-medium in strength. All median n-ES correlations differed statistically significantly from zero (because the confidence intervals excluded zero, see Table 1). Interestingly, we found the same n-ES correlation within meta-analyses and multiple replications, median r S = -.16 (see also Table 2, which summarises key results). As discussed earlier, this similarity would be expected if meta-analyses are unaffected by PSB.
As discussed earlier, the negative n-ES correlation likely arises from our reliance on unsigned effect sizes (see Fig 2). In order to test this explanation, we re-ran the n-ES correlations within multiple replications; this time, however, we used signed effect sizes within each set of multiple replications (akin to the left panel in Fig 2). In line with our explanation, the median n-ES correlation fell to r S = -.01 [-.07, .08]. The median for the signed n-ES correlation within meta-analyses was r S = -.04 [-.10, -.0003]. Albeit statistically significant (the 95% CI excludes zero), this correlation is very small.
Exploring sample-size adjustment: The n-ES correlation between meta-analyses
Addressing our second aim, we investigated sample-size adjustment and checked if studies on topics that tend to produce relatively large effect sizes tend to have relatively small n. We therefore examined the correlation between meta-analyses' average n and average effect size. Within meta-analyses, average n tended to be strongly right-skewed (Mdn skewness = 2.38); average effect size also tended to be right-skewed, but to a lesser extent (Mdn skewness = 1.05). For each meta-analysis, we therefore expressed its average effect size via the mean and its average n via both its mean and its median. We then ran two sets of analyses, one based on mean n and one based on median n. Both led to very similar results and identical conclusions. Here, we report the analyses based on median n.
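The between-meta-analyses analysis can be sketched as follows; again, `primary` with columns `meta_id`, `n`, and `d` is an assumed layout rather than the authors' actual object names.

```r
# Sketch: sample-size adjustment across topics, operationalised as the
# correlation between meta-analyses' average effect size (mean d) and average
# sample size (median n). `primary` is the hypothetical study-level data frame.
mean_d   <- tapply(primary$d, primary$meta_id, mean)
median_n <- tapply(primary$n, primary$meta_id, median)
ma_summary <- data.frame(meta_id  = names(mean_d),
                         mean_d   = as.numeric(mean_d),
                         median_n = as.numeric(median_n))

# Spearman correlation between meta-analyses (robust to the strong right-skew)
cor.test(ma_summary$median_n, ma_summary$mean_d, method = "spearman")
```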
The scatterplot for the relationship between meta-analyses' average n and average effect size showed strong outliers (see Fig 3). Consequently, we focussed on r S , which resulted in a small-to-medium negative correlation (r S = -.24, p = .003). (Note that the same correlation was statistically nonsignificant when expressed as r; r = -.14, p = .082.) As discussed earlier, this pattern is indicative of sample-size adjustment across topics.
Comparing the n-ES correlation within meta-analyses and cross-topics. Irrespective of PSB, sample-size adjustment (which is plausible across topics but not within meta-analyses) should lead to a stronger n-ES correlation in cross-topics samples than within meta-analyses. Our previous analysis investigated the n-ES correlation across topics, but at a high level of aggregation (meta-analyses' average effect size and average n). This precludes a sensible comparison with our earlier results regarding the n-ES correlation within meta-analyses, which was investigated at the more granular study level. To enable such a comparison, we pooled all primary studies across our 150 meta-analyses, treated them as a single cross-topics sample, and computed a single n-ES correlation, as [18] did.
The 150 meta-analyses comprised altogether 7,227 primary effect sizes and sample sizes. Right-skew was observed for d (4.3) and particularly for n (78.2). Medians (M, SD) were 0.42 (0.57, 0.62) and 100 (438, 8131), respectively. The n-ES correlation across topics was r S = -.23, 95% CI [-.25, -.21], only slightly stronger than our average n-ES correlation within meta-analyses (median r S = -.16). This suggests that the effect of sample-size adjustment on the n-ES correlation in cross-topics samples is modest.
A previous study [18] found a much stronger n-ES correlation in their cross-topics sample (r S = -.45). To facilitate comparisons, we computed an estimated 95% CI [34], which was [-.53, -.36]. This differs markedly from the CI for our n-ES correlation across topics, [-.25, -.21]. Consequently, sampling error cannot easily account for the stark difference between the n-ES correlation in our cross-topics sample and that in [18].
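One common way to approximate such a confidence interval is the Fisher z transformation; the sketch below illustrates the calculation for a correlation of -.45. The number of effect sizes, k = 200, is purely hypothetical (the actual k of the cited sample is not reproduced here), and the Fisher interval is strictly derived for Pearson's r, so for a Spearman coefficient it is only an approximation.

```r
# Sketch: approximate 95% CI for a reported correlation via Fisher's z.
# k = 200 is a hypothetical number of effect sizes, used only for illustration.
fisher_ci <- function(r, k, conf = 0.95) {
  z    <- atanh(r)                      # Fisher z transform of the correlation
  se   <- 1 / sqrt(k - 3)               # approximate standard error of z
  crit <- qnorm(1 - (1 - conf) / 2)
  tanh(c(lower = z - crit * se, upper = z + crit * se))
}

fisher_ci(-0.45, k = 200)
```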
Discussion
In a sample of 150 meta-analyses, we found, on average, a fairly small negative n-ES correlation (mean r = -.13). This is virtually the same as the mean n-ES correlation of r = -.16 previously observed, with the same methods, in another sample of 75 psychology meta-analyses [33]. The authors interpreted their result as evidence that meta-analyses are frequently affected by publication bias. Our results offer a different perspective. We found the same negative n-ES correlation in multiple replications, which are free from publication bias (and PSB, more generally), and we showed that the negative n-ES correlation is mostly a statistical artifact that arises from using unsigned effect sizes. Therefore, findings regarding n-ES correlations within meta-analyses offer, in our view, little evidence for PSB in psychology meta-analyses.
Previously, a much stronger n-ES correlation (r S = -.45) was observed in a cross-topics sample [18]. These authors, too, interpreted their finding as evidence for pervasive publication bias. As we argued in the introduction, their n-ES correlation might reflect (problematic) PSB, (innocuous) sample-size adjustment, or both. Our analyses found evidence for sample-size adjustment; studies that investigated stronger effects (as indicated by the overall meta-analytic effect size) tended to rely on smaller sample sizes than studies that investigated weaker effects. However, sample-size adjustment cannot fully explain the gap between the n-ES correlation within meta-analyses and across topics: when we combined all meta-analyses into one large cross-topics sample, our n-ES correlation (r S = -.23) remained much smaller than reported previously (see also Table 2).
Why might this be the case? The previous cross-topics sample took effect sizes from findings that directly addressed the main research question of the respective publication [18]. In contrast, meta-analyses include any pertinent result, regardless of whether it was focal or peripheral to the study it emerges from. Plausibly, PSB might be stronger for results that are focal to a study and weaker or absent for results that are peripheral. This could explain the difference between the small n-ES correlation in our cross-topics analysis of meta-analyses and the previous cross-topics sample of focal findings.
Study 2
The aim of Study 2 was therefore to compare the n-ES correlation between focal effect sizes (i.e., those that address the study's central hypothesis or aim) and random effect sizes in a cross-topics sample.
As we explain in this section, such a comparison should take the design of the study (between- versus within-subjects) into account. In a within-subjects design, there are two ways to translate the difference between two means into a standardised effect size (e.g., [35]). The difference can be standardised with the pooled standard deviation across the two conditions; this is the same type of effect size that arises from between-subjects designs (henceforth, ES between ). Alternatively, the difference between means can be standardised with the standard deviation of participants' change scores; this approach typically results in a larger effect size (henceforth, ES within ). Especially when participants' scores correlate strongly across conditions (e.g., because the treatment effect is very homogeneous across participants), ES within can be much larger than ES between .
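A small simulation makes the distinction concrete. The sketch below generates correlated scores for a hypothetical within-subjects study (true mean difference of 0.5 SD, scores correlating .8 across conditions) and computes both standardisations; all parameter values are illustrative only.

```r
# Sketch: ES_between vs ES_within for one simulated within-subjects study.
set.seed(1)
n  <- 30
x1 <- rnorm(n)                                            # condition 1
x2 <- 0.8 * x1 + rnorm(n, sd = sqrt(1 - 0.8^2)) + 0.5     # condition 2, shifted

sd_pooled  <- sqrt((var(x1) + var(x2)) / 2)
es_between <- (mean(x2) - mean(x1)) / sd_pooled  # standardised by pooled SD
es_within  <- mean(x2 - x1) / sd(x2 - x1)        # standardised by SD of change scores

c(es_between = es_between, es_within = es_within)  # es_within is clearly larger
```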
In surveys that investigate the n-ES correlation, effect sizes from within-subjects designs will often be of the ES within type, because the information needed to compute ES between is lacking from the primary study. (If effect sizes are taken from a meta-analysis, its authors might have chosen to compute ES within .) At the same time, within-subjects designs have greater statistical power than between-subjects designs, leading researchers to choose a relatively small n. Consequently, it can be expected that ES within , compared against ES between , tends to be both large and associated with small n. This would, similar to Simpson's paradox [36], negatively bias the n-ES correlation without being indicative of PSB (see Fig 4). For this reason, it is worthwhile to take the design of the study into account.
Method
Power analysis. We sought 90% power to identify a difference between two dependent correlations, r = -.45 and r = -.16, via a two-tailed test with α = .05. Our power analysis in G*Power [37] suggested a minimum sample size of n = 157. We decided to use a sample of 160 papers. (The Open Science Framework page for this paper contains alternative power analyses with varying assumptions. In all cases, n ≈ 150 appeared sensible for correlations with dependency; https://osf.io/ce6v3/?view_only=86b6b997ca52430898a6a2bdb38cf9bb.)

Eligibility criteria and sampling of papers. To be suitable for our study, psychology papers needed to fulfil the following eligibility criteria: present original data; use inferential statistics to address the main research question; provide sufficient information to calculate relevant effect sizes; and present n. We excluded papers that focused on inferential analyses for which there is no straightforward unitary effect size (multilevel models, structural equation models, time series models, cluster analysis, social network analysis, multidimensional scaling, statistical simulation models, machine learning models, exploratory factor analysis, and principal component analysis).
To sample 160 papers, we (somewhat arbitrarily) decided to draw 16 papers for each year from 2012-2021. In particular, we searched for "the" in All Fields in Web of Science, restricted by target year. To focus on psychology papers, we used the Web of Science Category and retained only those categories that start with "psychology". From the resulting list of hits, we selected a paper with the help of a random number generator. If the paper fulfilled our eligibility criteria, it was retained; otherwise, we moved down the list until a suitable paper was found. This process was repeated with new random numbers until all 16 papers for that year were retained. The same process was then repeated for all years.
Selection and coding of focal and random effect sizes. For each paper, we extracted two effect sizes, one focal and one random, as well as the sample size associated with each. The focal effect size directly addressed what the paper presented as its main hypothesis or aim. If the paper presented multiple hypotheses/aims as equally important, we used the one mentioned first in its hypotheses/aims section. One author (JH) identified the focal aim/hypothesis for all papers without knowledge of their analyses and results. In cases in which the paper later proved to have no effect size information for this aim/hypothesis, we moved to its next aim/hypothesis. Where multiple outcome variables, samples, or analyses were relevant for the focal effect size, we used whichever occurred first (either in a table or in text) in the results section.
In each paper, we chose a second effect size at random. (By chance, this sometimes happened to be the same as the focal effect size.) We selected a page via a random number generator. We coded the first effect size information on that page that originated from the paper's study. For this purpose, we read any tables line-by-line, not column-by-column. If the page did not contain relevant effect size information, we repeated the process as required.
All effect sizes were coded as unsigned d. We used various online calculators to convert descriptive statistics, effect sizes (e.g., η², R², r, r S , odds ratios), and various test statistics (e.g., F-value, t-value, χ²) into d. Details on extraction and conversion are provided in our pre-registration document on the OSF.
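For orientation, the snippet below lists a few of the textbook conversion formulas that such calculators typically implement for between-subjects designs with roughly equal group sizes; it is an illustrative summary, not the exact conversion protocol used for coding.

```r
# Sketch: common conversions to (unsigned) Cohen's d, assuming between-subjects
# designs with approximately equal group sizes.
d_from_t  <- function(t, df)  abs(2 * t / sqrt(df))         # independent-samples t
d_from_F  <- function(F, df2) abs(2 * sqrt(F) / sqrt(df2))  # F with 1 numerator df
d_from_r  <- function(r)      abs(2 * r / sqrt(1 - r^2))    # (point-biserial) r to d
d_from_or <- function(or)     abs(log(or) * sqrt(3) / pi)   # odds ratio to d

d_from_t(2.5, df = 98)   # e.g., t(98) = 2.5 corresponds to d of about 0.51
d_from_r(0.30)           # r = .30 corresponds to d of about 0.63
```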
All coding of effect sizes and sample sizes was done by one author (JH). To check reliability, we selected 40 papers at random. Based on the identified focal aim/hypothesis, a second author (AL or TP) independently coded the focal effect size and its associated sample size. Again, both d and n proved strongly right-skewed (see Table 3), which is why we computed r S . Correlations between the first and second coder proved satisfactory, with r S = .74 for d and r S = .97 for n.
Analytical strategy
The design of the study was pre-registered, and the analyses were conducted in R 4.2.1 [38]. We preregistered the comparison of the n-ES correlation between focal and randomly selected effects based on Pearson's r. For this, we used Zou's method [39], which is based on the (non-)overlap of confidence intervals and allows for dependency between correlations. Analyses were performed with cocor [40]. We used 89% confidence intervals here [41]. As visual checks showed that relevant distributions were distinctly non-normal, we deviated from the preregistration and relied on r S rather than Pearson's r. (The OSF contains additional analyses, including those with 95% CIs and those based on the Percentage Bend correlation, which lead to the same conclusions: https://osf.io/ce6v3/?view_only=86b6b997ca52430898a6a2bdb38cf9bb.)
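For the Spearman correlations, one straightforward way to obtain the 89% intervals is a percentile bootstrap; the sketch below shows this approach under assumed column names (`n_focal`, `d_focal`) and is not necessarily the exact procedure used in the reported analyses.

```r
# Sketch: 89% percentile-bootstrap CI for a Spearman n-ES correlation.
# `papers` is a hypothetical data frame with one row per paper and columns
# n_focal (sample size) and d_focal (unsigned focal effect size).
library(boot)

rs_stat <- function(dat, idx) {
  cor(dat$n_focal[idx], dat$d_focal[idx], method = "spearman")
}
boot_rs <- boot(papers, rs_stat, R = 10000)
boot.ci(boot_rs, conf = 0.89, type = "perc")
```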
Results and discussion
Descriptive statistics are shown in Table 3. For focal findings, we found a very strong negative n-ES correlation, r S = -.55, 89% CI [-.64, -.45]. In line with our reasoning, this correlation turned out to be smaller for randomly selected effect sizes, r S = -.37, 89% CI [-.48, -.22]. However, the 89% confidence intervals overlapped, and we therefore conclude that these correlations do not offer convincing support for our hypothesis that the n-ES correlation is stronger for focal effects than for effects chosen at random. This conclusion was not altered when we performed the analyses by type of design (between- vs. within-subjects). For between-subjects designs (n = 135), we found a strong negative n-ES correlation for focal effect sizes, r S = -.40, 89% CI [-.52, -.27], and a smaller one for randomly selected effect sizes, r S = -.29, 89% CI [-.42, -.16]. For within-subjects designs (n = 25), we found a very strong negative n-ES correlation for focal effect sizes, r S = -.67, 89% CI [-.83, -.41], and a much smaller one for randomly selected effect sizes, r S = -.17, 89% CI [-.48, .17].
Our analysis of focal findings largely followed the methods in [18]. We note that, in contrast to Study 1, our result (r S = -.55) was now quite similar to theirs (r S = -.45).
Further, comparisons of results across Studies 1 and 2 are instructive. The n-ES correlations in Study 2's focal effects [-.64, -.45] and in Study 1's cross-topics analysis across meta-analyses [-.25, -.22] differed reliably. This confirms our conclusion from Study 1 that the n-ES correlation is much stronger for effects sampled from the effects publications focus on than for effects sampled from meta-analyses. Further, the n-ES correlations in Study 2's randomly selected effects [-.48, -.22] and in Study 1's cross-topics analysis across meta-analyses [-.25, -.22] failed to differ reliably. The n-ES correlation for randomly selected effects is therefore not clearly more worrying than for focal effects, nor clearly less worrying than for effects in meta-analyses. In light of these inconclusive results, we struggle to understand why the n-ES correlation differs so dramatically between effects in meta-analyses and publications' focal effects. We note that the meta-analyses in Study 1 stemmed from only three sub-disciplines whereas the sample of focal effects stemmed from all of psychology, but it remains currently unclear whether this can explain the observed differences.
General discussion
The n-ES correlation holds promise to indicate how widespread a problem PSB is and, following the introduction of countermeasures, how this might change over time [14,18,33]. However, the n-ES correlation is also affected by researchers' (unwitting or deliberate) adjustment of their sample size to the expected effect size, a perfectly reasonable behavior. Using data from psychology, we therefore investigated in greater detail to what extent the n-ES correlation suggests the presence of PSB in psychological research.
In Study 1, we found a small negative n-ES correlation within meta-analyses, which is consistent with previous results [33]. This proved to be virtually identical to the negative n-ES correlation that we observed in multiple replications, which are free from PSB. We also showed that small negative n-ES correlations like these are plausible in the absence of PSB. Overall, we would therefore argue that the small negative n-ES correlation within psychological meta-analyses consistently observed by us and by [33] suggests the absence of noteworthy PSB (at least in the three scrutinized sub-disciplines: cognitive, organizational, and social psychology). (Similarly, our results suggest that an n-ES correlation around r S = -.23 is no reason for concern.) This is in line with previous research which suggests that evidence for PSB in psychological meta-analyses is weak, and that if PSB is present it is likely to be mild [17]. Similarly, previous research indicates that applying adjustments for PSB to psychological meta-analyses results in minimal changes to effect size estimates [42]. Obviously, that does not mean that PSB is never a problem in meta-analyses in psychology, and research into how best to uncover it remains important (e.g., [5,16,43]).

The inconspicuous n-ES correlation for effects sampled from meta-analyses contrasts sharply with the one in cross-topics samples of focal effects (i.e., effects that take a central role in the papers they are published in): for cross-topics samples of focal effects, [18] and our Study 2 consistently found strong negative n-ES correlations. At a theoretical level, such a difference might be expected. First, across different topics researchers might (unwittingly or deliberately) adjust their sample size to the expected effect size, which induces a negative n-ES correlation. Such sample-size adjustment is less plausible within meta-analyses: here, researchers investigate the same topic and therefore would rarely have reasons to hold different expectations about the magnitude of the expected effect [15]. Second, it is plausible that PSB should affect focal effects in particular. For example, researchers who fail to find an expected effect but find an unexpected one instead might shift the focus of their publication to the latter [44]. By definition, cross-topics samples of focal effects consist of focal effects only, which is not true for meta-analyses. Thus, a smaller proportion of effects in meta-analyses should be affected by PSB, reducing the n-ES correlations within meta-analyses.
Our empirical evidence did not suggest that sample-size adjustment and stronger PSB in focal effects sufficiently account for the large difference in the n-ES correlation within meta-analyses versus cross-topics samples of focal effects. Although we found evidence for sample-size adjustment in Study 1, this was too weak to explain the difference in the n-ES correlation within meta-analyses and across topics. Moreover, we failed to find clear evidence in Study 2 that the n-ES correlation is less pronounced for effects selected at random than for focal effects. In sum, it remains currently unclear why much stronger n-ES correlations are found in samples of focal effects than in samples of effects in meta-analyses, and to what extent this reflects benign or problematic reasons. Although our research suggests that some negative n-ES correlations might be seen as unproblematic, it currently remains unclear how strong n-ES correlations need to be to indicate nontrivial PSB effects. More research on these topics is needed.
The n-ES correlation is one among numerous indicators developed to detect the presence of PSB (e.g., [16]). Here, we focussed on the n-ES correlation because previous surveys based on this method have fuelled concerns about widespread PSB in psychological research [18,33]. Various methods have been compared regarding their ability to uncover PSB in single meta-analyses (e.g., [16,17]). Whether the n-ES correlation and other indicators differ in their suitability to describe PSB in the kind of larger surveys that we presented here is currently unclear.
Conclusion
Negative n-ES correlations have previously been described as evidence for PSB in psychological research [18,33]. We demonstrated here that the negative n-ES correlations in meta-analyses from three psychological sub-disciplines were inconspicuous and do not point to worrying levels of PSB. However, alternative sampling strategies lead to much stronger n-ES correlations that are only partly explained by benign factors. The extent to which these heightened n-ES correlations also reflect the effects of PSB is currently unclear.
Fig 1 .
Fig 1. Observed effect sizes for 100 simulated studies scatter symmetrically around the true population effect size d = 0.2. Studies with larger N have smaller standard errors and are therefore located towards the top. Only study results within the grey area are statistically significant (p < .05). If only these studies (white) are published, a correlation between sample size and effect size emerges (here, r = -.59), which is absent for the complete set of studies. https://doi.org/10.1371/journal.pone.0297075.g001
Fig 2 .
Fig 2. Fictitious results for a set of 100 multiple replications of the same study. Reflecting the absence of PSB, the n-ES correlation is r = .00 in the left panel (whereby squares/circles depict negative/positive effect sizes). However, the n-ES correlation is typically based on unsigned effect sizes [18]. Once these are used (filled elements in the right panel, whereby squares indicate results with changed sign), the n-ES correlation changes to r = -.07. https://doi.org/10.1371/journal.pone.0297075.g002
Fig 4 .
Fig 4. Funnel plot for hypothetical results from four within-subjects studies (ws) and four between-subjects studies (bs). Overall, a strong n-ES correlation emerges, although this correlation is zero for both types of study design. https://doi.org/10.1371/journal.pone.0297075.g004
Table 1 . Descriptive statistics for the correlation between sample size and effect size within meta-analyses (MAs) and multiple replications
. 95% CI based on bootstrap. | 2024-02-17T06:17:02.048Z | 2024-02-15T00:00:00.000 | {
"year": 2024,
"sha1": "61ffee2f9d72a05dee4afc3673473fc1b9f7e662",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "43f0a321711bbc66b3cb9a323b98d5dadc996ba3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207935004 | pes2o/s2orc | v3-fos-license | Educational inequalities in health after work exit: the role of work characteristics
Background Educational inequalities in health have been widely reported. A low educational level is associated with more adverse working conditions. Working conditions, in turn, are associated with health and there is evidence that this association remains after work exit. Because many countries are raising the statutory retirement age, lower educated workers have to spend more years working under adverse conditions. Therefore, educational health inequalities may increase in the future. This study examined (1) whether there were educational differences over time in health after work exit and (2) whether work characteristics mediate these educational inequalities in health. Methods Data from five prospective cohort studies were used: The Netherlands (Longitudinal Aging Study Amsterdam), Denmark (Danish Longitudinal Study of Aging), England (English Longitudinal Study of Ageing), Germany (German Aging Study), and Finland (Finnish Longitudinal Study on Municipal Employees). In each dataset we used Generalized Estimating Equations to examine the relationship between education and self-rated health after work exit with a maximum follow-up of 15 years and possible mediation of work characteristics, including physical demands, psychosocial demands, autonomy, and variation in activities. Results The low educated reported significantly poorer health after work exit than the higher educated. Lower educated workers had a higher risk of high physical demands and a lower risk of high psychosocial demands, high variation in tasks, and high autonomy at work, compared to higher educated workers. These work characteristics were found to be mediators of the relationship between education and health after work exit, consistent across countries. Conclusion Educational inequalities in health are still present after work exit. If workers are to spend an extended part of their lives at work due to an increase in the statutory retirement age, these health inequalities may increase. Improving working conditions will likely reduce these inequalities in health.
Background
Due to the ageing of populations in Europe, many European countries have concerns about securing the financial sustainability of their welfare systems. Thus, pension reforms have been implemented in some countries that raise the statutory pension age and reduce the possibilities of receiving early retirement benefits [1]. The question of whether these reforms might benefit those most capable of working longer and disadvantage those least capable of working longer has received too little attention. Yet, studies show large educational inequalities in health [2-5], with some evidence that these inequalities have increased over the last decades [6,7]. Part of these health inequalities may be attributable to adverse working conditions, which are more prevalent among workers with lower education [8,9]. Thus, if all workers are to spend an extended part of their lives at work, this may increase health inequalities, even after exit from the workforce. Studies in Western European countries show that social inequalities in self-rated health, depression, disability in daily activities, and mortality indeed persist after retirement [10-15].
With societies being confronted with population ageing, maintaining health in later life is not only desirable from a public health perspective, but it is also becoming increasingly important to prevent health and social care costs from rising. Healthier retirees are also better able than their unhealthy peers to help care for their partners, relatives or grandchildren and to do volunteer work in the community. Therefore, healthy retirees can be an important resource for the economy and for society more broadly [16].
The potential role of work characteristics in explaining health inequalities has received increasing attention during the last decade. The literature suggests that a low educational level is associated with adverse working conditions such as high physical job demands [17,18] and low control and reward at work [19]. However, some psychosocial job demands such as cognitive demands and time pressure are more common among workers with higher levels of education [9,20,21]. Many studies suggest that poor working conditions are associated with poor health [17,18,[22][23][24][25], and there is evidence that this effect remains after work exit [26][27][28][29][30].
Little evidence exists on the role of work characteristics in educational differences in health after work exit. Previous studies that have investigated the association between work characteristics and educational health inequalities have mainly focused on the working age population [31]. Findings from these studies suggest that physical job demands, psychosocial job demands, and psychosocial resources significantly contribute to health inequalities, with these working conditions mediating approximately 25-50% of educational inequalities in health [32][33][34].
Meanwhile, most studies so far have been cross-sectional. The few longitudinal studies that have investigated the association between work characteristics and health inequalities generally find that working conditions mediate a smaller proportion of the effect of educational level compared to most cross-sectional studies [31]. For example, Parker and colleagues [21], who examined health inequalities after retirement, found that working conditions mediated only a small proportion of the association between educational level and self-rated health after retirement. However, the mediating effect in the study depended upon the type of working condition as well as the health outcome, e.g., physical working conditions mediated up to 5% of the association between educational level and self-rated health, and 33% of the association between educational level and physical impairments. Psychological working conditions consistently explained very little of the association between educational level and the different measures of health. In contrast, another longitudinal study, by Borg and Kristensen [9], which was conducted among the working age population, found that physical and psychological working conditions together mediated as much as 59% of the association between educational level and self-rated health.
In sum, previous studies suggest that work characteristics partly mediate the association between educational level and health, but evidence remains fragmentary. In particular, there is a need for more longitudinal evidence on the extent to which working conditions mediate the association between educational level and health after work exit. In this cross-national longitudinal study we therefore examine (1) whether educational level is associated with health after work exit, and (2) whether work characteristics mediate the association between educational level and health.
Methods
EXTEND is a cross-national collaborative project which aims to examine inequalities in relation to extending working lives. We include national datasets from the five countries participating in the EXTEND project: the Netherlands, England, Germany, Denmark, and Finland, to provide a stronger evidence base for examining the role of work characteristics in explaining health inequalities after work exit. The present study adopted a coordinated analysis approach to maximize generalizability across different settings [35].
Sample
For the Dutch sample, data were used from the Longitudinal Aging Study Amsterdam (LASA). LASA is a nation-wide ongoing longitudinal study in people aged 55+, with follow-ups every three years. The sampling, data collection procedures and non-response have been described in detail elsewhere [36]. Data from the first (respondents aged 55-85 entering the study in 1992-1993), second (new respondents aged 55-65 entering the study in 2002-2003), and third (new respondents aged 55-65 entering the study in 2012-2013) cohorts were pooled for the current study (n = 555).
Denmark is represented by the Danish Longitudinal Study of Aging (DLSA), which is merged with Danish register data on labour market exit. DLSA is a longitudinal survey of people aged 52+. The study consists of four consecutive waves with five years between each wave (1997, 2002, 2007 and 2012) and with respondents born in the years between 1920 and 1960. Starting from 2002 a new cohort was added at each new wave. The study is described in more detail elsewhere [37]. In the current study data from all waves (n = 1938) were used.
The English data come from the English Longitudinal Study of Ageing (ELSA), which is a study of a large representative sample of men and women aged 50+ living in England. The study began in 2002 and the sample is re-examined every two years [38]. For the current study, data from wave 2 through 7 were used (n = 1391), as work characteristics were not measured in wave 1.
The German data come from the German Aging Study (DEAS), a longitudinal survey of the German population aged 40+, the first wave of which was conducted in 1996. Further waves followed in 2002, 2008, 2011 and 2014, with new cohorts added every six years. More detailed information on DEAS can be found elsewhere [39]. Data from four waves since 2002 were used in this study (n = 538).
The Finnish data come from the Finnish Longitudinal Study on Municipal Employees (FLAME), collected during 1981-2009. The baseline sample comprised 6257 respondents aged 44-58 and they all had been working at least 5 years in their current occupation. Four waves followed in 1985, 1992, 1997, and 2009. A detailed description of FLAME can be found elsewhere [40]. Altogether 5628 persons were included in this study.
In all datasets respondents were selected who stopped working and participated in at least one wave before and after they exited the workforce. Further inclusion criteria were: at least 50 years old at the last measurement before work exit (T0) and not older than the statutory retirement age at the moment of work exit. The health outcome was measured longitudinally after work exit, because we were interested in both the short-term and long-term health associations. Working conditions were measured at T0. Education and the control variables were not time-varying and were measured at T0.
Independent variables
Educational level The International Standard Classification of Education 2011 (ISCED 2011) was used to categorize educational level into three groups: low (up to lower secondary education), intermediate (upper secondary education or post-secondary non-tertiary education) and high (short cycle tertiary and higher).
Mediators
Because the associations between the continuous measures of the mediators and the outcome were not linear, the mediators were all dichotomized at the median, to maximize comparability between the countries.
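As a concrete illustration of this step, the short sketch below (in R, with `sum_score` as a hypothetical numeric vector for one work characteristic in one country) codes scores above the sample median as high exposure.

```r
# Sketch: dichotomize a work-characteristic sum score at the sample median.
# `sum_score` is a hypothetical numeric vector; 1 = high exposure, 0 = low.
high_exposure <- as.integer(sum_score > median(sum_score, na.rm = TRUE))
```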
Physical demands Data on physical work demands were available in all studies. In the Dutch study, work demands were derived from the general population job exposure matrix (GPJEM) for 55 to 65 year olds [41]. The GPJEM indicates levels of exposure probability of physical and psychosocial demands and psychosocial resources, based on job category. For physical demands, three items were used: use of force, uncomfortable work, and exposure to repetitive movements. Respondents were assigned a low, moderate or high score based on the probability of exposure to these physical demands. A sum score was calculated and dichotomized into low and high exposure to physical demands, based on the median of the sum score.
In the Danish study respondents were asked whether they thought their job requires: too much work using the body, too much lifting and carrying or too many uncomfortable or dislocated positions. Scores were dichotomized into low physical work demands ('no' on all three items) and high physical work demands ('yes' on at least one item).
In England, participants were asked which of these descriptions, ordered from least to most physically demanding, best describes the work that they do in their main job: (1) sedentary occupation: you spend most of your time sitting (such as in an office), (2) standing occupation: you spend most of your time standing or walking. However the way you spend your time does not require intense physical effort (e.g. shop assistant, hairdresser, security guard, etc.), (3) physical work: this involves some physical effort including handling of heavy objects and use of tools (e.g. plumber, cleaner, nurse, sports instructor, electrician, carpenter, etc.), and (4) heavy manual work: this involves very vigorous physical activity including handling of very heavy objects (e.g. docker, miner, bricklayer, construction worker etc.). Participants were also asked whether their job is physically demanding, with four possible responses from strongly agree to strongly disagree. These two items were summed and dichotomized at the median.
In the German study, physical demands were measured by two questions about strenuous work demands.
Respondents were asked to what extent they were stressed by strenuous or repetitive physical activities like carrying heavy objects, standing or sitting for long periods and negative environmental factors such as noise, heat, dust, gases, toxic substances or poor lighting. A sum score was calculated and dichotomized into low and high physical demands, based on the median.
In Finland, physical demands were measured with three items: repetitive work postures, bended, twisted or otherwise difficult work postures, and lifting and holding with hands. Respondents reported if they encountered these demands never, seldom, moderately, often, or very often. The sum score was categorized into low and high physical demands, based on the median.
Psychosocial demands Data on psychosocial work demands were available in all studies. In the Dutch study three items were used to measure psychosocial work demands: time pressure (work at high pace and work under high time pressure), task requirements (work fast, much work, work hard, and hectic work) and cognitive demands (intensive thinking, need to keep focused, and requiring much concentration). Using the aforementioned GPJEM, respondents were assigned a low, moderate or high score based on the probability of exposure to these psychosocial demands. A sum score was calculated and dichotomized into low exposure and high exposure to psychosocial demands, based on the median.
The Danish study used high rate of work, busyness and tight deadlines, lack of influence, and lack of recognition and respect as a measure for psychosocial work demands. Scores were dichotomized into low psychosocial work demands ('no' on all four items) and high psychosocial work demands ('yes' to at least one item).
The English study used two items to measure psychosocial work demands: working speed ('Considering the things I have to do at work, I have to work very fast') and pressure ('I am under constant pressure due to a heavy workload'). Both items were measured on a 4-point scale ('strongly agree' to 'strongly disagree'). The sum score was dichotomized using the median.
The German study used one question about pressure to complete heavy workloads or meet tight deadlines and nervous tension, which was dichotomized based on the median.
In Finland, psychosocial work demands were measured with three items: being responsible for others, complicated decision making, and urgent decision making and fast solutions. Respondents reported if they encountered these demands never, seldom, moderately, often, or very often. The sum score was categorized into low and high psychosocial demands, based on the median.
Variation in tasks In the Dutch study variation in tasks consisted of three items: variation in work, learn new things, and work requires creativity. It was based on the GPJEM and respondents could be assigned a low, moderate or high score based on the probability of exposure to these resources. The sum score was dichotomized into low and high based on the median.
In Denmark, variation in tasks was measured with the question: 'Do you think that your work requires too many monotonous and repetitive tasks'? Respondents who answered 'No' were categorized as having variations in working activities.
In Finland, variation in activities was measured with one item ('my work is monotonous and uninteresting'). Respondents replied if this is true at their work not all, little, somewhat, or much. The variable was dichotomized into low and high variation based on the median.
In England and Germany, no measure of variation in tasks was available.
Autonomy In the Dutch sample, autonomy was measured with the following items: decide how to perform the job, the sequence of tasks, work pace, when to take time off, and need to find solutions. It was based on the GPJEM and respondents could be assigned a low, moderate or high score based on the probability of exposure to these resources. The sum score was dichotomized into low and high based on the median.
In the Danish study, autonomy was measured with the following three items: 'To what extent can you organize your own work, use your qualifications in the right way, use your experience?'. All three items were measured on a 3-point scale ('to a high degree' to 'no'). The sum score was dichotomized based on the median.
In the English study, autonomy was measured by two items ('I feel I have control over what happens in most situations' and 'I have very little freedom to decide how I do my work'). Both items were measured on a 4-point scale ('strongly agree' to 'strongly disagree'). The sum score was dichotomized based on the median.
In Germany, no measure of autonomy was available.
In Finland, autonomy was measured with three items: influence your work environment, take part in planning your work, and use your competence and knowledge. The respondents replied according to the options 'not at all', 'little', 'somewhat', or 'sufficiently'. The sum score was dichotomized based on the median.
Dependent variable
Self-rated health Self-rated health (SRH) was chosen as the health measure to distinguish between workers in good and poor health. In the Netherlands, Denmark, England, and Germany, SRH was measured with the question 'How is your health in general?' and respondents could answer on a 5-point Likert scale. In the Finnish dataset the question was 'How do you estimate your health compared to your age mates?', with response categories 'much better', 'somewhat better', 'equal', 'somewhat worse', and 'much worse'. SRH was recoded so that higher scores reflect better health.
Control variables
We controlled for age at work exit, sex, region (not available in the Danish dataset), year, number of working hours, and type of exit. Number of working hours was categorized into four categories representing the most common part-time, full-time and more than full-time working hours in each country. In the Netherlands categories were: Information on the number of working hours was not available in the Finnish dataset. Type of exit was also categorized differently across countries. Categories of work exit in the Netherlands were: regular retirement, early retirement, unemployment, disability, and other; in Denmark: regular retirement, early retirement, and unemployment; in England: (early) retirement, disability, unemployment, and homemaker; in Germany: regular retirement, early retirement, unemployment, and other; and in Finland: regular retirement, disability, and other.
Missing values
Multiple imputation was used to deal with missing values on the mediator variables, which were assumed to be missing at random. All independent, control and outcome variables were included in the imputation process and the number of imputations was equal to the percentage of incomplete cases in each country [42] (NL: 6.0%; DK: 4.7%; ENG: 17.0%; DE: 20.4%; FI: 21.1%).
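The imputation step could, for example, be set up as in the sketch below, using the mice package in R; `dat` is a placeholder for one country's analysis file, and this is an illustration of the described rule rather than the code actually used.

```r
# Sketch: multiple imputation of missing mediator values, with the number of
# imputations set to the percentage of incomplete cases (e.g., m = 6 for 6.0%).
library(mice)

pct_incomplete <- round(100 * mean(!complete.cases(dat)))
imp <- mice(dat, m = max(pct_incomplete, 2), seed = 123, printFlag = FALSE)
```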
Statistical analysis
We conducted mediation analyses with single-mediator models. To estimate the c paths (total effect of education on SRH) and b paths (effect of mediators on SRH, controlled for education) we used Generalized Estimating Equations (GEE) with an exchangeable correlation matrix to take into account the clustering in the data due to repeated measures [43]. To calculate the a paths (effect of education on mediators) we used simple logistic regression. The models used to estimate the b paths also yield the estimates for the c' paths (the direct effect of education on SRH, controlled for the mediator). We used the product-of-coefficients method to calculate the indirect effects [44,45]. We built separate models for each mediator. Because the effect of work characteristics on health may diminish over time, interaction with time was examined for the b path. In case of a statistically significant (p < .10 [46]) interaction, associations were reported for each time point. All models were adjusted for age at work exit, sex, region, year, number of working hours, and type of exit. These analyses were carried out in Stata version 14. The product of a and b represents the indirect (mediation) effect [45]. To calculate 95% confidence intervals around these indirect effects, the Monte Carlo method was used [47]. We used the R web utility developed by Selig & Preacher [48], which calculates the 95% confidence intervals around the indirect effects based on the regression coefficients of the a and b paths as well as their standard errors. A visual representation of the models can be found in Fig. 1.
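To illustrate the single-mediator logic in code, the sketch below combines a GEE model for the repeated SRH outcome (c, b, and c' paths), a logistic model for the dichotomous mediator (a path), and a Monte Carlo interval for the indirect effect ab. It is a simplified reconstruction in R under assumed variable names (`srh`, `educ`, `high_phys`, `id`, and the covariates), not the Stata code or the Selig & Preacher utility actually used, and it shows only one mediator and one education contrast.

```r
# Sketch of a single-mediator model; variable and data-frame names are assumed.
# dat_long: one row per person-wave after work exit (id, srh, educ, covariates).
# dat_baseline: one row per person with the dichotomized mediator high_phys (0/1).
library(geepack)

# c path: total effect of education on SRH (GEE, exchangeable working correlation)
m_c <- geeglm(srh ~ educ + age_exit + sex + region + year + hours + exit_type,
              id = id, data = dat_long, family = gaussian, corstr = "exchangeable")

# a path: effect of education on the binary mediator (logistic regression)
m_a <- glm(high_phys ~ educ + age_exit + sex + region + year + hours + exit_type,
           data = dat_baseline, family = binomial)

# b and c' paths: mediator and education in the same GEE model
m_b <- geeglm(srh ~ educ + high_phys + age_exit + sex + region + year + hours +
                exit_type,
              id = id, data = dat_long, family = gaussian, corstr = "exchangeable")

# Monte Carlo CI for the indirect effect ab for the "low" vs. reference education
# contrast ("educlow" assumes educ is a factor with the high group as reference;
# the a-path coefficient is on the log-odds scale, as with a logistic a-path model).
a    <- coef(m_a)["educlow"];   se_a <- sqrt(vcov(m_a)["educlow", "educlow"])
b    <- coef(m_b)["high_phys"]; se_b <- sqrt(vcov(m_b)["high_phys", "high_phys"])
ab   <- rnorm(20000, a, se_a) * rnorm(20000, b, se_b)
quantile(ab, probs = c(.025, .975))
```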
Results
Characteristics of the samples can be found in Table 1. High physical demands were most prevalent in England (62.3%) and least prevalent in Denmark (32.0%). The highest percentage of workers with high psychosocial demands was found in Germany (70.3%). High variation in tasks was more prevalent in Denmark (77.0%) compared to Finland (46.0%) and the Netherlands (29.2%). High autonomy at work was most common in England (61.6%). The mean age at work exit ranged from 58.6 in Finland to 61.9 in the Netherlands. In the Netherlands and Denmark, early retirement was a common exit route, with a higher prevalence in the higher educated compared to the lower educated. Involuntary work exit, i.e. disability and unemployment routes, was generally more prevalent in the low educated group.
In all countries, those with a low educational level reported a significantly poorer health after work exit than their higher educated peers ( Table 2). These associations between educational level and SRH were strongest in England (b = −.507). Those with an intermediate educational level also had significantly poorer health after work exit than those with a high educational level. In Germany the difference between the intermediate and the higher educated group was not statistically significant.
Compared to high educated workers, low educated workers had a statistically significantly higher risk of high physical demands, and a lower risk of high psychosocial demands, high variation in tasks and high autonomy at work (Tables 3, 4, 5, 6, 7, a paths). The b paths represent the associations between the work characteristics and SRH. Interactions with time were included in the models to examine whether the associations were stable over time. If the interactions were statistically significant, coefficients were reported for each time point separately (Tables 3, 4, 5, 6, 7, b paths). In all countries high physical demands were associated with poorer health after work exit. In England, this association was found in the first years after work exit only. In the Netherlands, high psychosocial demands were associated with better health after work exit, but this association was delayed and faded after nine years. In Finland the association was stable over time. In Denmark, England, and Germany high psychosocial demands were associated with poorer health after work exit, although in England and Germany this association faded over time. High variation in tasks was associated with better health after work exit in the Netherlands and Finland with associations remaining up to 15 and 9 years after work exit, respectively, and in Denmark, where the effect was evident in the initial years after exit only. High autonomy at work was also associated with better health after work exit. This association was found in all countries, but in the Netherlands this effect was delayed and faded again after nine years.
Results suggested that all work characteristics were mediators in the association between educational level and health after work exit (Tables 3, 4, 5, 6, 7, ab). However, even after including these mediators in the models, an association of educational level with health after work exit remained (Tables 3, 4, 5, 6, 7, c' paths).
Discussion
The aim of our study was to examine whether educational level is associated with health after work exit in five Northern and Western European countries, and whether work characteristics mediate the association between educational level and health after work exit.
Consistent with earlier studies reporting educational health inequalities after work exit [10,[12][13][14][15], we found that the lower educated reported significantly poorer health than the higher educated. The association between educational level and health after work exit differed by country. We found the largest associations between educational level and health in England and Finland, and smaller, but still statistically significant associations in the Netherlands, Denmark, and Germany.
Next, we examined the associations between educational level and work characteristics, and the associations between work characteristics and health after work exit (while controlling for educational level). Consistent with the empirical literature [8,9], we found that lower educated workers had a higher risk of high physical demands, and a lower risk of high psychosocial demands, high variation in tasks and high autonomy at work, compared to higher educated workers. We also found that work characteristics were associated with health after work exit, sometimes even up to 12-15 years. The duration of these associations differed by country and by work characteristic. The negative association between physical demands and health was apparent even years after exiting the work force in all countries except for England, where this association diminished after the initial years after exit. The positive effects of psychosocial resources at work, i.e. variation in tasks and autonomy, generally were also still present many years after work exit. Results on psychosocial demands were mixed. In the Netherlands and Finland psychosocial demands were associated with better health after work exit, whereas psychosocial demands were associated with poorer health in England, Denmark, and Germany. These divergent findings may be due to differences in the constructs measured across the countries. In the Netherlands and Finland, psychosocial demands were mainly operationalized as cognitive demands, e.g. having to make complicated decisions and doing tasks that require a lot of concentration. In the other countries, psychosocial demands consisted mainly of items measuring time pressure and heavy workload. This suggests that cognitive demands can be seen more as a positive challenge at work, which is likely beneficial for health, whereas demands such as working under time pressure are associated with poorer health. Therefore, the mediated and direct effect had opposite signs in England, Denmark, and Germany, which led to a suppression effect for psychosocial demands in these countries, i.e. the association between educational level and health was larger after including these suppressors in the models [49]. Results on the duration of the effect of psychosocial demands on health after work exit were mixed, with longer-lasting effects in the Netherlands and Denmark, and more short-term effects in England and Germany.
We found that physical demands, psychosocial demands, variation in tasks and autonomy at work all partially mediated the association between educational level and self-rated health after work exit. Although there were some country differences, these mediating effects were generally observed in all five countries. However, after including these mediators in the statistical models, substantial associations between educational level and health after work exit remained. Parker et al. concluded in their longitudinal study on post-retirement health that physical demands partially explained the association between education and physical impairment, but not between education and self-rated health. They did not find evidence for a mediating effect of psychosocial demands [21]. These differences in findings may be due to different measures and different methods to analyze the mediation effects. Parker et al. dichotomized educational level into lower education (mandatory education only) and higher education (more than mandatory education), while we used the ISCED categories low, intermediate and high educational level. Their measures of physical working conditions ('In your work situation, are you exposed to gas, dust, smoke, noise, and/or heavy lifting?') and psychological working conditions ('Is your work mentally taxing, stressful, repetitious, monotonous, or mentally [...]?') were also operationalized differently from the measures used in our study. Also, we made full use of our longitudinal data by including interactions with time to examine changes over time in the mediation effects. Furthermore, we not only included physical and psychosocial work demands, but also included psychosocial resources: variation in tasks and autonomy, which were also found to be mediators. In view of the necessity to spend more years working due to an increase in the statutory retirement age, our results indicate that it is important to adapt working conditions to improve health and reduce health inequalities. Our study provides evidence to suggest that physical demands, psychosocial demands, variation in tasks, and autonomy are associated with health and that they partly mediate the association between education and health. Even years after work exit, associations between work characteristics and health still exist. Workplace interventions improving working conditions may improve the health of all retirees as well as decrease educational inequalities therein. Participatory ergonomics interventions, in which workers are actively involved in developing and implementing changes in the workplace, may be promising to reduce physical demands at work [50]. Measures to enhance variation and autonomy could be job rotation, which involves moving employees from job to job at regular intervals; job enlargement, which refers to expanding the tasks to add more variety; and job enrichment, which gives workers more responsibility and control over how they perform their own tasks. Because working conditions explain only part of the educational inequalities in health, inequalities are likely to be reduced but not dissolved when improving these conditions. Therefore, health interventions, especially those aimed at the lower educated, should also be implemented to promote health and reduce health inequalities. It has also been argued that education itself should be considered as a domain of public health [51,52].
Table 3. Single-mediator analyses of the effect of education and work characteristics on self-rated health in the Netherlands (coefficients for low and intermediate education, waves T2-T5; table not reproduced here). Notes: a path = effect of education on the mediators; b path = effect of the mediators on SRH; ab = indirect effect; c' path = direct effect of education on SRH. If there is no significant interaction with time, coefficients are presented at T1 only. * p ≤ .05; ** p ≤ .01. B adjusted for age at work exit, sex, year, region, number of working hours, and type of exit.
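A minimal sketch of the product-of-coefficients logic behind these single-mediator models is given below. The file and variable names (retirees.csv, low_edu, phys_demands, srh) are hypothetical, and the sketch simplifies education to a binary indicator; the actual analyses used three ISCED levels, repeated health measurements, and interactions of the paths with time since work exit.

```python
# Minimal single-mediator sketch (product-of-coefficients): education -> work
# characteristic -> self-rated health. Hypothetical data and column names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("retirees.csv")                    # hypothetical analysis file
covars = "age_at_exit + C(sex) + C(exit_type) + C(region)"

# a path: effect of (low vs. high) education on the mediator, e.g. physical demands
a_fit = smf.ols(f"phys_demands ~ low_edu + {covars}", data=df).fit()

# b path and c' path: mediator and education jointly predicting SRH after work exit
bc_fit = smf.ols(f"srh ~ phys_demands + low_edu + {covars}", data=df).fit()

a = a_fit.params["low_edu"]          # a path
b = bc_fit.params["phys_demands"]    # b path
c_prime = bc_fit.params["low_edu"]   # c' path (direct effect)
ab = a * b                           # indirect (mediated) effect

print(f"a={a:.3f}  b={b:.3f}  ab={ab:.3f}  c'={c_prime:.3f}  total={c_prime + ab:.3f}")
```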
The present study has some limitations. First, in all countries only characteristics of the last held job were used. However, it is possible that those with worse health had already changed jobs to accommodate their health better, which may have attenuated our results [53]. Therefore, our results should be replicated by studies investigating the role of characteristics of the longest held job. Second, not all work characteristics were measured in all countries. For instance, information about variation in tasks and autonomy at work was not available for Germany. The mediating role of psychosocial resources, i.e. variation in tasks and autonomy at work, can therefore not be generalized to the German context. Third, we included only SRH as our health outcome because it was the only health measure available in all datasets. SRH can be used as a global measure of health in the general population [54]. It has previously been associated with other health measures, e.g. depression [55], inflammation [56], functional limitations [57], and mortality [58]. However, studies show that there may be educational differences in the relation between objective health and SRH, and thus results may be either over- or underestimating educational health inequalities [59]. Therefore, results should be interpreted with caution when using SRH as a proxy for objective health. In our study, however, SRH is seen as a global measure of people's perception of their health, and we refrained from making claims about associations of education and job characteristics with specific objective health outcomes [60,61]. Furthermore, because of relatively small sample sizes in some of the countries, we did not examine multiple mediators in one model. The next step would be to also examine these parallel mediation models, because the mediators are likely to be interdependent and may together be part of a causal mechanism that is more complex than what we could test in our study. Finally, differences between countries in effect sizes may be due to country-level factors we did not control for in our study, e.g. generosity of benefits.
This study also has important strengths. Most research has focused on the working population and used cross-sectional data. We included five longitudinal datasets, following respondents well into retirement, and covered five of the highest-income countries in Europe with different welfare regimes. A further strength is that the effects found were consistent across countries, despite potential differences in how the work characteristics were operationalized. The exception to this was psychosocial demands; further work is needed given the disparate measures across the national datasets.
Conclusion
Our longitudinal, cross-national study demonstrated educational inequalities in self-rated health after work exit in the Netherlands, Denmark, England, Germany, and Finland. These educational inequalities were partially mediated by physical demands, psychosocial demands, variation in tasks and autonomy at work. The associations between these work characteristics and health sometimes lasted up to 12-15 years after having exited the work force. Thus, if workers are to spend an extended part of their lives at work, health inequalities may increase, not only in recent retirees, but also years after work exit. Improving these working conditions will likely reduce, but not dissolve, educational health inequalities after work exit. In addition, health interventions and promotion targeting the lower educated retirees, especially those who experienced unfavorable work demands, may prove to be important in improving health and diminishing health inequalities in older adults. | 2019-11-13T02:52:17.590Z | 2019-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "440bdc97286343b7983ed634237f8b495c1e4d8f",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-7872-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "440bdc97286343b7983ed634237f8b495c1e4d8f",
"s2fieldsofstudy": [
"Education",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270960170 | pes2o/s2orc | v3-fos-license | Scoping review of qualitative studies on family planning in Uganda
Family planning (FP) is an essential component of public health programs and significantly impacts maternal and child health outcomes. In Uganda, there is a need for a comprehensive review of the existing literature on FP to inform future research and programmatic efforts. This scoping review aims to identify factors shaping the use of FP in Uganda. We conducted a systematic search of eight scholarly databases, for qualitative studies on FP in Uganda. We screened the titles and abstracts of identified articles published between 2002–2023 and assessed their eligibility based on predefined criteria. We extracted data from the 71 eligible studies and synthesized the findings using thematic analysis and the Ecological Systems Theory (EST) individual, interpersonal, community, institutional, and policy-level determinants. Findings reveal the interplay of factors at different socio-ecological levels influencing family planning decisions. At the individual level, the most common determinants related to the EST were knowledge and attitudes of FP. Interpersonal dynamics, including partner communication and social support networks, played pivotal roles. Community-level factors, such as cultural norms and accessibility of services, significantly influenced family planning practices. Institutional and policy-level factors, particularly a healthcare system’s quality and policies, also shaped use. Other themes included the intersection of HIV/AIDS on FP practice and Ugandan views of comprehensive abortion care. This scoping review underscores the intricate socio-ecological fabric shaping FP in Uganda. The findings highlight the need for targeted interventions to increase knowledge and awareness of FP, improve access to services, and address social and cultural norms that discourage contraceptive use. Policymakers and program implementers should also consider gender dynamics and power imbalances in FP programs to ensure they are equitable and effective.
Introduction
Family planning (FP) is an essential component of public health programs and has a significant impact on maternal and child health outcomes by reducing high-risk pregnancies and allowing sufficient time between pregnancies [1]. Like many low- and middle-income countries (LMICs), Uganda faces a range of challenges in implementing and scaling up FP programs and comprehensive abortion care (CAC) services. In Uganda, more than half of pregnancies are unintended, and nearly a third of these end in abortion [2]. The estimated unintended pregnancy rate is 145 per 1000 women aged 15-49 years [3]. In recent years, there has been an increasing interest in understanding the determinants and barriers to FP uptake in Uganda, and a few previous studies have been conducted to explore these issues [4][5][6][7]. However, there is still a need for a comprehensive review of the existing literature on FP in Uganda to understand the factors that influence FP uptake and to inform future research and programmatic efforts.
This scoping review of qualitative studies aims to provide an overview of the current state of knowledge, attitudes, and practices of FP and CAC in Uganda. By focusing on qualitative research, this scoping review seeks to provide a deeper understanding of the social, cultural, and economic factors that shape FP decision-making and CAC in Uganda. The Ecological Systems Theory (EST) has been used to identify factors that influence the adoption and dissemination of FP, how new methods or technologies are introduced, and how social and cultural norms influence the uptake of FP practices. The landscape of FP in Uganda is intricate, influenced by a multitude of factors spanning individual, interpersonal, community, institutional, and policy levels.
The EST provides a comprehensive framework to identify the complex interactions shaping FP practices. Over the past decade, Uganda has witnessed evolving dynamics in reproductive health, necessitating a nuanced understanding of the socio-ecological factors at play [7][8][9]. At the individual level, knowledge, attitudes, and socio-demographic characteristics significantly impact decisions, while interpersonal relationships and social networks within communities play crucial roles. Cultural norms, values, and the accessibility of FP services at the community level further shape adoption patterns. Institutional factors, such as the quality of the healthcare system, and national policies constitute the broader context influencing FP practices. By exploring these interconnections, this review aims to identify patterns and gaps and to inform targeted interventions, contributing to improved reproductive health outcomes in Uganda.
The EST has been used extensively to study reproductive health behaviors [10,11], including FP uptake [12,13] and abortion [14]. Studies have shown that attitudes, knowledge, and cultural norms are all significant predictors of FP behavior [15,16]. For example, individuals with positive attitudes toward FP are more likely to use contraceptives [17,18]. Meanwhile, individuals who perceive that their social network supports FP are more likely to use contraceptives [19], while those who perceive social disapproval are less likely to do so [20]. Finally, individuals who perceive that they have control over their reproductive health are more likely to use contraceptives than those who perceive barriers to access [21].
This scoping review will use the EST as a guiding framework to explore qualitative studies on FP in Uganda to identify gaps and patterns across the socio-ecological spectrum. By synthesizing the findings of qualitative studies using the EST, we aim to provide a comprehensive understanding of the factors that influence FP uptake in Uganda. This knowledge can be used to inform the development of targeted interventions to increase FP uptake and CAC, improve reproductive health outcomes in Uganda, and provide recommendations for future research and programmatic efforts.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Study design
The study follows the methodological framework Arksey and O'Malley developed for scoping reviews to identify the research question and relevant studies, select studies, chart the data, and report the findings [22]. When one is mapping and exploring the literature on a topic, a scoping review is most appropriate [23,24].
Literature search strategy
A comprehensive literature search was conducted using eight scholarly databases: MEDLINE (via Ovid interface), EMBASE (via Embase.com), Scopus, CINAHL (via EBSCOhost), Web of Science Core Collection (via Thomson Reuters), Global Health (via CABI), PsycINFO (via EBSCOhost) and Women's Studies International (via EBSCOhost). Keyword and controlled vocabulary search terms were used to represent concepts related to sexual and reproductive health in the context of FP or CAC in Uganda. The search was conducted by a health sciences informationist (GKR) in May 2022 and then updated in May 2023. The search strategies can be accessed in the repository at https://hdl.handle.net/2027.42/191720.
Geographic search terms were used to focus search retrieval on articles referencing Uganda at the country level, by district [25], or by the capital city of Kampala. Lastly, a revised qualitative/mixed methods search filter was used in all eight database searches [26]. Two unique qualitative/mixed methods search filters were revised for use in Ovid Medline to maximize retrieval of qualitative studies [26,27]. Final search strategies were determined through test searching and the use of search syntax to enhance search retrieval. No language limits were applied.
A search was conducted in May 2022, followed by an update of search results in May 2023. Search results were limited to articles published from 2002 to 2022, resulting in 4,217 citations exported to EndNote for processing and removal of duplicate citations. A final count of 1,422 citations was assessed and screened in Rayyan [28] according to inclusion and exclusion criteria.
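The registered, database-specific strategies are available in the repository linked above. Purely as an illustration of how concept blocks are combined with Boolean operators (the terms below are examples, not the strategy that was actually run), the structure resembles the following sketch:

```python
# Illustrative only: shows how concept blocks (family planning / Uganda / qualitative)
# could be combined with OR within a block and AND across blocks.
# The actual, database-specific strategies are in the linked repository.
concept_blocks = {
    "family_planning": ['"family planning"', "contracept*", '"abortion care"'],
    "uganda": ["Uganda", "Kampala"],
    "qualitative": ["qualitative", '"focus group*"', "interview*"],
}

def or_block(terms):
    """Join synonyms of one concept with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(terms) for terms in concept_blocks.values())
print(query)
# ("family planning" OR contracept* OR "abortion care") AND (Uganda OR Kampala)
#   AND (qualitative OR "focus group*" OR interview*)
```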
Inclusion and exclusion criteria
The inclusion criteria for this study were articles that: (1) focused on FP and CAC research studies conducted in Uganda; (2) used qualitative research methods; (3) were published in peer-reviewed journals; and (4) reported the views of male and female Ugandan citizens, healthcare providers, or policymakers. The search included full-text articles published in any language from 2002 to 2023. The timeframe was chosen to identify recent literature and to capture the period after the cessation of the long-running conflict in northern Uganda between the Lord's Resistance Army and the Ugandan Government, which lasted for over two decades, causing immense suffering and displacement for the people in the region [29]. We included studies with data from multiple countries if Uganda-specific data were reported separately. The exclusion criteria were articles that: (1) focused solely on quantitative research methods or encompassed other publication types (editorials, protocol papers, or commentaries, because they typically do not present qualitative data; dissertations, because they are not peer reviewed; and abstracts, because they are not full-length articles); (2) were not relevant to FP or CAC in Uganda; (3) were not published in peer-reviewed journals; and (4) reported the views of non-Ugandan citizens (i.e., refugees from other countries).
Study selection and data extraction
Two independent reviewers screened the titles and abstracts of the identified articles to determine eligibility for inclusion using the web-based tool Rayyan [30]. A third reviewer resolved discrepancies. Full-text articles of the selected studies were retrieved, and reviewers from the reviewing team further assessed them for eligibility. The reviewers communicated regularly to achieve consensus about the selection of studies.
Data were extracted from the selected articles by six reviewers using a standardized Microsoft Excel data extraction form that included the author(s), article title, year of publication, study population, study aims, sample size, key findings related to the EST determinants (individual, interpersonal, community, institutional, and policy level), and implications for FP policy and practice in Uganda (S1 Appendix). The quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP) tool for qualitative studies [31]. Handsearching of reference lists was not performed. A deductive approach was used in presenting the data.
Results
After screening and full-text review, 71 articles were included in the scoping review.
Individual level factors
Knowledge and attitudes. Several studies highlighted the significance of individual-level factors in FP decisions. Knowledge and attitudes towards contraception emerged as critical determinants of FP use. Individuals with accurate information and positive attitudes were more likely to adopt and sustain FP practices. The main barriers to FP uptake were a lack of knowledge and awareness. In the context of this review of qualitative studies, attitudes refer to an individual's overall positive or negative view toward FP. The most common theme of the findings in the studies (the focus of thirteen studies in total) centered on the individual's reasons to use (or not use) contraceptives. Individuals were more likely to use FP when they desired better child spacing but could be wary of the possible side effects and health concerns [33][34][35][36][37][38][39][40]. Men often feared that the side effects of contraception methods would hinder sexual activity [38,41]. People are more interested in using, and continue to use, methods they are familiar with [36]. The FP method's efficacy is also considered, as is the cost [30]. Individuals were also concerned with their level of privacy [40,[42][43][44] and would be more likely to use contraception when not having to involve a health care provider [37,45,46]. Men who agree to use condoms do so to please their partners, protect themselves and their families from HIV, and in some cases, have multiple partners [47]. Using contraception can foster a sense of accomplishment in women [48]. Table 1 displays representative quotes from articles organized by EST factors.
Several articles focused on adolescents' knowledge and use of contraceptives. Kyegombe et al. [49] found that, while the specific age varies, the concept of childhood must be protected, including from engaging in sexual activity. Parents who did not support adolescent contraception worried it promotes promiscuity and infertility [50]. Mbalinda et al. [51] and Nobelius et al. [52] found that cultural norms could dissuade teens from using condoms; the older adolescents used them specifically to prevent pregnancy. Other articles [43,53,54] also found that the use of contraceptives among adolescents depended on the information and sources available and how acceptable contraceptives were in the community. Some parents felt unequipped to discuss FP options with their children and instead avoided the topic [50].
Interpersonal level factors
Social support networks. The influence of social support networks was evident, with family, friends, and colleagues playing roles in shaping FP choices, depending on how much they approved or disapproved of it [40]. For example, people reported religious leaders and mothers-in-law mainly discounting the use of FP [60], and adolescents were discouraged from using self-injective contraception over safety concerns [37]. Parents who did support their adolescent children cited its use to prevent STDs, unwanted pregnancy, and early school dropout [50]. Attitudes of their peers [46,60] and male partners [46,61,62] also influenced FP uptake.
Male involvement and communication. Barriers to FP uptake stemmed from cultural norms prioritizing imparting FP knowledge to adult women over men and adolescents. Several studies [6,51,58,63,64] reported that men indirectly learned about FP methods from their partners' experiences and the knowledge their partners received from healthcare providers. Cultural and gender norms in Uganda negatively impacted men's opinion of FP, which was considered the domain of women [63,65]. Most fears of FP came from misinformation related to the possibility of infertility [66]. When facts were presented, people gained confidence in discussing options with their partners and healthcare providers [67][68][69]. A telehealth package studied by Kamulegeya et al. [70] enabled men to share FP information based on informative and timely messaging, which built self-confidence in their knowledge of FP.
Community level factors
Cultural norms and values. A person's use of contraception can be influenced by the community's gender and social norms around expectations of fertility choices [71]. Traditional Ugandan culture values having several children [60]. Kabagenyi et al. [38] mention that continued births are a sign by wives of their respect and love for their husbands. These pressures can create a situation where women lack the agency to make decisions about contraception [40]. Willcox et al. [72] found that social-cultural pressures have even more influence in low-income settings. Gaffikin and Aibe [73] described an integrated approach to FP that affirms its importance to the community by demonstrating the links to improved economics and natural resource management.
Table 1 (representative quotes organized by EST level and factor; https://doi.org/10.1371/journal.pgph.0003313.t001):
Individual level - Knowledge: "Women listen to a lot to myths. I heard some saying that when one is on family planning their sexual urge gets interrupted and their husbands may leave them due to that insensitivity to sex." (Female, age 42, IDI) [6]; "Those pills are dangerous; they go through the fallopian tube and go to that area where eggs come from. So, when the pill falls in the middle of all the eggs, it burns them all... they burn the entire woman's eggs and form a big scar. You may die without ever becoming pregnant." (married woman, 15-19 years, FGD) [53]
Individual level - Attitudes: "I cannot practice family planning because I want as many children as I can give birth to. My mother in-law says she wants me to bear many children. Also, God said we should give birth and fill the world [so] why should I limit myself? I want to give birth to 12 children. So far, I have only three children. I give birth to my children here at home and not in the hospital." (Female age 19, IDI, Acholin village) [34]; "Women listen to a lot to myths. I heard some saying that when one is on family planning their sexual urge gets interrupted and their husbands may leave them due to that insensitivity to sex." (Female, age 42, IDI) [6]
Interpersonal level - Social support networks: "A key challenge to youth access is that... there is a fear of reversibility. 'If I use an IUD, how will I get pregnant again?' 'If I use an implant, how will I get pregnant again?' So people don't want to use those long methods because they are scared that if they do, they will never get pregnant. This is a big source of concern for young people. This is a big concern here because once you get married, people expect you to get pregnant." (International NGO representative, IDI) [55]
Interpersonal level - Male involvement and communication: "Men in this village do not like to use family planning and they prohibit their wives from using it. So women who come for family planning-they hide from their partners. Men know how and where to check especially for those who use implants they know the right arm to check.... Men know that the implant is put on the right arm and they know the position so they check their wives to find out whether they went secretly to use family planning." (31-year-old female nurse, IDI) [6]
Community level - Cultural norms and values: "A woman should be obedient by listening to her husband. She should also be respectful. She should care for the children and [be] hardworking in her home. She must be humble in her talking, faithful to her husband, and welcoming." (Female, age 19, IDI) [56]
Community level - Accessibility of FP services: "At the Health Centre the long-term methods are not available. We usually wait for an announcement by Blue Star (a local program that provides long-acting contraceptive methods) that the services are being brought, and that's when we come to the health center." (Young female client at rural health center, FGD) [45]
Institutional and policy level - Healthcare system: "I always talk to the health provider and she finds a way of helping me. I cannot take a decision on my own regarding these challenges. When I got the injection and experienced problems, I came back and talked to her. She told me 'such things happen at the start but you will be fine after some time', and I indeed I got well after." (Female, age 26, IDI) [57]; "I did not tell anyone about my side effects with implants [stopped using it] except the health providers because people can spread rumours and yet my husband does not want me to use it." (Female, age 25, IDI) [57]
Institutional and policy level - Health policy environment: "For us men, we really like to bring those services nearer to us because the women come here [to the health centre] for antenatal clinics and when they bring children to be immunized they are taught about family planning. Yet we, who don't bring children to be immunized, don't get that information of the methods." (male, FGD) [58]; "New [contraceptive] users were even more affected. They thought that family planning services were also locked-down." (Female health worker, KII) [59]
Accessibility of family planning services. The availability and service delivery of contraception also influence people's ability to use them [45,71,74]. The availability of multiple FP options from which to choose increases uptake [75]. Stockouts are often reported by patients and healthcare providers [40], while policymakers were not aware of the extent of the issue [76]. Women who choose self-injections to maintain privacy, and are aware of the safety risks, will opt to properly dispose of the injections with a healthcare worker, even if more convenient options are available [77]. There remain limited options for male-directed FP methods (including vasectomies) [63], but men believe if more options were
available, men's positive attitude towards FP would increase [78].
Fortin et al. [79] found that structural inequalities and health vulnerabilities interact at the intersection of women's identities (derived from their motherhood, marriage status, employment status, education level, and whether they had a disability or a chronic illness). More rural areas experience challenges in accessing FP services, which also deters contraception use [34]. The disruption of COVID-19 made it even harder for people to access FP services. People experienced unsteady employment/income, unintended pregnancies, unreliable transportation, and service delivery interruptions [59,80]. Lockdowns exacerbated the impact of poverty and gender inequality on FP access [59].
Institutional and policy level factors
Healthcare system. Providers play a role in the perception and delivery of sexual and reproductive health services. Providers want to better incorporate FP into their practice and give knowledge to their patients [51,81,82]. However, the availability of providers limits their ability to provide health services [51,83]. More services could be provided with more training [81,84,85], and healthcare providers must be supportive to earn the trust of their patients [86]. More community outreach in public health interventions may counteract the misinformation [87].
Other articles focused on the role of healthcare workers in educating the community on FP. Kibira et al. [57] and Mbalinda et al. [51] found that healthcare providers were the most influential educational sources about FP. Namanda et al. [60] found that their role was even more influential for married individuals. Healthcare providers' sensitivity to their patients' misconceptions and expectations shapes the continued use of contraception [44,83]. If they exaggerate perceptions of side effects, especially in comparison to the risk of pregnancy, patients are less likely to use contraception [84].
Health policy environment. In terms of policies, some of the articles focused on how policies impacted access to FP services. Kaida et al. [58] discussed the perceptions and attitudes of Ugandan men towards family planning in light of the Ugandan National Population Policy, the objective of which was to increase men's participation in family planning. The study found that FP services were still not targeted towards men, limiting their access to knowledge, but men are willing to be involved in discussions about FP [58]. Tuhebwe et al. [87] also found that while the intention of the Uganda reproductive health services standards was to increase access for adolescents, the implementation was underutilized as designed, when examined through the framework of the WHO global standards [88]. Providers designed interventions that targeted a wide age group of adolescents and were facility-based, while the adolescents in the study preferred community-based services targeted towards a narrower age range. Grindlay [76], focusing on the frequency of stockouts of FP materials, found that policymakers were unaware of the magnitude of the issues faced by providers. Two studies also focused on the negative effects of the COVID-19 mitigation measures instituted by the Ugandan government during the pandemic on access to FP services. The lockdown decreased the family income available to pay for services and limited transportation to travel to village health teams for services [59,80]. These measures also magnified pre-existing inequalities in access [59].
In addition to the EST factors, other themes emerged from the included studies: the intersection of HIV/AIDS on FP practice and Ugandan views of CAC.
Intersection of HIV/AIDS on family planning practice
Nine articles discussed the intersection of HIV/AIDS in FP practice. Communication between partners is still important [62,89,90]. Kosugi et al. [89] observed that those who perceived they were at a higher risk of HIV/AIDS were more likely to use dual-method contraceptives. HIV prevention can be confusing, and people struggle to balance their perceptions of HIV risk with their desire to have children [91][92][93]. Health providers struggle to provide integrated HIV, antenatal, and prenatal care services [94]. Testing for HIV is difficult due to a fear of the results and the perceived social stigma [95]; though self-testing may keep the information confidential, it is not as accurate, and a positive result could still prove harmful [96].
Comprehensive abortion care
Seven of the articles focused on the perceptions of comprehensive abortion care. Of the seven articles, two focused on men as a study population, four on women, and one on healthcare workers. In-depth interviews (IDIs) were the primary data collection type, with all seven studies using IDIs as one of their data collection methods. Only one article [97] used semi-structured interviews (SSIs) and focus group discussions (FGDs). The publication years also varied: two articles were published in 2022, four were published between 2005 and 2017, and one article was published in 2023. The primary aims of the articles were also varied, focusing on perceptions, experiences, and attitudes toward comprehensive abortion care.
The perception of abortion care is influenced by the individual's attitude towards it. Women decide abortion is necessary due to financial constraints, unplanned pregnancy, and complicated social networks [98]. They may desire to keep their decision to have an abortion private, but that can lead to unsafe abortions and late care-seeking [99]. Moore et al. [100] found that men perceived women's reasons for seeking abortions differently from women. B. Nyanzi et al. [101] also found that men's views on abortion are ambivalent, seeing abortions as either a solution or something to be avoided.
Gender and social norms also influence people's views of abortion care. The agency to make such a decision is constrained by gender norms [99]. Moore et al. [100] found that men expressed more rigid anti-abortion sentiments than they actually felt. Women often consult many people in their community before deciding on how to access abortion services [102]. Kabunga et al. [98] found that women who have an abortion experience a loss of family support and internalized perceived stigma.
Perceived behavior control guided by the health care structures also influences views on abortion care. In terms of the availability of healthcare workers, midwives were viewed as competent and more present at facilities; however, doctors were still viewed by women as needed in case of emergencies [103]. The perception of abortion care treatment is informed by the outcomes experienced by patients. Treatment with misoprostol to manage incomplete abortions was viewed positively when it was successful and the women felt safe [104]. Such treatment also received unanimous support from healthcare workers because it was deemed safe, effective, and inexpensive, but it does put a strain on the healthcare facility and staff [102]. Women's satisfaction decreased when they experienced side effects, such as worrying bleeding [104].
Limitations of review articles
In this review, we identified several limitations in the studies analyzed. Firstly, social desirability bias may have influenced participant responses due to concerns about providing socially acceptable answers [46,75,77,99,103] or lack of privacy during data collection [40], especially when sensitive questions were asked [49]. Secondly, recall bias might have been present, as participants may have had difficulty accurately recalling events that occurred years ago, potentially leading to inaccuracies in the data [51,67]. Additionally, a limitation in the studies was the use of small sample sizes, which could affect the generalizability of the findings to larger populations or different contexts [41,61,69,83,85]. Moreover, some studies were geographically limited [96], conducted in small areas in Uganda, restricting the applicability of results to broader populations or other regions. Finally, a few studies relied on a limited number of interviews [62,89], possibly compromising the comprehensiveness and depth of insights obtained for the research topic. These weaknesses highlight areas for improvement in future research and call for careful interpretation of the findings.
Discussion
The purpose of this scoping review of qualitative studies conducted in Uganda was to identify the socio-ecological factors shaping the use of FP and CAC in Uganda.The scoping review aimed to map the existing literature on FP in Uganda, identify key themes, and explore the gaps and future directions for research.The findings illuminate the complex interplay of individual, interpersonal, community, institutional, and policy-level factors that shape FP decision-making in Uganda.By understanding the diverse range of influences across different socio-ecological levels, stakeholders can develop tailored interventions that address specific barriers and promote informed decision-making.
At the individual level, our review highlights the pivotal role of knowledge, attitudes, and socio-demographic characteristics in shaping FP utilization. Educating individuals about FP methods and debunking misconceptions is essential for promoting informed decision-making. Moreover, understanding how socio-demographic factors such as age, education, and income intersect with FP practices can inform targeted interventions aimed at reaching vulnerable populations. Attitudes towards FP were generally positive among both men and women. However, negative attitudes towards FP were also reported, primarily by men, highlighting the need for targeted interventions to address misconceptions and provide accurate information about FP methods. Interpersonal dynamics emerged as critical determinants of FP behavior. Effective communication within couples and the presence of supportive social networks significantly influence contraceptive uptake. The same holds for research performed in other LMICs, where factors associated with the unmet need for FP and non-contraception use are common across different settings [105]. In a synthesis of systematic reviews of factors influencing contraception choice and use globally, D'Souza and colleagues [106] found that factors affecting contraception use are similar among women globally. Use of FP is influenced by relationship status, women's knowledge, beliefs, and perceptions of side effects and health risks, along with male partners, peers' views, and families' expectations, all having a strong influence [106]. Strengthening these relationships through counseling and community-based initiatives can enhance FP decision-making processes.
Community-level factors, including cultural norms and accessibility of services, profoundly impact FP practices.Cultural norms, or the influence of social networks on FP decision-making, were found to be an important factor in contraceptive use.Adolescents' knowledge and use of contraceptives also factored predominantly in research performed in Uganda.Adolescents in Uganda are not alone in their unmet need for FP.Chandra-Mouli et al. [107] assert that all adolescents in LMICs have obstacles accessing their right to contraception, and countries should remove social and medical restrictions to delivering preferred contraception to adolescents.Addressing cultural barriers and ensuring the availability of FP services in remote and marginalized communities are imperative for promoting reproductive health equity.Moreover, community engagement strategies that involve local leaders and stakeholders can foster a supportive environment for FP uptake.
Social support from spouses, family members, and community health workers increased the likelihood of FP uptake in Uganda. However, social norms around gender roles and male dominance in decision-making were identified as barriers to contraceptive use. In a non-Uganda-specific systematic review, Mandal and colleagues [108] evaluated gender-integrated FP and maternal health interventions in LMICs and proposed that gender constructs, such as gender-equitable attitudes and decision-making power, must be adapted to examine how empowerment and improvements in gender-related factors can produce positive FP outcomes.
Institutional and policy-level factors play a crucial role in shaping the FP landscape. Barriers to accessing FP methods, such as lack of knowledge, limited access to services, and cost, were reported by participants. Addressing these barriers through improved access to FP information and services would be important in increasing perceived control over reproductive health decisions in Uganda and beyond. L. M. Williamson et al. [109] conducted a review of qualitative research to examine the limits to modern contraceptive use identified by young women in developing countries and determined that increasing modern contraceptive method use requires community-wide, multifaceted interventions and the combined provision of information, life skills, support, and access to youth-friendly services. Improving the quality and accessibility of healthcare services, as well as advocating for supportive policies, is essential for enhancing FP access and utilization. Additionally, efforts to strengthen healthcare infrastructure and increase funding for FP programs are vital for sustaining reproductive health initiatives in Uganda.
In addition to the EST constructs, other important themes emerged from the included studies, such as the intersection of HIV/AIDS on FP practice and Ugandan views of CAC.These findings highlight the need for targeted interventions that involve healthcare providers and address gender norms and dynamics within communities.Similar to studies in Uganda, a review of evidence about meeting the RH needs of key female populations affected by HIV in LMICs found that restrictive policy environments, stigma and discrimination in health care settings, gender inequality, and economic marginalization restrict access to services and undermine the ability to achieve reproductive intentions safely [110].Meanwhile, a systematic review of the contraceptive and abortion knowledge, attitudes, and practices of adolescents in LMICs to increase the understanding of the sexual and reproductive health dynamics that they face suggests severe limitations in the access to safe and effective methods of contraception and safe abortion services [111].
The findings from this scoping review also suggest several gaps in the existing literature on FP in Uganda. While the role of men in FP decision-making was explored in several articles, there was limited focus on vasectomy as an FP method. Most studies were conducted in urban areas. More research is needed to understand the views of those living in rural parts of the country. The influence of cultural and religious beliefs on FP behaviors also warrants further investigation. Additionally, there were limited studies about using newer technologies, such as mobile phones, social media, and telehealth, to improve reproductive health and FP.
Application to programing, service provision, and policy
By understanding the diverse range of influences across different socio-ecological levels, stakeholders can develop tailored interventions that address specific barriers and promote informed decision-making.One key area where the findings can impact programming is in the development of tailored interventions.Understanding the unique socio-demographic characteristics and knowledge gaps among different demographic groups allows for the design of targeted interventions that address specific needs.Furthermore, recognizing the importance of interpersonal communication in FP decision-making suggests the need to strengthen communication skills within couples and promote supportive social networks.This could involve implementing couples counseling sessions, establishing community-based support groups, or initiating peer-to-peer education programs.
Moreover, addressing cultural norms and values is essential in promoting FP and CAC services.By engaging with local communities to challenge harmful stereotypes and myths surrounding contraception and abortion, programs can foster a supportive environment for reproductive health decision-making.Additionally, ensuring the accessibility and quality of FP and CAC services is crucial for promoting uptake.Programs can focus on expanding service availability in underserved areas and reducing barriers such as cost, distance, and stigma.
Leveraging evidence from this review can support advocacy efforts for policies that promote reproductive health and rights.Policy programming can also be used for strengthening healthcare infrastructure and implementing policies that safeguard individuals' access to comprehensive reproductive healthcare services.
Strengths and limitations
A strength of this study is that it is the first comprehensive scoping review of qualitative literature in Uganda focused on FP that the authors are aware of.The many databases searched allowed us to capture a wide range of studies focused on FP.In terms of limitations, hand searching or reviewing grey literature sources may have increased the number of articles discovered.Furthermore, the studies included in this review were conducted only within Uganda, which may limit the generalizability of the findings outside the country.
Conclusions
Overall, this scoping review provides a comprehensive overview of the existing literature on FP in Uganda, using the EST as a guiding framework, and identifies key themes and gaps for future research. By addressing factors at multiple levels, including individual, interpersonal, community, institutional, and policy levels, stakeholders can develop holistic interventions that promote reproductive health. The findings highlight the importance of addressing attitudes, cultural norms, and behavior in increasing FP uptake and improving reproductive health outcomes in Uganda. There is a clear need for comprehensive interventions that address socio-cultural norms, expand access to information and services, and tackle structural barriers to sexual and reproductive health and rights (SRHR) across various contexts.
The findings from this scoping review can be used to inform future FP programming and policy in Uganda, with the potential to improve the accessibility, quality, and utilization of FP services, particularly among marginalized populations. Future research should address the identified gaps in the literature, such as vasectomy as an FP option and the influence of cultural and religious beliefs on FP behaviors, especially in rural areas. An exploration of Ugandan views on newer technologies to improve reproductive health and FP is also warranted. Targeted interventions that involve healthcare providers and address gender norms and dynamics within communities may be vital to increasing FP uptake in Uganda. Moving forward, interdisciplinary collaboration and longitudinal research are needed to advance our understanding of FP dynamics and improve reproductive health outcomes in Uganda.
Fig 1 shows the PRISMA diagram produced using the PRISMA Flow Diagram tool [32]. Sixty-four of the articles focused on FP; seven specifically focused on CAC. Fig 2 illustrates the number of articles by publication year of the included FP studies. The earliest FP study was published in 2005, and the most recent one in 2023. A majority of the FP studies retrieved from the search results were published between 2013 and 2023 (Fig 2). Fig 3 indicates the characteristics of FP study participants. Adults and women are the most common study population types, with 32 studies including adults and 51 studies including women specifically (Fig 3). Twenty-one articles focused on adolescents. Studies also identified study populations of key stakeholders (n = 9) and healthcare workers (n = 10) (Fig 3). Focus group discussions (FGDs) (n = 34) and in-depth interviews (IDIs) (n = 35) were the most common data collection types (Fig 4). Key informant interviews (KIIs) were used in 12 studies, and seven studies used semi-structured interviews (SSIs) (Fig 4). Overall, the quality of the included studies was moderate to high, with most studies meeting the majority of the CASP [31] criteria. The studies in this scoping review covered a broad range of primary aims related to FP in Uganda (Fig 4). Fig 5 presents a comprehensive overview of the EST for FP in Uganda, illustrating the interconnectedness of factors at various socio-ecological levels. At the center of the framework is the individual, influenced by factors such as knowledge, attitudes, and socio-demographic characteristics. Interpersonal relationships and social support networks within communities surround the individual, impacting FP decisions. Cultural norms, values, and the accessibility of services at the community level further shape practices. The outermost layer comprises institutional and policy factors, including the quality of healthcare services and national policies, which provide the broader context for FP practices. Understanding these interactions is crucial for developing targeted interventions to enhance reproductive health outcomes in Uganda.
Fig 4. Study primary aims. The aims are drawn from the most common words used by authors in their title or aims description, such as knowledge, experiences, and expectations related to family planning. Studies with multiple primary aims are counted multiple times in the figure. https://doi.org/10.1371/journal.pgph.0003313.g004
Table 1. Representative quotes from articles organized by Ecological Systems Theory levels and factors.
FGD = focus group discussion, FP = family planning. | 2024-07-05T05:08:31.032Z | 2024-07-03T00:00:00.000 | {
"year": 2024,
"sha1": "529a435d7be4f989ea2001031313c72f927006cd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pgph.0003313",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "529a435d7be4f989ea2001031313c72f927006cd",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253366262 | pes2o/s2orc | v3-fos-license | Analysis of research on interventions for the prevention of safety accidents involving infants: a scoping review
Purpose This study aimed to conduct a scoping review of studies on interventions for the prevention of safety accidents involving infants. Methods The scoping review method by Arksey and O'Malley was used to conduct an overview based on information spanning a wide range of fields. Multiple electronic databases (PubMed, CINAHL, RISS, and KISS) were searched for articles written in English or Korean published from 2012 to the present on safety accident prevention interventions. A total of 2,137 papers were found, and 20 papers were ultimately analyzed. Results Most studies were conducted in the United States (55.0%) and in the medical field (45.0%), and most were experimental studies (35.0%). The results were organized across five categories: 1) preventive precautions, 2) characteristics of children's developmental stages, 3) encouraging voluntary participation, 4) continuity of interventions, and 5) teaching methods. Conclusion Safety accident prevention interventions should cover the establishment of a safe home environment, include voluntary participation, and provide routine follow-up interventions. Additionally, practical training and teaching methods that incorporate feedback rather than a lecture-oriented approach should be adopted.
[...] are interested in home and child safety. Several Korean and international studies have delivered educational programs on home safety to the parents of infants and toddlers and analyzed their effectiveness [4,7,10,11]. Furthermore, many studies on interventions for sleep safety [12][13][14][15], home fire safety accidents [8], first aid [16,17], and car safety accidents [18] have been published, and educational interventions aimed at parents in these studies were confirmed to have a positive effect on the prevention of safety accidents involving children.
Supporting and helping parents to ensure the safety and health of their children is an important task in childcare; therefore, effective interventions for safety accident prevention should be provided for parents [5,[13][14][15]. It is necessary to understand the importance of education to prevent accidents involving children practically and comprehensively.
A scoping review is a literature review method in which the characteristics, scope, and key concepts related to a specific research question are summarized and mapped to obtain an overview of findings across a wide range of fields [19,20]. Therefore, this study aimed to obtain an overview of studies related to interventions for the prevention of safety accidents involving infants using the scoping review method and to collect basic data for the development of future interventions for parents to prevent safety accidents. The specific objectives of this study were as follows: 1) to identify the general characteristics of relevant studies and 2) to identify the characteristics of interventions to prevent safety accidents involving infants.
METHODS
Ethics statement: This study was a literature review of previously published studies and was therefore exempt from institutional review board approval.
Study Design
This study conducted a literature review using the scoping review method proposed by Arksey and O'Malley [20]. Scoping reviews can be used to obtain an overview of information for a broad range of fields [19]. The process of conducting a scoping review involves the following steps: 1) identify the research question, 2) identify relevant studies, 3) select studies to include, 4) chart the data, and 5) collate, summarize, and report the results [20]. This study followed the criteria of the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist [21].
1) Identifying the research question
The research questions in scoping reviews are broadly designed to encompass a wide range of research areas [20]. According to Peters et al. [22], the population, concept, and context of the research question should be defined; in this study, the following definitions were applied: 1) population: infants/toddlers and their parents; 2) concept: safety accidents and interventions; 3) context: home. Therefore, the research question in this study was "What are the main findings of published academic articles on interventions aimed at parents to prevent safety accidents involving infants and toddlers in the home?"
2) Identifying relevant studies
According to the guidelines of Seo and Kim [19], which suggest conducting a search using a limited set of information sources, this study searched four major international and Korean search engines. The international search engines were PubMed and CINAHL, and the Korean search engines were RISS and KISS. The main keywords were "parent", "infant", "toddler", "safety", "program", "education", and "intervention". The literature search was conducted using MeSH terms, CINAHL subject headings, and natural language and Boolean operators, which are the main indices for each database, based on the advice of the librarian of the medical library at the researchers' university. To understand the latest trends, the publication period was limited to the last 10 years (from 2012 to the present). The search strategy adopted in this study is shown in Table 1. As a result of the search conducted on May 26, 2022, a total of 2,137 papers were identified, including 317 from PubMed, 180 from CINAHL, 27 from RISS, and 1,613 from KISS. Duplicate articles were removed using Endnote 20 (Clarivate, Philadelphia, PA, USA), with a total of 2,008 articles being removed. A total of 129 articles ultimately required review, and their titles and abstracts were collected (Figure 1).
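Deduplication was performed in EndNote 20; purely as a hedged illustration of what that step amounts to (assuming a hypothetical CSV export with title and year columns, not the actual EndNote workflow), the logic could be sketched as follows.

```python
# Illustrative equivalent of the EndNote deduplication step, assuming the retrieved
# records were exported to a CSV with "title" and "year" columns (hypothetical file).
import pandas as pd

refs = pd.read_csv("search_export.csv")   # hypothetical export of all retrieved records

# Normalise titles so that case, punctuation and spacing differences
# do not hide duplicate records coming from different databases.
refs["title_key"] = (
    refs["title"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.split()
    .str.join(" ")
)

screening_set = refs.drop_duplicates(subset=["title_key", "year"])
print(f"{len(refs) - len(screening_set)} duplicates removed, "
      f"{len(screening_set)} records left to screen")
```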
3) Selecting studies to include
The inclusion criteria of this study were as follows: 1) journal studies related to interventions for the prevention of safety accidents involving infants, 2) studies published from 2012 to 2022, 3) peer-reviewed studies, and 4) English- or Korean-language studies. The exclusion criteria were as follows: 1) editorials, letters to the editor, and 2) conference proceedings. To control for selection bias, five in-person meetings were held to reach a consensus after an independent review by the researchers. In the first phase, the titles and abstracts were reviewed. In the second phase, the full texts were reviewed. In the third phase, the selected articles were reviewed for data collection. Finally, Gough's weight of evidence (WOE) (2007) was used for the quality evaluation of the selected articles. The logic of the study, quality of the research design, and quality of the research data were evaluated according to the WOE criteria.
4) Charting the data
The charting format based on the criteria of Peters et al. [22] was as follows: 1) first author, 2) year of publication, 3) country of origin, 4) academic field, 5) study design, 6) study aim, 7) study population and sample size, 8) intervention content, 9) duration of the intervention, 10) outcome variables, and 11) key findings. The researchers independently charted 20 articles using Microsoft Excel (Version 2013; Microsoft, Redmond, WA, USA), and each opinion was discussed over the course of three research meetings to compile a single chart. Table 2 summarizes the charts used in this study.
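The charting form itself was a Microsoft Excel sheet; a minimal sketch of the same record structure in code (field names paraphrased from the eleven items above, purely for illustration) could look like the following.

```python
# Sketch of the 11-field charting record described above; the review itself used
# an Excel extraction form, so this is only an illustrative representation.
from typing import TypedDict

class ChartingRecord(TypedDict):
    first_author: str
    publication_year: int
    country_of_origin: str
    academic_field: str
    study_design: str
    study_aim: str
    population_and_sample_size: str
    intervention_content: str
    intervention_duration: str
    outcome_variables: str
    key_findings: str

# Purely hypothetical example row (not one of the 20 reviewed studies).
example: ChartingRecord = {
    "first_author": "Doe",
    "publication_year": 2018,
    "country_of_origin": "USA",
    "academic_field": "Nursing",
    "study_design": "Randomized controlled trial",
    "study_aim": "Evaluate a home-safety education program for parents",
    "population_and_sample_size": "Mothers of infants, n = 120",
    "intervention_content": "Video education plus a home safety kit",
    "intervention_duration": "4 weeks",
    "outcome_variables": "Safety knowledge, home hazard score",
    "key_findings": "Improved knowledge and safer home environment",
}
```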
5) Collating, summarizing, and reporting the results
The researchers analyzed and summarized the study's results by faithfully following the guidelines for scoping reviews [20]. Tables and figures were prepared to outline the research results according to the purpose of this study.
Search and Selection of Scoping Review
As a result of a search conducted on May 26, 2022, a total of 2,137 papers were identified, including 317 from PubMed, 180 from CINAHL, 27 from RISS, and 1,613 from KISS. A total of 2,008 duplicate articles were removed using Endnote 20 (Clarivate). Thus, 129 articles required review, and the titles and abstracts of all remaining articles were collected ( Figure 1).
After the titles and abstracts of the 129 articles were reviewed, 97 articles were excluded. The 97 papers included 42 papers related to safety accidents at institutions or kindergartens, 31 papers related to preschool-age children, nine papers related to school-age children, five non-English- or Korean-language papers, a letter, a paper from conference proceedings, and eight papers that did not match the research question. Thirty-two articles were reviewed after obtaining the full texts. Seventeen articles were excluded since they did not meet the inclusion criteria, and five additional articles were included from the references of the reviewed articles. Twenty articles were ultimately reviewed for data collection. As a result of a close review of the criteria for inclusion and exclusion, and focusing on the research design, subjects, major variables, and major results as items for rough data recording, the researchers ultimately selected 20 articles to be analyzed in this study (Figure 1). All 20 articles were high-quality based on the WOE criteria [23].
General Characteristics of the Reviewed Research
The general characteristics of the reviewed studies are presented in Table 3. The USA had the largest percentage of published studies (55.0%), followed by South Korea (25.0%). By academic field, the highest percentage of articles came from the field of medicine (45.0%), followed by nursing, public health, social welfare, and education. A total of 55.0% of the studies were specifically on mothers, while 40.0% of studies were on parents more broadly.
Falls and fire safety were the most common intervention topics, followed by child safety seats (car restraint systems [CRS]), emergency treatment, and poison prevention. A total of 36.4% of the studies provided an intervention program. Intervention materials most often included videos, followed by safety equipment, supervision, home visits, telephone conversations, text messages, booklets, and PowerPoint slides.
Characteristics of Interventions for the Prevention of Safety Accidents Involving Infants
Five characteristics were identified (Table 4). They included precautions to prevent safety accidents at home, characteristics of children's developmental stages, encouraging voluntary participation, continuity of interventions, and teaching methods to strengthen safety accident prevention competency.
1) Preventive precautions
Interventions for the prevention of safety accidents involving infants should cover precautions for preventing safety accidents at home.
2) Characteristics of children's developmental stages
Interventions for the prevention of safety accidents should reflect the characteristics of a child's developmental stage, including cognitive development, which involves exploring various objects through the senses and interacting through active play [3,4,7,8,10,11,16,24,26], as well as physical and motor development [3,7,9-11,15,16,18,24,26,27]. Accident prevention and home safety measures should be applied differently depending on the child's developmental stage, since younger children tend to have accidents while using their bodies, for example when walking or running [10].
5) Teaching methods
Appropriate teaching methods should be used to strengthen the effectiveness of interventions aimed at parents for the prevention of safety accidents involving infants. In particular, providing a package that contains safety products is an effective way for parents to learn practical home safety management methods [3,7,8,12,14,16,18,24,26]. Effective teaching methods also include hands-on experience in managing home safety for infants and toddlers [7,18,24,27], and programs can improve parents' competency in safety accident prevention by providing feedback on their prevention practices [17,26,27,29].
Third, interventions for the prevention of safety accidents involving infants were more effective when parents participated on a voluntary basis. The voluntary participation of parents has been reported to have positive effects in various areas of education, such as cognitive and social-emotional development and the reduction of problematic behavior in infants and toddlers [31,32]. Moreover, parents who voluntarily participate in their child's education tend to have a better understanding of their child's development and are more likely to positively change their attitude toward their child's education [33]. Various information and communication technologies, including mobile phones [25,28,29], e-mail [25], and mobile applications [9], can be used to encourage parents raising young children to participate. Safety equipment [7,29], home visits [11,16], and supervision [29] should also be incorporated to enhance the active participation of parents. According to a study by Han and Chae [5], parents of infants prefer home visits for one-on-one intensive interventions, as well as interventions that use mobile applications, which transcend time and space, thereby supporting the results of this study.
Fourth, interventions for the prevention of safety accidents involving infants should include additional ongoing interventions. Additional interventions improve parents' knowledge of and attitudes toward safety education and have a positive effect on the prevention of accidents involving infants at home [10]. Perez et al. [17] also reported that additional interventions effectively improved CRS knowledge. Interventions provided multiple times at 2-week intervals can supplement the effect of previous interventions and improve their content, thereby effectively improving their quality [8]. In particular, interventions repeated daily over the course of 60 days were found to improve parental behaviors [25]. Therefore, interventions for the prevention of safety accidents during infancy should not only provide information but also consider including additional interventions to enhance parents' ability to implement safety accident prevention strategies. However, no papers have suggested guidelines for how to provide such further education, including the recommended duration and frequency of additional interventions. Therefore, further research should be conducted to establish standard guidelines for continued intervention.
Finally, providing practical experience and feedback as a teaching method for safety accident prevention competency reinforcement is a notable factor in interventions for the prevention of safety accidents involving infants. Honda et al. [34] reported that providing practical experience reinforces an active attitude toward injury prevention and safety behavior practices. There is a reported demand for education related to practical management methods among parents of infants and toddlers to reinforce the safety of their children at home [5]. The results of this study also showed that demonstrations and practice using safety equipment are effective educational strategies [7,29].
Feedback is one of the most important factors in educational interventions due to its high effectiveness and relative ease of implementation. It is a key element in the success of teaching methods and improves learning outcomes [35,36]. According to a study by Mello et al. [29] analyzed in this review, the results were positive after feedback was shared by a supervisor during interventions for the prevention of safety accidents aimed at young mothers. Faculty feedback is a teaching method that encourages complete learning, and previous studies have reported high satisfaction among trainees using this method; therefore, these findings support the results of this study [37]. Since there are various types of safety accidents for infants and toddlers, and the physical environment of each family differs [5], the adoption of a complex teaching method for safety education aimed at parents is believed to be particularly effective.
This study is meaningful since it laid a foundation for the future development of effective interventions for the prevention of safety accidents involving infants by providing a general analysis of the existing research. However, since only studies published in English and Korean were included, data collection bias may have occurred. Many papers identified in the search were also excluded because they did not specifically cover interventions for the prevention of safety accidents involving infants and toddlers. Therefore, it is necessary to reconfirm the results of this study after further studies have been conducted.
CONCLUSION
This study used the scoping review method to obtain an overview of the latest findings on interventions for the prevention of safety accidents involving infants, based on 20 papers published over the past 10 years. The most important components of effective interventions for safety accident prevention were the creation of a safe home environment, the voluntary participation of the trainees, and continuous additional interventions rather than a one-time educational intervention. Moreover, practical training and teaching methods that incorporate feedback, rather than lecture-oriented methods, should be included.
Based on these results, when developing interventions to prevent safety accidents involving infants in the future, the goal should be to strengthen parents' competency in creating and maintaining a safe home environment for their children. Additionally, to maintain the safety of infants and toddlers, teaching methods using various media, demonstrations, practice, and supervision strategies have been proposed so that parents can participate in self-directed education whenever possible. Finally, parents' level of achievement in terms of their competency related to safety accident prevention should be further examined, and continuous education should be considered at the intervention development stage so that parents of infants and toddlers can ensure their children's safety. However, while this study was conducted to analyze interventions for the prevention of safety accidents, the research analyzed in this study was mainly limited to education. Therefore, interventions other than educational interventions should be examined in future research.
"year": 2022,
"sha1": "823e406f32ce66fed446d956faf0ae5f886ca55d",
"oa_license": "CCBYNCND",
"oa_url": "https://www.e-chnr.org/upload/pdf/chnr-28-4-234.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38d1fb80031d92b047ba65788c4fd56090df08a5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |